00:00:00.001 Started by upstream project "autotest-per-patch" build number 132342 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.115 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.116 The recommended git tool is: git 00:00:00.116 using credential 00000000-0000-0000-0000-000000000002 00:00:00.118 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.169 Fetching changes from the remote Git repository 00:00:00.171 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.220 Using shallow fetch with depth 1 00:00:00.220 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.220 > git --version # timeout=10 00:00:00.258 > git --version # 'git version 2.39.2' 00:00:00.258 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.282 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.282 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:07.384 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:07.394 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:07.404 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:07.404 > git config core.sparsecheckout # timeout=10 00:00:07.414 > git read-tree -mu HEAD # timeout=10 00:00:07.426 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:07.451 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:07.451 > git 
rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:07.564 [Pipeline] Start of Pipeline 00:00:07.574 [Pipeline] library 00:00:07.575 Loading library shm_lib@master 00:00:07.576 Library shm_lib@master is cached. Copying from home. 00:00:07.588 [Pipeline] node 00:00:07.596 Running on WFP8 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:07.598 [Pipeline] { 00:00:07.607 [Pipeline] catchError 00:00:07.608 [Pipeline] { 00:00:07.616 [Pipeline] wrap 00:00:07.621 [Pipeline] { 00:00:07.626 [Pipeline] stage 00:00:07.626 [Pipeline] { (Prologue) 00:00:07.776 [Pipeline] sh 00:00:08.061 + logger -p user.info -t JENKINS-CI 00:00:08.078 [Pipeline] echo 00:00:08.079 Node: WFP8 00:00:08.088 [Pipeline] sh 00:00:08.388 [Pipeline] setCustomBuildProperty 00:00:08.401 [Pipeline] echo 00:00:08.403 Cleanup processes 00:00:08.408 [Pipeline] sh 00:00:08.692 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:08.692 912931 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:08.705 [Pipeline] sh 00:00:08.987 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:08.987 ++ grep -v 'sudo pgrep' 00:00:08.987 ++ awk '{print $1}' 00:00:08.987 + sudo kill -9 00:00:08.987 + true 00:00:09.005 [Pipeline] cleanWs 00:00:09.017 [WS-CLEANUP] Deleting project workspace... 00:00:09.017 [WS-CLEANUP] Deferred wipeout is used... 
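The workspace-cleanup step above (pgrep, grep -v, awk, kill -9, then `+ true`) can be sketched as follows. This is a minimal, hedged reconstruction with a placeholder workspace path, not the exact Jenkins shell step:

```shell
# Minimal sketch of the stale-process cleanup step above (path is a
# placeholder): list PIDs whose command line mentions the SPDK workspace,
# excluding the pgrep invocation itself, then kill them. The trailing
# '|| true' mirrors the '+ true' in the log, keeping the step from
# failing when no stale processes exist.
WORKSPACE="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk"
pids=$(pgrep -af "$WORKSPACE" | grep -v 'pgrep' | awk '{print $1}')
[ -n "$pids" ] && kill -9 $pids || true
echo "stale pids: ${pids:-none}"
```

In the log the `kill -9` is invoked with an empty PID list (nothing matched), which is exactly why the `+ true` fallback appears on the next line.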
00:00:09.023 [WS-CLEANUP] done 00:00:09.028 [Pipeline] setCustomBuildProperty 00:00:09.044 [Pipeline] sh 00:00:09.327 + sudo git config --global --replace-all safe.directory '*' 00:00:09.430 [Pipeline] httpRequest 00:00:09.822 [Pipeline] echo 00:00:09.824 Sorcerer 10.211.164.20 is alive 00:00:09.834 [Pipeline] retry 00:00:09.836 [Pipeline] { 00:00:09.851 [Pipeline] httpRequest 00:00:09.856 HttpMethod: GET 00:00:09.856 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:09.857 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:09.870 Response Code: HTTP/1.1 200 OK 00:00:09.870 Success: Status code 200 is in the accepted range: 200,404 00:00:09.871 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:14.082 [Pipeline] } 00:00:14.101 [Pipeline] // retry 00:00:14.108 [Pipeline] sh 00:00:14.390 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:14.407 [Pipeline] httpRequest 00:00:14.749 [Pipeline] echo 00:00:14.751 Sorcerer 10.211.164.20 is alive 00:00:14.760 [Pipeline] retry 00:00:14.762 [Pipeline] { 00:00:14.801 [Pipeline] httpRequest 00:00:14.817 HttpMethod: GET 00:00:14.821 URL: http://10.211.164.20/packages/spdk_6745f139b200563199b98ad5eb6bf424010a949d.tar.gz 00:00:14.824 Sending request to url: http://10.211.164.20/packages/spdk_6745f139b200563199b98ad5eb6bf424010a949d.tar.gz 00:00:14.839 Response Code: HTTP/1.1 200 OK 00:00:14.841 Success: Status code 200 is in the accepted range: 200,404 00:00:14.842 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_6745f139b200563199b98ad5eb6bf424010a949d.tar.gz 00:00:55.995 [Pipeline] } 00:00:56.015 [Pipeline] // retry 00:00:56.022 [Pipeline] sh 00:00:56.309 + tar --no-same-owner -xf spdk_6745f139b200563199b98ad5eb6bf424010a949d.tar.gz 00:00:58.860 [Pipeline] sh 00:00:59.146 + git -C spdk log 
--oneline -n5 00:00:59.146 6745f139b bdev: Relocate _bdev_memory_domain_io_get_buf_cb() close to _bdev_io_submit_ext() 00:00:59.146 866ba5ffe bdev: Factor out checking bounce buffer necessity into helper function 00:00:59.146 57b682926 bdev: Add spdk_dif_ctx and spdk_dif_error into spdk_bdev_io 00:00:59.146 3b58329b1 bdev: Use data_block_size for upper layer buffer if no_metadata is true 00:00:59.146 9b64b1304 bdev: Add APIs get metadata config via desc depending on hide_metadata option 00:00:59.158 [Pipeline] } 00:00:59.173 [Pipeline] // stage 00:00:59.183 [Pipeline] stage 00:00:59.186 [Pipeline] { (Prepare) 00:00:59.210 [Pipeline] writeFile 00:00:59.226 [Pipeline] sh 00:00:59.512 + logger -p user.info -t JENKINS-CI 00:00:59.525 [Pipeline] sh 00:00:59.810 + logger -p user.info -t JENKINS-CI 00:00:59.822 [Pipeline] sh 00:01:00.108 + cat autorun-spdk.conf 00:01:00.108 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:00.108 SPDK_TEST_NVMF=1 00:01:00.108 SPDK_TEST_NVME_CLI=1 00:01:00.108 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:00.108 SPDK_TEST_NVMF_NICS=e810 00:01:00.108 SPDK_TEST_VFIOUSER=1 00:01:00.108 SPDK_RUN_UBSAN=1 00:01:00.108 NET_TYPE=phy 00:01:00.116 RUN_NIGHTLY=0 00:01:00.121 [Pipeline] readFile 00:01:00.145 [Pipeline] withEnv 00:01:00.147 [Pipeline] { 00:01:00.161 [Pipeline] sh 00:01:00.449 + set -ex 00:01:00.449 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:01:00.449 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:00.449 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:00.449 ++ SPDK_TEST_NVMF=1 00:01:00.449 ++ SPDK_TEST_NVME_CLI=1 00:01:00.449 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:00.449 ++ SPDK_TEST_NVMF_NICS=e810 00:01:00.449 ++ SPDK_TEST_VFIOUSER=1 00:01:00.449 ++ SPDK_RUN_UBSAN=1 00:01:00.449 ++ NET_TYPE=phy 00:01:00.449 ++ RUN_NIGHTLY=0 00:01:00.449 + case $SPDK_TEST_NVMF_NICS in 00:01:00.449 + DRIVERS=ice 00:01:00.449 + [[ tcp == \r\d\m\a ]] 00:01:00.449 + [[ -n ice ]] 00:01:00.449 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw 
iw_cxgb4 00:01:00.449 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:01:00.449 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:01:00.449 rmmod: ERROR: Module irdma is not currently loaded 00:01:00.449 rmmod: ERROR: Module i40iw is not currently loaded 00:01:00.449 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:01:00.449 + true 00:01:00.449 + for D in $DRIVERS 00:01:00.449 + sudo modprobe ice 00:01:00.449 + exit 0 00:01:00.464 [Pipeline] } 00:01:00.479 [Pipeline] // withEnv 00:01:00.484 [Pipeline] } 00:01:00.500 [Pipeline] // stage 00:01:00.510 [Pipeline] catchError 00:01:00.512 [Pipeline] { 00:01:00.525 [Pipeline] timeout 00:01:00.525 Timeout set to expire in 1 hr 0 min 00:01:00.527 [Pipeline] { 00:01:00.541 [Pipeline] stage 00:01:00.544 [Pipeline] { (Tests) 00:01:00.558 [Pipeline] sh 00:01:00.871 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:00.871 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:00.871 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:00.871 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:01:00.871 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:00.871 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:00.871 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:01:00.871 + [[ ! 
-d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:00.871 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:00.871 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:00.871 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:01:00.871 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:00.871 + source /etc/os-release 00:01:00.871 ++ NAME='Fedora Linux' 00:01:00.871 ++ VERSION='39 (Cloud Edition)' 00:01:00.871 ++ ID=fedora 00:01:00.871 ++ VERSION_ID=39 00:01:00.871 ++ VERSION_CODENAME= 00:01:00.871 ++ PLATFORM_ID=platform:f39 00:01:00.871 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:01:00.871 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:00.871 ++ LOGO=fedora-logo-icon 00:01:00.871 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:01:00.871 ++ HOME_URL=https://fedoraproject.org/ 00:01:00.871 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:01:00.871 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:00.871 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:00.871 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:00.871 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:01:00.871 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:00.871 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:01:00.872 ++ SUPPORT_END=2024-11-12 00:01:00.872 ++ VARIANT='Cloud Edition' 00:01:00.872 ++ VARIANT_ID=cloud 00:01:00.872 + uname -a 00:01:00.872 Linux spdk-wfp-08 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:01:00.872 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:01:03.413 Hugepages 00:01:03.413 node hugesize free / total 00:01:03.413 node0 1048576kB 0 / 0 00:01:03.413 node0 2048kB 0 / 0 00:01:03.413 node1 1048576kB 0 / 0 00:01:03.413 node1 2048kB 0 / 0 00:01:03.413 00:01:03.413 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:03.413 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:01:03.413 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 
00:01:03.413 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:01:03.413 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:01:03.413 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:01:03.413 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:01:03.413 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:01:03.413 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:01:03.413 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:01:03.413 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:01:03.413 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:01:03.413 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:01:03.413 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:01:03.413 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:01:03.413 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:01:03.413 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:01:03.413 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:01:03.413 + rm -f /tmp/spdk-ld-path 00:01:03.413 + source autorun-spdk.conf 00:01:03.413 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:03.413 ++ SPDK_TEST_NVMF=1 00:01:03.413 ++ SPDK_TEST_NVME_CLI=1 00:01:03.413 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:03.413 ++ SPDK_TEST_NVMF_NICS=e810 00:01:03.413 ++ SPDK_TEST_VFIOUSER=1 00:01:03.413 ++ SPDK_RUN_UBSAN=1 00:01:03.413 ++ NET_TYPE=phy 00:01:03.413 ++ RUN_NIGHTLY=0 00:01:03.413 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:03.413 + [[ -n '' ]] 00:01:03.413 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:03.413 + for M in /var/spdk/build-*-manifest.txt 00:01:03.413 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:01:03.413 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:03.673 + for M in /var/spdk/build-*-manifest.txt 00:01:03.673 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:03.673 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:03.673 + for M in /var/spdk/build-*-manifest.txt 00:01:03.673 + [[ -f 
/var/spdk/build-repo-manifest.txt ]] 00:01:03.673 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:03.673 ++ uname 00:01:03.673 + [[ Linux == \L\i\n\u\x ]] 00:01:03.673 + sudo dmesg -T 00:01:03.673 + sudo dmesg --clear 00:01:03.673 + dmesg_pid=914043 00:01:03.673 + [[ Fedora Linux == FreeBSD ]] 00:01:03.673 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:03.673 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:03.673 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:03.673 + [[ -x /usr/src/fio-static/fio ]] 00:01:03.673 + export FIO_BIN=/usr/src/fio-static/fio 00:01:03.673 + FIO_BIN=/usr/src/fio-static/fio 00:01:03.673 + sudo dmesg -Tw 00:01:03.673 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:03.673 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:03.673 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:03.673 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:03.673 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:03.673 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:03.673 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:03.673 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:03.673 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:03.673 06:57:08 -- common/autotest_common.sh@1690 -- $ [[ n == y ]] 00:01:03.673 06:57:08 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:03.673 06:57:08 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:03.673 06:57:08 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1 00:01:03.673 06:57:08 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_TEST_NVME_CLI=1 00:01:03.673 06:57:08 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp 
00:01:03.673 06:57:08 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_TEST_NVMF_NICS=e810 00:01:03.674 06:57:08 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@6 -- $ SPDK_TEST_VFIOUSER=1 00:01:03.674 06:57:08 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@7 -- $ SPDK_RUN_UBSAN=1 00:01:03.674 06:57:08 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@8 -- $ NET_TYPE=phy 00:01:03.674 06:57:08 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@9 -- $ RUN_NIGHTLY=0 00:01:03.674 06:57:08 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:01:03.674 06:57:08 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:03.674 06:57:08 -- common/autotest_common.sh@1690 -- $ [[ n == y ]] 00:01:03.674 06:57:08 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:01:03.674 06:57:08 -- scripts/common.sh@15 -- $ shopt -s extglob 00:01:03.674 06:57:08 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:03.674 06:57:08 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:03.674 06:57:08 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:03.674 06:57:08 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:03.674 06:57:08 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:03.674 06:57:08 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:03.674 06:57:08 -- paths/export.sh@5 -- $ export PATH 00:01:03.674 06:57:08 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:03.674 06:57:08 -- common/autobuild_common.sh@485 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:01:03.674 06:57:08 -- common/autobuild_common.sh@486 -- $ date +%s 00:01:03.674 06:57:08 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1732082228.XXXXXX 00:01:03.674 06:57:08 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1732082228.YkJ39M 00:01:03.674 06:57:08 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]] 00:01:03.674 06:57:08 -- 
common/autobuild_common.sh@492 -- $ '[' -n '' ']' 00:01:03.674 06:57:08 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:01:03.674 06:57:08 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:03.674 06:57:08 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:03.674 06:57:08 -- common/autobuild_common.sh@502 -- $ get_config_params 00:01:03.674 06:57:08 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:01:03.674 06:57:08 -- common/autotest_common.sh@10 -- $ set +x 00:01:03.934 06:57:08 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:01:03.934 06:57:08 -- common/autobuild_common.sh@504 -- $ start_monitor_resources 00:01:03.934 06:57:08 -- pm/common@17 -- $ local monitor 00:01:03.934 06:57:08 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:03.934 06:57:08 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:03.934 06:57:08 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:03.934 06:57:08 -- pm/common@21 -- $ date +%s 00:01:03.934 06:57:08 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:03.934 06:57:08 -- pm/common@21 -- $ date +%s 00:01:03.934 06:57:08 -- pm/common@25 -- $ sleep 1 00:01:03.934 06:57:08 -- pm/common@21 -- $ date +%s 00:01:03.934 06:57:08 -- pm/common@21 -- $ date +%s 00:01:03.934 06:57:08 -- pm/common@21 -- $ 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732082228 00:01:03.934 06:57:08 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732082228 00:01:03.934 06:57:08 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732082228 00:01:03.934 06:57:08 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732082228 00:01:03.934 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732082228_collect-vmstat.pm.log 00:01:03.934 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732082228_collect-cpu-load.pm.log 00:01:03.934 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732082228_collect-cpu-temp.pm.log 00:01:03.934 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732082228_collect-bmc-pm.bmc.pm.log 00:01:04.874 06:57:09 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT 00:01:04.874 06:57:09 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:04.874 06:57:09 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:04.874 06:57:09 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:04.874 06:57:09 -- spdk/autobuild.sh@16 -- $ date -u 00:01:04.874 Wed Nov 20 05:57:09 AM UTC 2024 00:01:04.874 06:57:09 -- spdk/autobuild.sh@17 -- $ git describe --tags 
00:01:04.874 v25.01-pre-194-g6745f139b 
00:01:04.874 06:57:09 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 
00:01:04.874 06:57:09 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 
00:01:04.874 06:57:09 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 
00:01:04.874 06:57:09 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']' 
00:01:04.874 06:57:09 -- common/autotest_common.sh@1109 -- $ xtrace_disable 
00:01:04.874 06:57:09 -- common/autotest_common.sh@10 -- $ set +x 
00:01:04.874 ************************************ 
00:01:04.874 START TEST ubsan 
00:01:04.874 ************************************ 
00:01:04.874 06:57:09 ubsan -- common/autotest_common.sh@1127 -- $ echo 'using ubsan' 
00:01:04.874 using ubsan 
00:01:04.874 
00:01:04.874 real 0m0.000s 
00:01:04.874 user 0m0.000s 
00:01:04.874 sys 0m0.000s 
00:01:04.874 06:57:09 ubsan -- common/autotest_common.sh@1128 -- $ xtrace_disable 
00:01:04.874 06:57:09 ubsan -- common/autotest_common.sh@10 -- $ set +x 
00:01:04.874 ************************************ 
00:01:04.874 END TEST ubsan 
00:01:04.874 ************************************ 
00:01:04.874 06:57:09 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 
00:01:04.874 06:57:09 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 
00:01:04.874 06:57:09 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 
00:01:04.874 06:57:09 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 
00:01:04.874 06:57:09 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 
00:01:04.874 06:57:09 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 
00:01:04.874 06:57:09 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 
00:01:04.874 06:57:09 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 
00:01:04.874 06:57:09 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared 
00:01:05.133 Using default SPDK env in 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:01:05.134 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:05.393 Using 'verbs' RDMA provider 00:01:18.547 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:01:30.764 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:01:30.764 Creating mk/config.mk...done. 00:01:30.764 Creating mk/cc.flags.mk...done. 00:01:30.764 Type 'make' to build. 00:01:30.764 06:57:34 -- spdk/autobuild.sh@70 -- $ run_test make make -j96 00:01:30.764 06:57:34 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']' 00:01:30.764 06:57:34 -- common/autotest_common.sh@1109 -- $ xtrace_disable 00:01:30.764 06:57:34 -- common/autotest_common.sh@10 -- $ set +x 00:01:30.764 ************************************ 00:01:30.764 START TEST make 00:01:30.764 ************************************ 00:01:30.764 06:57:34 make -- common/autotest_common.sh@1127 -- $ make -j96 00:01:30.764 make[1]: Nothing to be done for 'all'. 
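Steps such as `run_test ubsan echo 'using ubsan'` and `run_test make make -j96` above go through the harness's `run_test` helper, which prints the `START TEST` / `END TEST` banners visible in the log. A hedged sketch of that wrapper pattern (the real helper in autotest_common.sh also records timing and xtrace state, which is omitted here):

```shell
# Hedged sketch of a run_test-style wrapper: print a START banner, run
# the command with its arguments, print an END banner, and propagate the
# command's exit status. Only the banner-and-status behavior seen in the
# log is reproduced.
run_test() {
    name=$1; shift
    echo "START TEST $name"
    "$@"; rc=$?
    echo "END TEST $name"
    return $rc
}

run_test ubsan echo 'using ubsan'
```

Because the wrapper returns the wrapped command's status, a failing step fails the surrounding `set -e` shell just as a bare command would.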
00:01:32.150 The Meson build system 00:01:32.150 Version: 1.5.0 00:01:32.150 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:01:32.150 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:32.150 Build type: native build 00:01:32.150 Project name: libvfio-user 00:01:32.150 Project version: 0.0.1 00:01:32.150 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:01:32.150 C linker for the host machine: cc ld.bfd 2.40-14 00:01:32.150 Host machine cpu family: x86_64 00:01:32.150 Host machine cpu: x86_64 00:01:32.150 Run-time dependency threads found: YES 00:01:32.150 Library dl found: YES 00:01:32.150 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:01:32.150 Run-time dependency json-c found: YES 0.17 00:01:32.150 Run-time dependency cmocka found: YES 1.1.7 00:01:32.150 Program pytest-3 found: NO 00:01:32.150 Program flake8 found: NO 00:01:32.150 Program misspell-fixer found: NO 00:01:32.150 Program restructuredtext-lint found: NO 00:01:32.150 Program valgrind found: YES (/usr/bin/valgrind) 00:01:32.150 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:32.150 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:32.150 Compiler for C supports arguments -Wwrite-strings: YES 00:01:32.150 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:01:32.150 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:01:32.150 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:01:32.150 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:01:32.150 Build targets in project: 8 00:01:32.150 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:01:32.150 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:01:32.150 00:01:32.150 libvfio-user 0.0.1 00:01:32.150 00:01:32.150 User defined options 00:01:32.150 buildtype : debug 00:01:32.150 default_library: shared 00:01:32.150 libdir : /usr/local/lib 00:01:32.150 00:01:32.150 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:32.799 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:32.799 [1/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:01:32.799 [2/37] Compiling C object samples/lspci.p/lspci.c.o 00:01:32.799 [3/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:01:32.799 [4/37] Compiling C object samples/null.p/null.c.o 00:01:32.799 [5/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:01:32.799 [6/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:01:32.799 [7/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:01:32.799 [8/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:01:32.799 [9/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:01:32.799 [10/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:01:32.799 [11/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:01:32.799 [12/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:01:32.799 [13/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:01:32.799 [14/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:01:32.799 [15/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:01:32.799 [16/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:01:32.799 [17/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:01:32.799 [18/37] Compiling C object 
test/unit_tests.p/mocks.c.o 00:01:33.112 [19/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:01:33.112 [20/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:01:33.112 [21/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:01:33.112 [22/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:01:33.112 [23/37] Compiling C object samples/server.p/server.c.o 00:01:33.112 [24/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:01:33.112 [25/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:01:33.112 [26/37] Compiling C object samples/client.p/client.c.o 00:01:33.112 [27/37] Linking target samples/client 00:01:33.112 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:01:33.112 [29/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:01:33.112 [30/37] Linking target lib/libvfio-user.so.0.0.1 00:01:33.112 [31/37] Linking target test/unit_tests 00:01:33.112 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:01:33.112 [33/37] Linking target samples/gpio-pci-idio-16 00:01:33.112 [34/37] Linking target samples/server 00:01:33.112 [35/37] Linking target samples/shadow_ioeventfd_server 00:01:33.112 [36/37] Linking target samples/null 00:01:33.112 [37/37] Linking target samples/lspci 00:01:33.112 INFO: autodetecting backend as ninja 00:01:33.112 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:33.371 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:33.628 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:33.629 ninja: no work to do. 
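The libvfio-user install above uses Meson's staged-install pattern: the project is configured in an out-of-tree build directory, compiled with ninja, and then installed with `DESTDIR` set so the artifacts land inside the SPDK build tree rather than under `/usr/local`. A sketch of the command shape with shortened placeholder paths (only the shape matches the log, not the full workspace paths):

```shell
# Staged Meson install, as in the log: DESTDIR redirects the install
# root, '-C' points at the out-of-tree build directory. Paths below are
# illustrative placeholders.
BUILD="spdk/build/libvfio-user/build-debug"
STAGE="spdk/build/libvfio-user"
install_cmd="DESTDIR=$STAGE meson install --quiet -C $BUILD"
echo "$install_cmd"
```

Installing under `DESTDIR` keeps the CI host's system directories untouched, which is why the subsequent `ninja: no work to do.` run is safe to repeat.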
00:01:38.899 The Meson build system 00:01:38.899 Version: 1.5.0 00:01:38.899 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:01:38.899 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:01:38.899 Build type: native build 00:01:38.899 Program cat found: YES (/usr/bin/cat) 00:01:38.899 Project name: DPDK 00:01:38.899 Project version: 24.03.0 00:01:38.899 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:01:38.899 C linker for the host machine: cc ld.bfd 2.40-14 00:01:38.899 Host machine cpu family: x86_64 00:01:38.899 Host machine cpu: x86_64 00:01:38.899 Message: ## Building in Developer Mode ## 00:01:38.899 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:38.900 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:01:38.900 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:01:38.900 Program python3 found: YES (/usr/bin/python3) 00:01:38.900 Program cat found: YES (/usr/bin/cat) 00:01:38.900 Compiler for C supports arguments -march=native: YES 00:01:38.900 Checking for size of "void *" : 8 00:01:38.900 Checking for size of "void *" : 8 (cached) 00:01:38.900 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:01:38.900 Library m found: YES 00:01:38.900 Library numa found: YES 00:01:38.900 Has header "numaif.h" : YES 00:01:38.900 Library fdt found: NO 00:01:38.900 Library execinfo found: NO 00:01:38.900 Has header "execinfo.h" : YES 00:01:38.900 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:01:38.900 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:38.900 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:38.900 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:38.900 Run-time dependency openssl found: YES 3.1.1 00:01:38.900 Run-time 
dependency libpcap found: YES 1.10.4 00:01:38.900 Has header "pcap.h" with dependency libpcap: YES 00:01:38.900 Compiler for C supports arguments -Wcast-qual: YES 00:01:38.900 Compiler for C supports arguments -Wdeprecated: YES 00:01:38.900 Compiler for C supports arguments -Wformat: YES 00:01:38.900 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:38.900 Compiler for C supports arguments -Wformat-security: NO 00:01:38.900 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:38.900 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:38.900 Compiler for C supports arguments -Wnested-externs: YES 00:01:38.900 Compiler for C supports arguments -Wold-style-definition: YES 00:01:38.900 Compiler for C supports arguments -Wpointer-arith: YES 00:01:38.900 Compiler for C supports arguments -Wsign-compare: YES 00:01:38.900 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:38.900 Compiler for C supports arguments -Wundef: YES 00:01:38.900 Compiler for C supports arguments -Wwrite-strings: YES 00:01:38.900 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:38.900 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:01:38.900 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:38.900 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:38.900 Program objdump found: YES (/usr/bin/objdump) 00:01:38.900 Compiler for C supports arguments -mavx512f: YES 00:01:38.900 Checking if "AVX512 checking" compiles: YES 00:01:38.900 Fetching value of define "__SSE4_2__" : 1 00:01:38.900 Fetching value of define "__AES__" : 1 00:01:38.900 Fetching value of define "__AVX__" : 1 00:01:38.900 Fetching value of define "__AVX2__" : 1 00:01:38.900 Fetching value of define "__AVX512BW__" : 1 00:01:38.900 Fetching value of define "__AVX512CD__" : 1 00:01:38.900 Fetching value of define "__AVX512DQ__" : 1 00:01:38.900 Fetching value of define "__AVX512F__" : 1 
00:01:38.900 Fetching value of define "__AVX512VL__" : 1 00:01:38.900 Fetching value of define "__PCLMUL__" : 1 00:01:38.900 Fetching value of define "__RDRND__" : 1 00:01:38.900 Fetching value of define "__RDSEED__" : 1 00:01:38.900 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:38.900 Fetching value of define "__znver1__" : (undefined) 00:01:38.900 Fetching value of define "__znver2__" : (undefined) 00:01:38.900 Fetching value of define "__znver3__" : (undefined) 00:01:38.900 Fetching value of define "__znver4__" : (undefined) 00:01:38.900 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:38.900 Message: lib/log: Defining dependency "log" 00:01:38.900 Message: lib/kvargs: Defining dependency "kvargs" 00:01:38.900 Message: lib/telemetry: Defining dependency "telemetry" 00:01:38.900 Checking for function "getentropy" : NO 00:01:38.900 Message: lib/eal: Defining dependency "eal" 00:01:38.900 Message: lib/ring: Defining dependency "ring" 00:01:38.900 Message: lib/rcu: Defining dependency "rcu" 00:01:38.900 Message: lib/mempool: Defining dependency "mempool" 00:01:38.900 Message: lib/mbuf: Defining dependency "mbuf" 00:01:38.900 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:38.900 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:38.900 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:38.900 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:38.900 Fetching value of define "__AVX512VL__" : 1 (cached) 00:01:38.900 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:01:38.900 Compiler for C supports arguments -mpclmul: YES 00:01:38.900 Compiler for C supports arguments -maes: YES 00:01:38.900 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:38.900 Compiler for C supports arguments -mavx512bw: YES 00:01:38.900 Compiler for C supports arguments -mavx512dq: YES 00:01:38.900 Compiler for C supports arguments -mavx512vl: YES 00:01:38.900 Compiler for C supports arguments 
-mvpclmulqdq: YES 00:01:38.900 Compiler for C supports arguments -mavx2: YES 00:01:38.900 Compiler for C supports arguments -mavx: YES 00:01:38.900 Message: lib/net: Defining dependency "net" 00:01:38.900 Message: lib/meter: Defining dependency "meter" 00:01:38.900 Message: lib/ethdev: Defining dependency "ethdev" 00:01:38.900 Message: lib/pci: Defining dependency "pci" 00:01:38.900 Message: lib/cmdline: Defining dependency "cmdline" 00:01:38.900 Message: lib/hash: Defining dependency "hash" 00:01:38.900 Message: lib/timer: Defining dependency "timer" 00:01:38.900 Message: lib/compressdev: Defining dependency "compressdev" 00:01:38.900 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:38.900 Message: lib/dmadev: Defining dependency "dmadev" 00:01:38.900 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:38.900 Message: lib/power: Defining dependency "power" 00:01:38.900 Message: lib/reorder: Defining dependency "reorder" 00:01:38.900 Message: lib/security: Defining dependency "security" 00:01:38.900 Has header "linux/userfaultfd.h" : YES 00:01:38.900 Has header "linux/vduse.h" : YES 00:01:38.900 Message: lib/vhost: Defining dependency "vhost" 00:01:38.900 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:38.900 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:38.900 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:38.900 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:38.900 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:01:38.900 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:01:38.900 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:01:38.900 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:01:38.900 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:01:38.900 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 
00:01:38.900 Program doxygen found: YES (/usr/local/bin/doxygen) 00:01:38.900 Configuring doxy-api-html.conf using configuration 00:01:38.900 Configuring doxy-api-man.conf using configuration 00:01:38.900 Program mandb found: YES (/usr/bin/mandb) 00:01:38.900 Program sphinx-build found: NO 00:01:38.900 Configuring rte_build_config.h using configuration 00:01:38.900 Message: 00:01:38.900 ================= 00:01:38.900 Applications Enabled 00:01:38.900 ================= 00:01:38.900 00:01:38.900 apps: 00:01:38.900 00:01:38.900 00:01:38.900 Message: 00:01:38.900 ================= 00:01:38.900 Libraries Enabled 00:01:38.900 ================= 00:01:38.900 00:01:38.900 libs: 00:01:38.900 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:38.900 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:38.900 cryptodev, dmadev, power, reorder, security, vhost, 00:01:38.900 00:01:38.900 Message: 00:01:38.900 =============== 00:01:38.900 Drivers Enabled 00:01:38.900 =============== 00:01:38.900 00:01:38.900 common: 00:01:38.900 00:01:38.900 bus: 00:01:38.900 pci, vdev, 00:01:38.900 mempool: 00:01:38.900 ring, 00:01:38.900 dma: 00:01:38.900 00:01:38.900 net: 00:01:38.900 00:01:38.900 crypto: 00:01:38.900 00:01:38.900 compress: 00:01:38.900 00:01:38.900 vdpa: 00:01:38.900 00:01:38.900 00:01:38.900 Message: 00:01:38.900 ================= 00:01:38.900 Content Skipped 00:01:38.900 ================= 00:01:38.900 00:01:38.900 apps: 00:01:38.900 dumpcap: explicitly disabled via build config 00:01:38.900 graph: explicitly disabled via build config 00:01:38.900 pdump: explicitly disabled via build config 00:01:38.900 proc-info: explicitly disabled via build config 00:01:38.900 test-acl: explicitly disabled via build config 00:01:38.900 test-bbdev: explicitly disabled via build config 00:01:38.900 test-cmdline: explicitly disabled via build config 00:01:38.900 test-compress-perf: explicitly disabled via build config 00:01:38.900 test-crypto-perf: explicitly disabled 
via build config 00:01:38.900 test-dma-perf: explicitly disabled via build config 00:01:38.900 test-eventdev: explicitly disabled via build config 00:01:38.900 test-fib: explicitly disabled via build config 00:01:38.900 test-flow-perf: explicitly disabled via build config 00:01:38.900 test-gpudev: explicitly disabled via build config 00:01:38.900 test-mldev: explicitly disabled via build config 00:01:38.900 test-pipeline: explicitly disabled via build config 00:01:38.900 test-pmd: explicitly disabled via build config 00:01:38.900 test-regex: explicitly disabled via build config 00:01:38.900 test-sad: explicitly disabled via build config 00:01:38.900 test-security-perf: explicitly disabled via build config 00:01:38.900 00:01:38.900 libs: 00:01:38.900 argparse: explicitly disabled via build config 00:01:38.900 metrics: explicitly disabled via build config 00:01:38.900 acl: explicitly disabled via build config 00:01:38.900 bbdev: explicitly disabled via build config 00:01:38.900 bitratestats: explicitly disabled via build config 00:01:38.900 bpf: explicitly disabled via build config 00:01:38.900 cfgfile: explicitly disabled via build config 00:01:38.900 distributor: explicitly disabled via build config 00:01:38.900 efd: explicitly disabled via build config 00:01:38.900 eventdev: explicitly disabled via build config 00:01:38.901 dispatcher: explicitly disabled via build config 00:01:38.901 gpudev: explicitly disabled via build config 00:01:38.901 gro: explicitly disabled via build config 00:01:38.901 gso: explicitly disabled via build config 00:01:38.901 ip_frag: explicitly disabled via build config 00:01:38.901 jobstats: explicitly disabled via build config 00:01:38.901 latencystats: explicitly disabled via build config 00:01:38.901 lpm: explicitly disabled via build config 00:01:38.901 member: explicitly disabled via build config 00:01:38.901 pcapng: explicitly disabled via build config 00:01:38.901 rawdev: explicitly disabled via build config 00:01:38.901 regexdev: 
explicitly disabled via build config 00:01:38.901 mldev: explicitly disabled via build config 00:01:38.901 rib: explicitly disabled via build config 00:01:38.901 sched: explicitly disabled via build config 00:01:38.901 stack: explicitly disabled via build config 00:01:38.901 ipsec: explicitly disabled via build config 00:01:38.901 pdcp: explicitly disabled via build config 00:01:38.901 fib: explicitly disabled via build config 00:01:38.901 port: explicitly disabled via build config 00:01:38.901 pdump: explicitly disabled via build config 00:01:38.901 table: explicitly disabled via build config 00:01:38.901 pipeline: explicitly disabled via build config 00:01:38.901 graph: explicitly disabled via build config 00:01:38.901 node: explicitly disabled via build config 00:01:38.901 00:01:38.901 drivers: 00:01:38.901 common/cpt: not in enabled drivers build config 00:01:38.901 common/dpaax: not in enabled drivers build config 00:01:38.901 common/iavf: not in enabled drivers build config 00:01:38.901 common/idpf: not in enabled drivers build config 00:01:38.901 common/ionic: not in enabled drivers build config 00:01:38.901 common/mvep: not in enabled drivers build config 00:01:38.901 common/octeontx: not in enabled drivers build config 00:01:38.901 bus/auxiliary: not in enabled drivers build config 00:01:38.901 bus/cdx: not in enabled drivers build config 00:01:38.901 bus/dpaa: not in enabled drivers build config 00:01:38.901 bus/fslmc: not in enabled drivers build config 00:01:38.901 bus/ifpga: not in enabled drivers build config 00:01:38.901 bus/platform: not in enabled drivers build config 00:01:38.901 bus/uacce: not in enabled drivers build config 00:01:38.901 bus/vmbus: not in enabled drivers build config 00:01:38.901 common/cnxk: not in enabled drivers build config 00:01:38.901 common/mlx5: not in enabled drivers build config 00:01:38.901 common/nfp: not in enabled drivers build config 00:01:38.901 common/nitrox: not in enabled drivers build config 00:01:38.901 
common/qat: not in enabled drivers build config 00:01:38.901 common/sfc_efx: not in enabled drivers build config 00:01:38.901 mempool/bucket: not in enabled drivers build config 00:01:38.901 mempool/cnxk: not in enabled drivers build config 00:01:38.901 mempool/dpaa: not in enabled drivers build config 00:01:38.901 mempool/dpaa2: not in enabled drivers build config 00:01:38.901 mempool/octeontx: not in enabled drivers build config 00:01:38.901 mempool/stack: not in enabled drivers build config 00:01:38.901 dma/cnxk: not in enabled drivers build config 00:01:38.901 dma/dpaa: not in enabled drivers build config 00:01:38.901 dma/dpaa2: not in enabled drivers build config 00:01:38.901 dma/hisilicon: not in enabled drivers build config 00:01:38.901 dma/idxd: not in enabled drivers build config 00:01:38.901 dma/ioat: not in enabled drivers build config 00:01:38.901 dma/skeleton: not in enabled drivers build config 00:01:38.901 net/af_packet: not in enabled drivers build config 00:01:38.901 net/af_xdp: not in enabled drivers build config 00:01:38.901 net/ark: not in enabled drivers build config 00:01:38.901 net/atlantic: not in enabled drivers build config 00:01:38.901 net/avp: not in enabled drivers build config 00:01:38.901 net/axgbe: not in enabled drivers build config 00:01:38.901 net/bnx2x: not in enabled drivers build config 00:01:38.901 net/bnxt: not in enabled drivers build config 00:01:38.901 net/bonding: not in enabled drivers build config 00:01:38.901 net/cnxk: not in enabled drivers build config 00:01:38.901 net/cpfl: not in enabled drivers build config 00:01:38.901 net/cxgbe: not in enabled drivers build config 00:01:38.901 net/dpaa: not in enabled drivers build config 00:01:38.901 net/dpaa2: not in enabled drivers build config 00:01:38.901 net/e1000: not in enabled drivers build config 00:01:38.901 net/ena: not in enabled drivers build config 00:01:38.901 net/enetc: not in enabled drivers build config 00:01:38.901 net/enetfec: not in enabled drivers build 
config 00:01:38.901 net/enic: not in enabled drivers build config 00:01:38.901 net/failsafe: not in enabled drivers build config 00:01:38.901 net/fm10k: not in enabled drivers build config 00:01:38.901 net/gve: not in enabled drivers build config 00:01:38.901 net/hinic: not in enabled drivers build config 00:01:38.901 net/hns3: not in enabled drivers build config 00:01:38.901 net/i40e: not in enabled drivers build config 00:01:38.901 net/iavf: not in enabled drivers build config 00:01:38.901 net/ice: not in enabled drivers build config 00:01:38.901 net/idpf: not in enabled drivers build config 00:01:38.901 net/igc: not in enabled drivers build config 00:01:38.901 net/ionic: not in enabled drivers build config 00:01:38.901 net/ipn3ke: not in enabled drivers build config 00:01:38.901 net/ixgbe: not in enabled drivers build config 00:01:38.901 net/mana: not in enabled drivers build config 00:01:38.901 net/memif: not in enabled drivers build config 00:01:38.901 net/mlx4: not in enabled drivers build config 00:01:38.901 net/mlx5: not in enabled drivers build config 00:01:38.901 net/mvneta: not in enabled drivers build config 00:01:38.901 net/mvpp2: not in enabled drivers build config 00:01:38.901 net/netvsc: not in enabled drivers build config 00:01:38.901 net/nfb: not in enabled drivers build config 00:01:38.901 net/nfp: not in enabled drivers build config 00:01:38.901 net/ngbe: not in enabled drivers build config 00:01:38.901 net/null: not in enabled drivers build config 00:01:38.901 net/octeontx: not in enabled drivers build config 00:01:38.901 net/octeon_ep: not in enabled drivers build config 00:01:38.901 net/pcap: not in enabled drivers build config 00:01:38.901 net/pfe: not in enabled drivers build config 00:01:38.901 net/qede: not in enabled drivers build config 00:01:38.901 net/ring: not in enabled drivers build config 00:01:38.901 net/sfc: not in enabled drivers build config 00:01:38.901 net/softnic: not in enabled drivers build config 00:01:38.901 net/tap: 
not in enabled drivers build config 00:01:38.901 net/thunderx: not in enabled drivers build config 00:01:38.901 net/txgbe: not in enabled drivers build config 00:01:38.901 net/vdev_netvsc: not in enabled drivers build config 00:01:38.901 net/vhost: not in enabled drivers build config 00:01:38.901 net/virtio: not in enabled drivers build config 00:01:38.901 net/vmxnet3: not in enabled drivers build config 00:01:38.901 raw/*: missing internal dependency, "rawdev" 00:01:38.901 crypto/armv8: not in enabled drivers build config 00:01:38.901 crypto/bcmfs: not in enabled drivers build config 00:01:38.901 crypto/caam_jr: not in enabled drivers build config 00:01:38.901 crypto/ccp: not in enabled drivers build config 00:01:38.901 crypto/cnxk: not in enabled drivers build config 00:01:38.901 crypto/dpaa_sec: not in enabled drivers build config 00:01:38.901 crypto/dpaa2_sec: not in enabled drivers build config 00:01:38.901 crypto/ipsec_mb: not in enabled drivers build config 00:01:38.901 crypto/mlx5: not in enabled drivers build config 00:01:38.901 crypto/mvsam: not in enabled drivers build config 00:01:38.901 crypto/nitrox: not in enabled drivers build config 00:01:38.901 crypto/null: not in enabled drivers build config 00:01:38.901 crypto/octeontx: not in enabled drivers build config 00:01:38.901 crypto/openssl: not in enabled drivers build config 00:01:38.901 crypto/scheduler: not in enabled drivers build config 00:01:38.901 crypto/uadk: not in enabled drivers build config 00:01:38.901 crypto/virtio: not in enabled drivers build config 00:01:38.901 compress/isal: not in enabled drivers build config 00:01:38.901 compress/mlx5: not in enabled drivers build config 00:01:38.901 compress/nitrox: not in enabled drivers build config 00:01:38.901 compress/octeontx: not in enabled drivers build config 00:01:38.901 compress/zlib: not in enabled drivers build config 00:01:38.901 regex/*: missing internal dependency, "regexdev" 00:01:38.901 ml/*: missing internal dependency, "mldev" 
00:01:38.901 vdpa/ifc: not in enabled drivers build config 00:01:38.901 vdpa/mlx5: not in enabled drivers build config 00:01:38.901 vdpa/nfp: not in enabled drivers build config 00:01:38.901 vdpa/sfc: not in enabled drivers build config 00:01:38.901 event/*: missing internal dependency, "eventdev" 00:01:38.901 baseband/*: missing internal dependency, "bbdev" 00:01:38.901 gpu/*: missing internal dependency, "gpudev" 00:01:38.901 00:01:38.901 00:01:38.901 Build targets in project: 85 00:01:38.901 00:01:38.901 DPDK 24.03.0 00:01:38.901 00:01:38.901 User defined options 00:01:38.901 buildtype : debug 00:01:38.901 default_library : shared 00:01:38.901 libdir : lib 00:01:38.901 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:38.901 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:01:38.901 c_link_args : 00:01:38.901 cpu_instruction_set: native 00:01:38.901 disable_apps : test-fib,test-sad,test,test-regex,test-security-perf,test-bbdev,dumpcap,test-crypto-perf,test-flow-perf,test-gpudev,test-cmdline,test-dma-perf,test-eventdev,test-pipeline,test-acl,proc-info,test-compress-perf,graph,test-pmd,test-mldev,pdump 00:01:38.901 disable_libs : bbdev,argparse,latencystats,member,gpudev,mldev,pipeline,lpm,efd,regexdev,sched,node,dispatcher,table,bpf,port,gro,fib,cfgfile,ip_frag,gso,rawdev,ipsec,pdcp,rib,acl,metrics,graph,pcapng,jobstats,eventdev,stack,bitratestats,distributor,pdump 00:01:38.901 enable_docs : false 00:01:38.901 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:01:38.901 enable_kmods : false 00:01:38.901 max_lcores : 128 00:01:38.901 tests : false 00:01:38.901 00:01:38.901 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:39.481 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:01:39.481 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:39.481 [2/268] Compiling C object 
lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:39.481 [3/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:39.481 [4/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:39.481 [5/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:39.481 [6/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:39.481 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:39.481 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:39.481 [9/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:39.481 [10/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:39.481 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:39.481 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:39.481 [13/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:39.742 [14/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:39.742 [15/268] Linking static target lib/librte_log.a 00:01:39.742 [16/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:39.742 [17/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:39.742 [18/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:39.742 [19/268] Linking static target lib/librte_kvargs.a 00:01:39.742 [20/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:39.742 [21/268] Linking static target lib/librte_pci.a 00:01:39.742 [22/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:39.742 [23/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:39.742 [24/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:40.001 [25/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:40.001 [26/268] 
Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:40.001 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:40.001 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:40.001 [29/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:40.001 [30/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:40.001 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:40.001 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:40.001 [33/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:40.001 [34/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:40.001 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:40.001 [36/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:40.001 [37/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:40.001 [38/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:40.001 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:40.001 [40/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:40.001 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:40.001 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:40.001 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:40.001 [44/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:40.001 [45/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:40.001 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:40.001 [47/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:40.001 [48/268] 
Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:40.001 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:40.001 [50/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:40.001 [51/268] Linking static target lib/librte_meter.a 00:01:40.001 [52/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:40.001 [53/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:40.001 [54/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:40.001 [55/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:40.001 [56/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:40.001 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:40.001 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:40.001 [59/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:40.002 [60/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:40.002 [61/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:40.002 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:40.002 [63/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:40.002 [64/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:40.002 [65/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:40.002 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:40.002 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:40.002 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:40.002 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:40.002 [70/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:01:40.002 [71/268] Compiling 
C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:40.002 [72/268] Linking static target lib/librte_ring.a 00:01:40.262 [73/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:40.262 [74/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:40.262 [75/268] Linking static target lib/librte_telemetry.a 00:01:40.262 [76/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:40.262 [77/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:40.262 [78/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:40.262 [79/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:40.262 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:40.262 [81/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:40.262 [82/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:40.262 [83/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:40.262 [84/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:40.262 [85/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:40.262 [86/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:40.262 [87/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:40.262 [88/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:40.262 [89/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:40.262 [90/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:40.262 [91/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:40.262 [92/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:40.262 [93/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:40.262 
[94/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:40.262 [95/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:40.262 [96/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:40.262 [97/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:40.262 [98/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:40.262 [99/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:40.262 [100/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:40.262 [101/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:40.262 [102/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:40.262 [103/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:40.262 [104/268] Linking static target lib/librte_mempool.a 00:01:40.262 [105/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:40.262 [106/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:40.262 [107/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.262 [108/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:40.262 [109/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:40.262 [110/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:40.262 [111/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:01:40.262 [112/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:40.262 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:40.262 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:40.262 [115/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.262 [116/268] Compiling C object 
lib/librte_power.a.p/power_power_common.c.o 00:01:40.262 [117/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:40.262 [118/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:40.262 [119/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:40.262 [120/268] Linking static target lib/librte_net.a 00:01:40.262 [121/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:40.262 [122/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:40.262 [123/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:40.262 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:40.262 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:40.262 [126/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:40.262 [127/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:40.262 [128/268] Linking static target lib/librte_rcu.a 00:01:40.262 [129/268] Linking static target lib/librte_eal.a 00:01:40.262 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:40.262 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:40.262 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:40.262 [133/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:40.262 [134/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.262 [135/268] Linking static target lib/librte_cmdline.a 00:01:40.521 [136/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:40.521 [137/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.521 [138/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.521 [139/268] Compiling C object 
lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:40.521 [140/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:40.521 [141/268] Linking static target lib/librte_timer.a 00:01:40.521 [142/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:40.521 [143/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:40.521 [144/268] Linking static target lib/librte_mbuf.a 00:01:40.521 [145/268] Linking target lib/librte_log.so.24.1 00:01:40.521 [146/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:01:40.521 [147/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:40.521 [148/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.521 [149/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:40.521 [150/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:40.521 [151/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:40.521 [152/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:40.521 [153/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:40.521 [154/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:40.521 [155/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:40.521 [156/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:40.521 [157/268] Linking static target lib/librte_compressdev.a 00:01:40.521 [158/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:40.521 [159/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:40.521 [160/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.521 [161/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture 
output) 00:01:40.521 [162/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:40.521 [163/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:40.521 [164/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:40.521 [165/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:01:40.521 [166/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:40.521 [167/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:40.521 [168/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:40.521 [169/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:40.521 [170/268] Linking static target lib/librte_reorder.a 00:01:40.521 [171/268] Linking static target lib/librte_dmadev.a 00:01:40.780 [172/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:40.780 [173/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:40.780 [174/268] Linking target lib/librte_kvargs.so.24.1 00:01:40.780 [175/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:40.780 [176/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:40.780 [177/268] Linking target lib/librte_telemetry.so.24.1 00:01:40.780 [178/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:40.780 [179/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:40.780 [180/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:40.780 [181/268] Linking static target lib/librte_security.a 00:01:40.780 [182/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:40.780 [183/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:40.780 [184/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:40.780 
[185/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:40.780 [186/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:40.780 [187/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:40.780 [188/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:40.780 [189/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:40.780 [190/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:40.780 [191/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:40.780 [192/268] Linking static target drivers/librte_bus_vdev.a 00:01:40.780 [193/268] Linking static target lib/librte_power.a 00:01:40.780 [194/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:40.780 [195/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:40.780 [196/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:40.780 [197/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:40.780 [198/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:01:40.780 [199/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:01:40.780 [200/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:40.780 [201/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:40.780 [202/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:40.780 [203/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:40.780 [204/268] Linking static target drivers/librte_mempool_ring.a 00:01:40.780 [205/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:40.780 [206/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to 
capture output) 00:01:41.039 [207/268] Linking static target lib/librte_hash.a 00:01:41.039 [208/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.039 [209/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:41.039 [210/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:41.039 [211/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:41.039 [212/268] Linking static target drivers/librte_bus_pci.a 00:01:41.039 [213/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:41.039 [214/268] Linking static target lib/librte_cryptodev.a 00:01:41.039 [215/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.040 [216/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.299 [217/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.299 [218/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.299 [219/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.299 [220/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:41.299 [221/268] Linking static target lib/librte_ethdev.a 00:01:41.299 [222/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.557 [223/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:41.557 [224/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.557 [225/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.815 [226/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 
00:01:41.815 [227/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.751 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:42.751 [229/268] Linking static target lib/librte_vhost.a 00:01:43.009 [230/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:44.386 [231/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.653 [232/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.221 [233/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.479 [234/268] Linking target lib/librte_eal.so.24.1 00:01:50.479 [235/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:01:50.479 [236/268] Linking target lib/librte_ring.so.24.1 00:01:50.479 [237/268] Linking target lib/librte_meter.so.24.1 00:01:50.479 [238/268] Linking target lib/librte_pci.so.24.1 00:01:50.479 [239/268] Linking target lib/librte_timer.so.24.1 00:01:50.479 [240/268] Linking target drivers/librte_bus_vdev.so.24.1 00:01:50.479 [241/268] Linking target lib/librte_dmadev.so.24.1 00:01:50.738 [242/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:01:50.738 [243/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:01:50.738 [244/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:01:50.738 [245/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:01:50.738 [246/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:01:50.738 [247/268] Linking target lib/librte_mempool.so.24.1 00:01:50.738 [248/268] Linking target lib/librte_rcu.so.24.1 00:01:50.738 [249/268] Linking target drivers/librte_bus_pci.so.24.1 00:01:50.738 [250/268] Generating symbol file 
lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:01:50.997 [251/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:01:50.997 [252/268] Linking target lib/librte_mbuf.so.24.1 00:01:50.997 [253/268] Linking target drivers/librte_mempool_ring.so.24.1 00:01:50.997 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:01:50.997 [255/268] Linking target lib/librte_reorder.so.24.1 00:01:50.997 [256/268] Linking target lib/librte_net.so.24.1 00:01:50.997 [257/268] Linking target lib/librte_compressdev.so.24.1 00:01:50.997 [258/268] Linking target lib/librte_cryptodev.so.24.1 00:01:51.256 [259/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:01:51.256 [260/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:01:51.256 [261/268] Linking target lib/librte_security.so.24.1 00:01:51.256 [262/268] Linking target lib/librte_cmdline.so.24.1 00:01:51.256 [263/268] Linking target lib/librte_hash.so.24.1 00:01:51.256 [264/268] Linking target lib/librte_ethdev.so.24.1 00:01:51.514 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:01:51.514 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:01:51.514 [267/268] Linking target lib/librte_power.so.24.1 00:01:51.514 [268/268] Linking target lib/librte_vhost.so.24.1 00:01:51.514 INFO: autodetecting backend as ninja 00:01:51.514 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 96 00:02:03.733 CC lib/ut_mock/mock.o 00:02:03.733 CC lib/ut/ut.o 00:02:03.733 CC lib/log/log.o 00:02:03.733 CC lib/log/log_flags.o 00:02:03.733 CC lib/log/log_deprecated.o 00:02:03.733 LIB libspdk_log.a 00:02:03.733 LIB libspdk_ut.a 00:02:03.733 LIB libspdk_ut_mock.a 00:02:03.733 SO libspdk_ut_mock.so.6.0 00:02:03.733 SO libspdk_ut.so.2.0 
00:02:03.733 SO libspdk_log.so.7.1 00:02:03.733 SYMLINK libspdk_ut_mock.so 00:02:03.733 SYMLINK libspdk_ut.so 00:02:03.733 SYMLINK libspdk_log.so 00:02:03.733 CC lib/ioat/ioat.o 00:02:03.733 CC lib/util/base64.o 00:02:03.733 CC lib/util/bit_array.o 00:02:03.733 CC lib/dma/dma.o 00:02:03.733 CC lib/util/cpuset.o 00:02:03.733 CC lib/util/crc16.o 00:02:03.733 CC lib/util/crc32.o 00:02:03.733 CC lib/util/crc32c.o 00:02:03.733 CC lib/util/crc32_ieee.o 00:02:03.733 CC lib/util/crc64.o 00:02:03.733 CC lib/util/dif.o 00:02:03.733 CC lib/util/fd.o 00:02:03.733 CXX lib/trace_parser/trace.o 00:02:03.733 CC lib/util/fd_group.o 00:02:03.733 CC lib/util/file.o 00:02:03.733 CC lib/util/hexlify.o 00:02:03.733 CC lib/util/iov.o 00:02:03.733 CC lib/util/math.o 00:02:03.733 CC lib/util/net.o 00:02:03.733 CC lib/util/pipe.o 00:02:03.733 CC lib/util/strerror_tls.o 00:02:03.733 CC lib/util/string.o 00:02:03.733 CC lib/util/uuid.o 00:02:03.733 CC lib/util/xor.o 00:02:03.733 CC lib/util/zipf.o 00:02:03.733 CC lib/util/md5.o 00:02:03.733 CC lib/vfio_user/host/vfio_user_pci.o 00:02:03.733 CC lib/vfio_user/host/vfio_user.o 00:02:03.733 LIB libspdk_dma.a 00:02:03.733 SO libspdk_dma.so.5.0 00:02:03.733 LIB libspdk_ioat.a 00:02:03.733 SO libspdk_ioat.so.7.0 00:02:03.733 SYMLINK libspdk_dma.so 00:02:03.733 SYMLINK libspdk_ioat.so 00:02:03.733 LIB libspdk_vfio_user.a 00:02:03.733 SO libspdk_vfio_user.so.5.0 00:02:03.733 SYMLINK libspdk_vfio_user.so 00:02:03.733 LIB libspdk_util.a 00:02:03.733 SO libspdk_util.so.10.1 00:02:03.733 SYMLINK libspdk_util.so 00:02:03.733 LIB libspdk_trace_parser.a 00:02:03.733 SO libspdk_trace_parser.so.6.0 00:02:03.733 SYMLINK libspdk_trace_parser.so 00:02:03.733 CC lib/env_dpdk/env.o 00:02:03.733 CC lib/env_dpdk/memory.o 00:02:03.733 CC lib/env_dpdk/pci.o 00:02:03.733 CC lib/env_dpdk/init.o 00:02:03.733 CC lib/env_dpdk/threads.o 00:02:03.733 CC lib/env_dpdk/pci_virtio.o 00:02:03.733 CC lib/env_dpdk/pci_ioat.o 00:02:03.733 CC lib/env_dpdk/pci_vmd.o 00:02:03.733 CC 
lib/json/json_parse.o 00:02:03.733 CC lib/env_dpdk/pci_idxd.o 00:02:03.733 CC lib/env_dpdk/pci_event.o 00:02:03.733 CC lib/json/json_util.o 00:02:03.733 CC lib/env_dpdk/sigbus_handler.o 00:02:03.733 CC lib/json/json_write.o 00:02:03.733 CC lib/env_dpdk/pci_dpdk.o 00:02:03.733 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:03.733 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:03.733 CC lib/conf/conf.o 00:02:03.733 CC lib/rdma_utils/rdma_utils.o 00:02:03.733 CC lib/vmd/vmd.o 00:02:03.733 CC lib/vmd/led.o 00:02:03.733 CC lib/idxd/idxd.o 00:02:03.733 CC lib/idxd/idxd_user.o 00:02:03.733 CC lib/idxd/idxd_kernel.o 00:02:03.733 LIB libspdk_conf.a 00:02:03.733 LIB libspdk_rdma_utils.a 00:02:03.733 LIB libspdk_json.a 00:02:03.733 SO libspdk_conf.so.6.0 00:02:03.733 SO libspdk_rdma_utils.so.1.0 00:02:03.733 SO libspdk_json.so.6.0 00:02:03.733 SYMLINK libspdk_conf.so 00:02:03.733 SYMLINK libspdk_rdma_utils.so 00:02:03.733 SYMLINK libspdk_json.so 00:02:03.733 LIB libspdk_idxd.a 00:02:03.733 LIB libspdk_vmd.a 00:02:03.733 SO libspdk_idxd.so.12.1 00:02:03.733 SO libspdk_vmd.so.6.0 00:02:03.733 SYMLINK libspdk_idxd.so 00:02:03.733 SYMLINK libspdk_vmd.so 00:02:03.991 CC lib/rdma_provider/common.o 00:02:03.991 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:03.991 CC lib/jsonrpc/jsonrpc_server.o 00:02:03.991 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:03.991 CC lib/jsonrpc/jsonrpc_client.o 00:02:03.991 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:03.991 LIB libspdk_rdma_provider.a 00:02:03.991 SO libspdk_rdma_provider.so.7.0 00:02:03.991 LIB libspdk_jsonrpc.a 00:02:04.250 SYMLINK libspdk_rdma_provider.so 00:02:04.250 SO libspdk_jsonrpc.so.6.0 00:02:04.250 SYMLINK libspdk_jsonrpc.so 00:02:04.250 LIB libspdk_env_dpdk.a 00:02:04.250 SO libspdk_env_dpdk.so.15.1 00:02:04.509 SYMLINK libspdk_env_dpdk.so 00:02:04.509 CC lib/rpc/rpc.o 00:02:04.772 LIB libspdk_rpc.a 00:02:04.772 SO libspdk_rpc.so.6.0 00:02:04.772 SYMLINK libspdk_rpc.so 00:02:05.031 CC lib/trace/trace.o 00:02:05.031 CC lib/trace/trace_flags.o 
00:02:05.031 CC lib/trace/trace_rpc.o 00:02:05.031 CC lib/notify/notify.o 00:02:05.031 CC lib/keyring/keyring.o 00:02:05.031 CC lib/notify/notify_rpc.o 00:02:05.031 CC lib/keyring/keyring_rpc.o 00:02:05.290 LIB libspdk_notify.a 00:02:05.290 SO libspdk_notify.so.6.0 00:02:05.290 LIB libspdk_keyring.a 00:02:05.290 LIB libspdk_trace.a 00:02:05.290 SYMLINK libspdk_notify.so 00:02:05.290 SO libspdk_keyring.so.2.0 00:02:05.290 SO libspdk_trace.so.11.0 00:02:05.290 SYMLINK libspdk_keyring.so 00:02:05.548 SYMLINK libspdk_trace.so 00:02:05.807 CC lib/sock/sock.o 00:02:05.807 CC lib/sock/sock_rpc.o 00:02:05.807 CC lib/thread/thread.o 00:02:05.807 CC lib/thread/iobuf.o 00:02:06.066 LIB libspdk_sock.a 00:02:06.066 SO libspdk_sock.so.10.0 00:02:06.066 SYMLINK libspdk_sock.so 00:02:06.634 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:06.634 CC lib/nvme/nvme_ctrlr.o 00:02:06.634 CC lib/nvme/nvme_fabric.o 00:02:06.634 CC lib/nvme/nvme_ns_cmd.o 00:02:06.634 CC lib/nvme/nvme_ns.o 00:02:06.634 CC lib/nvme/nvme_pcie_common.o 00:02:06.634 CC lib/nvme/nvme_pcie.o 00:02:06.634 CC lib/nvme/nvme_qpair.o 00:02:06.634 CC lib/nvme/nvme.o 00:02:06.634 CC lib/nvme/nvme_quirks.o 00:02:06.634 CC lib/nvme/nvme_transport.o 00:02:06.634 CC lib/nvme/nvme_discovery.o 00:02:06.634 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:06.634 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:06.634 CC lib/nvme/nvme_opal.o 00:02:06.634 CC lib/nvme/nvme_tcp.o 00:02:06.634 CC lib/nvme/nvme_io_msg.o 00:02:06.634 CC lib/nvme/nvme_poll_group.o 00:02:06.634 CC lib/nvme/nvme_zns.o 00:02:06.634 CC lib/nvme/nvme_stubs.o 00:02:06.634 CC lib/nvme/nvme_auth.o 00:02:06.634 CC lib/nvme/nvme_cuse.o 00:02:06.634 CC lib/nvme/nvme_vfio_user.o 00:02:06.634 CC lib/nvme/nvme_rdma.o 00:02:06.893 LIB libspdk_thread.a 00:02:06.893 SO libspdk_thread.so.11.0 00:02:06.893 SYMLINK libspdk_thread.so 00:02:07.151 CC lib/init/json_config.o 00:02:07.151 CC lib/init/subsystem.o 00:02:07.151 CC lib/init/subsystem_rpc.o 00:02:07.151 CC lib/init/rpc.o 00:02:07.151 CC 
lib/blob/blobstore.o 00:02:07.151 CC lib/blob/request.o 00:02:07.151 CC lib/accel/accel.o 00:02:07.151 CC lib/blob/zeroes.o 00:02:07.151 CC lib/accel/accel_rpc.o 00:02:07.151 CC lib/blob/blob_bs_dev.o 00:02:07.151 CC lib/accel/accel_sw.o 00:02:07.151 CC lib/virtio/virtio.o 00:02:07.151 CC lib/virtio/virtio_pci.o 00:02:07.151 CC lib/virtio/virtio_vfio_user.o 00:02:07.151 CC lib/virtio/virtio_vhost_user.o 00:02:07.151 CC lib/vfu_tgt/tgt_endpoint.o 00:02:07.151 CC lib/vfu_tgt/tgt_rpc.o 00:02:07.151 CC lib/fsdev/fsdev.o 00:02:07.151 CC lib/fsdev/fsdev_io.o 00:02:07.151 CC lib/fsdev/fsdev_rpc.o 00:02:07.409 LIB libspdk_init.a 00:02:07.409 SO libspdk_init.so.6.0 00:02:07.409 SYMLINK libspdk_init.so 00:02:07.409 LIB libspdk_vfu_tgt.a 00:02:07.409 LIB libspdk_virtio.a 00:02:07.666 SO libspdk_vfu_tgt.so.3.0 00:02:07.666 SO libspdk_virtio.so.7.0 00:02:07.666 SYMLINK libspdk_virtio.so 00:02:07.666 SYMLINK libspdk_vfu_tgt.so 00:02:07.666 LIB libspdk_fsdev.a 00:02:07.924 CC lib/event/app.o 00:02:07.924 CC lib/event/app_rpc.o 00:02:07.924 CC lib/event/reactor.o 00:02:07.924 SO libspdk_fsdev.so.2.0 00:02:07.925 CC lib/event/log_rpc.o 00:02:07.925 CC lib/event/scheduler_static.o 00:02:07.925 SYMLINK libspdk_fsdev.so 00:02:07.925 LIB libspdk_accel.a 00:02:08.183 SO libspdk_accel.so.16.0 00:02:08.183 LIB libspdk_nvme.a 00:02:08.183 SYMLINK libspdk_accel.so 00:02:08.183 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:02:08.183 LIB libspdk_event.a 00:02:08.183 SO libspdk_event.so.14.0 00:02:08.183 SO libspdk_nvme.so.15.0 00:02:08.183 SYMLINK libspdk_event.so 00:02:08.442 SYMLINK libspdk_nvme.so 00:02:08.442 CC lib/bdev/bdev.o 00:02:08.442 CC lib/bdev/bdev_rpc.o 00:02:08.442 CC lib/bdev/bdev_zone.o 00:02:08.442 CC lib/bdev/part.o 00:02:08.442 CC lib/bdev/scsi_nvme.o 00:02:08.702 LIB libspdk_fuse_dispatcher.a 00:02:08.702 SO libspdk_fuse_dispatcher.so.1.0 00:02:08.702 SYMLINK libspdk_fuse_dispatcher.so 00:02:09.272 LIB libspdk_blob.a 00:02:09.531 SO libspdk_blob.so.11.0 00:02:09.531 SYMLINK 
libspdk_blob.so 00:02:09.791 CC lib/blobfs/blobfs.o 00:02:09.791 CC lib/blobfs/tree.o 00:02:09.791 CC lib/lvol/lvol.o 00:02:10.359 LIB libspdk_bdev.a 00:02:10.359 SO libspdk_bdev.so.17.0 00:02:10.359 LIB libspdk_blobfs.a 00:02:10.359 SO libspdk_blobfs.so.10.0 00:02:10.359 SYMLINK libspdk_bdev.so 00:02:10.359 LIB libspdk_lvol.a 00:02:10.619 SYMLINK libspdk_blobfs.so 00:02:10.619 SO libspdk_lvol.so.10.0 00:02:10.619 SYMLINK libspdk_lvol.so 00:02:10.879 CC lib/nbd/nbd.o 00:02:10.879 CC lib/ublk/ublk.o 00:02:10.879 CC lib/nvmf/ctrlr.o 00:02:10.879 CC lib/nbd/nbd_rpc.o 00:02:10.879 CC lib/ublk/ublk_rpc.o 00:02:10.879 CC lib/nvmf/ctrlr_discovery.o 00:02:10.879 CC lib/nvmf/ctrlr_bdev.o 00:02:10.879 CC lib/nvmf/subsystem.o 00:02:10.879 CC lib/nvmf/nvmf.o 00:02:10.879 CC lib/nvmf/nvmf_rpc.o 00:02:10.879 CC lib/nvmf/transport.o 00:02:10.879 CC lib/scsi/dev.o 00:02:10.879 CC lib/scsi/lun.o 00:02:10.879 CC lib/nvmf/tcp.o 00:02:10.879 CC lib/scsi/port.o 00:02:10.879 CC lib/nvmf/stubs.o 00:02:10.879 CC lib/ftl/ftl_core.o 00:02:10.879 CC lib/scsi/scsi.o 00:02:10.879 CC lib/nvmf/mdns_server.o 00:02:10.879 CC lib/nvmf/vfio_user.o 00:02:10.879 CC lib/scsi/scsi_bdev.o 00:02:10.879 CC lib/ftl/ftl_init.o 00:02:10.879 CC lib/ftl/ftl_layout.o 00:02:10.879 CC lib/nvmf/rdma.o 00:02:10.879 CC lib/scsi/scsi_pr.o 00:02:10.879 CC lib/scsi/scsi_rpc.o 00:02:10.879 CC lib/ftl/ftl_debug.o 00:02:10.879 CC lib/nvmf/auth.o 00:02:10.879 CC lib/ftl/ftl_io.o 00:02:10.879 CC lib/scsi/task.o 00:02:10.879 CC lib/ftl/ftl_sb.o 00:02:10.879 CC lib/ftl/ftl_l2p.o 00:02:10.879 CC lib/ftl/ftl_l2p_flat.o 00:02:10.879 CC lib/ftl/ftl_nv_cache.o 00:02:10.879 CC lib/ftl/ftl_band.o 00:02:10.879 CC lib/ftl/ftl_band_ops.o 00:02:10.879 CC lib/ftl/ftl_writer.o 00:02:10.879 CC lib/ftl/ftl_rq.o 00:02:10.879 CC lib/ftl/ftl_reloc.o 00:02:10.879 CC lib/ftl/ftl_l2p_cache.o 00:02:10.879 CC lib/ftl/ftl_p2l.o 00:02:10.879 CC lib/ftl/ftl_p2l_log.o 00:02:10.879 CC lib/ftl/mngt/ftl_mngt.o 00:02:10.879 CC lib/ftl/mngt/ftl_mngt_bdev.o 
00:02:10.879 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:10.879 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:10.879 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:10.879 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:10.879 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:10.879 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:10.879 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:10.879 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:10.879 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:10.879 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:10.879 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:10.879 CC lib/ftl/utils/ftl_md.o 00:02:10.879 CC lib/ftl/utils/ftl_conf.o 00:02:10.879 CC lib/ftl/utils/ftl_mempool.o 00:02:10.879 CC lib/ftl/utils/ftl_bitmap.o 00:02:10.879 CC lib/ftl/utils/ftl_property.o 00:02:10.879 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:10.879 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:10.879 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:10.879 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:10.879 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:10.879 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:10.879 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:10.879 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:10.879 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:10.879 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:10.879 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:02:10.879 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:10.879 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:02:10.879 CC lib/ftl/base/ftl_base_dev.o 00:02:10.879 CC lib/ftl/ftl_trace.o 00:02:10.879 CC lib/ftl/base/ftl_base_bdev.o 00:02:11.138 LIB libspdk_nbd.a 00:02:11.397 SO libspdk_nbd.so.7.0 00:02:11.397 SYMLINK libspdk_nbd.so 00:02:11.397 LIB libspdk_scsi.a 00:02:11.397 SO libspdk_scsi.so.9.0 00:02:11.397 SYMLINK libspdk_scsi.so 00:02:11.397 LIB libspdk_ublk.a 00:02:11.657 SO libspdk_ublk.so.3.0 00:02:11.657 SYMLINK libspdk_ublk.so 00:02:11.657 CC lib/iscsi/conn.o 00:02:11.657 CC lib/iscsi/init_grp.o 00:02:11.657 CC lib/iscsi/iscsi.o 00:02:11.657 CC lib/iscsi/param.o 00:02:11.657 CC lib/iscsi/portal_grp.o 00:02:11.657 CC 
lib/iscsi/tgt_node.o 00:02:11.657 CC lib/iscsi/iscsi_subsystem.o 00:02:11.657 CC lib/iscsi/iscsi_rpc.o 00:02:11.657 CC lib/iscsi/task.o 00:02:11.657 CC lib/vhost/vhost.o 00:02:11.657 CC lib/vhost/vhost_rpc.o 00:02:11.657 CC lib/vhost/vhost_scsi.o 00:02:11.657 CC lib/vhost/vhost_blk.o 00:02:11.657 CC lib/vhost/rte_vhost_user.o 00:02:11.657 LIB libspdk_ftl.a 00:02:11.914 SO libspdk_ftl.so.9.0 00:02:12.172 SYMLINK libspdk_ftl.so 00:02:12.430 LIB libspdk_nvmf.a 00:02:12.430 LIB libspdk_vhost.a 00:02:12.689 SO libspdk_nvmf.so.20.0 00:02:12.689 SO libspdk_vhost.so.8.0 00:02:12.689 SYMLINK libspdk_vhost.so 00:02:12.689 LIB libspdk_iscsi.a 00:02:12.689 SYMLINK libspdk_nvmf.so 00:02:12.689 SO libspdk_iscsi.so.8.0 00:02:12.949 SYMLINK libspdk_iscsi.so 00:02:13.518 CC module/env_dpdk/env_dpdk_rpc.o 00:02:13.518 CC module/vfu_device/vfu_virtio_blk.o 00:02:13.518 CC module/vfu_device/vfu_virtio.o 00:02:13.518 CC module/vfu_device/vfu_virtio_scsi.o 00:02:13.518 CC module/vfu_device/vfu_virtio_rpc.o 00:02:13.518 CC module/vfu_device/vfu_virtio_fs.o 00:02:13.518 CC module/sock/posix/posix.o 00:02:13.518 LIB libspdk_env_dpdk_rpc.a 00:02:13.518 CC module/blob/bdev/blob_bdev.o 00:02:13.518 CC module/fsdev/aio/fsdev_aio.o 00:02:13.518 CC module/fsdev/aio/fsdev_aio_rpc.o 00:02:13.518 CC module/fsdev/aio/linux_aio_mgr.o 00:02:13.518 CC module/keyring/linux/keyring.o 00:02:13.518 CC module/accel/dsa/accel_dsa.o 00:02:13.518 CC module/keyring/file/keyring.o 00:02:13.518 CC module/accel/ioat/accel_ioat.o 00:02:13.518 CC module/accel/ioat/accel_ioat_rpc.o 00:02:13.518 CC module/keyring/file/keyring_rpc.o 00:02:13.518 CC module/keyring/linux/keyring_rpc.o 00:02:13.518 CC module/accel/dsa/accel_dsa_rpc.o 00:02:13.518 CC module/accel/iaa/accel_iaa.o 00:02:13.518 CC module/accel/iaa/accel_iaa_rpc.o 00:02:13.518 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:13.518 CC module/scheduler/gscheduler/gscheduler.o 00:02:13.518 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:13.518 CC 
module/accel/error/accel_error_rpc.o 00:02:13.518 CC module/accel/error/accel_error.o 00:02:13.518 SO libspdk_env_dpdk_rpc.so.6.0 00:02:13.777 SYMLINK libspdk_env_dpdk_rpc.so 00:02:13.777 LIB libspdk_keyring_file.a 00:02:13.777 LIB libspdk_keyring_linux.a 00:02:13.777 LIB libspdk_scheduler_gscheduler.a 00:02:13.777 LIB libspdk_scheduler_dpdk_governor.a 00:02:13.777 SO libspdk_keyring_linux.so.1.0 00:02:13.777 LIB libspdk_scheduler_dynamic.a 00:02:13.777 SO libspdk_keyring_file.so.2.0 00:02:13.777 SO libspdk_scheduler_gscheduler.so.4.0 00:02:13.777 LIB libspdk_accel_ioat.a 00:02:13.777 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:13.777 SO libspdk_scheduler_dynamic.so.4.0 00:02:13.777 LIB libspdk_accel_error.a 00:02:13.777 LIB libspdk_accel_iaa.a 00:02:13.777 SO libspdk_accel_ioat.so.6.0 00:02:13.777 SYMLINK libspdk_keyring_file.so 00:02:13.777 LIB libspdk_blob_bdev.a 00:02:13.777 LIB libspdk_accel_dsa.a 00:02:13.777 SYMLINK libspdk_keyring_linux.so 00:02:13.777 SYMLINK libspdk_scheduler_gscheduler.so 00:02:13.777 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:13.777 SO libspdk_accel_error.so.2.0 00:02:13.777 SO libspdk_accel_iaa.so.3.0 00:02:13.777 SO libspdk_blob_bdev.so.11.0 00:02:13.777 SYMLINK libspdk_scheduler_dynamic.so 00:02:13.777 SO libspdk_accel_dsa.so.5.0 00:02:13.777 SYMLINK libspdk_accel_ioat.so 00:02:13.777 SYMLINK libspdk_blob_bdev.so 00:02:13.777 SYMLINK libspdk_accel_iaa.so 00:02:14.038 SYMLINK libspdk_accel_error.so 00:02:14.038 SYMLINK libspdk_accel_dsa.so 00:02:14.038 LIB libspdk_vfu_device.a 00:02:14.038 SO libspdk_vfu_device.so.3.0 00:02:14.038 SYMLINK libspdk_vfu_device.so 00:02:14.038 LIB libspdk_fsdev_aio.a 00:02:14.038 SO libspdk_fsdev_aio.so.1.0 00:02:14.038 LIB libspdk_sock_posix.a 00:02:14.297 SO libspdk_sock_posix.so.6.0 00:02:14.297 SYMLINK libspdk_fsdev_aio.so 00:02:14.297 SYMLINK libspdk_sock_posix.so 00:02:14.297 CC module/bdev/gpt/gpt.o 00:02:14.297 CC module/bdev/gpt/vbdev_gpt.o 00:02:14.297 CC module/bdev/lvol/vbdev_lvol.o 
00:02:14.297 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:14.297 CC module/bdev/null/bdev_null.o 00:02:14.297 CC module/bdev/error/vbdev_error.o 00:02:14.297 CC module/bdev/error/vbdev_error_rpc.o 00:02:14.297 CC module/bdev/null/bdev_null_rpc.o 00:02:14.297 CC module/bdev/delay/vbdev_delay.o 00:02:14.297 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:14.297 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:14.297 CC module/bdev/raid/bdev_raid.o 00:02:14.297 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:14.297 CC module/bdev/raid/bdev_raid_rpc.o 00:02:14.297 CC module/blobfs/bdev/blobfs_bdev.o 00:02:14.297 CC module/bdev/malloc/bdev_malloc.o 00:02:14.297 CC module/bdev/raid/bdev_raid_sb.o 00:02:14.297 CC module/bdev/raid/raid0.o 00:02:14.297 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:14.297 CC module/bdev/raid/raid1.o 00:02:14.297 CC module/bdev/raid/concat.o 00:02:14.297 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:14.297 CC module/bdev/nvme/bdev_nvme.o 00:02:14.297 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:14.297 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:14.297 CC module/bdev/ftl/bdev_ftl.o 00:02:14.297 CC module/bdev/nvme/nvme_rpc.o 00:02:14.297 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:14.297 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:14.297 CC module/bdev/nvme/bdev_mdns_client.o 00:02:14.297 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:14.297 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:14.297 CC module/bdev/aio/bdev_aio.o 00:02:14.297 CC module/bdev/nvme/vbdev_opal.o 00:02:14.297 CC module/bdev/aio/bdev_aio_rpc.o 00:02:14.297 CC module/bdev/passthru/vbdev_passthru.o 00:02:14.297 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:14.297 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:14.297 CC module/bdev/split/vbdev_split.o 00:02:14.297 CC module/bdev/split/vbdev_split_rpc.o 00:02:14.297 CC module/bdev/iscsi/bdev_iscsi.o 00:02:14.297 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:14.555 LIB libspdk_blobfs_bdev.a 00:02:14.555 LIB 
libspdk_bdev_error.a 00:02:14.555 SO libspdk_bdev_error.so.6.0 00:02:14.555 SO libspdk_blobfs_bdev.so.6.0 00:02:14.555 LIB libspdk_bdev_split.a 00:02:14.555 LIB libspdk_bdev_null.a 00:02:14.555 LIB libspdk_bdev_gpt.a 00:02:14.815 LIB libspdk_bdev_ftl.a 00:02:14.815 LIB libspdk_bdev_passthru.a 00:02:14.815 SO libspdk_bdev_null.so.6.0 00:02:14.815 SO libspdk_bdev_split.so.6.0 00:02:14.815 SYMLINK libspdk_bdev_error.so 00:02:14.815 SO libspdk_bdev_gpt.so.6.0 00:02:14.815 SYMLINK libspdk_blobfs_bdev.so 00:02:14.815 SO libspdk_bdev_passthru.so.6.0 00:02:14.815 SO libspdk_bdev_ftl.so.6.0 00:02:14.815 LIB libspdk_bdev_delay.a 00:02:14.815 LIB libspdk_bdev_zone_block.a 00:02:14.815 LIB libspdk_bdev_aio.a 00:02:14.815 SYMLINK libspdk_bdev_null.so 00:02:14.815 LIB libspdk_bdev_iscsi.a 00:02:14.815 SYMLINK libspdk_bdev_split.so 00:02:14.815 SO libspdk_bdev_delay.so.6.0 00:02:14.815 SYMLINK libspdk_bdev_gpt.so 00:02:14.815 SO libspdk_bdev_aio.so.6.0 00:02:14.815 SO libspdk_bdev_zone_block.so.6.0 00:02:14.815 LIB libspdk_bdev_malloc.a 00:02:14.815 SYMLINK libspdk_bdev_passthru.so 00:02:14.815 SO libspdk_bdev_iscsi.so.6.0 00:02:14.815 SYMLINK libspdk_bdev_ftl.so 00:02:14.815 SO libspdk_bdev_malloc.so.6.0 00:02:14.815 SYMLINK libspdk_bdev_aio.so 00:02:14.815 SYMLINK libspdk_bdev_delay.so 00:02:14.815 SYMLINK libspdk_bdev_zone_block.so 00:02:14.815 SYMLINK libspdk_bdev_iscsi.so 00:02:14.815 LIB libspdk_bdev_lvol.a 00:02:14.815 LIB libspdk_bdev_virtio.a 00:02:14.815 SYMLINK libspdk_bdev_malloc.so 00:02:14.815 SO libspdk_bdev_lvol.so.6.0 00:02:14.815 SO libspdk_bdev_virtio.so.6.0 00:02:15.074 SYMLINK libspdk_bdev_lvol.so 00:02:15.074 SYMLINK libspdk_bdev_virtio.so 00:02:15.332 LIB libspdk_bdev_raid.a 00:02:15.332 SO libspdk_bdev_raid.so.6.0 00:02:15.332 SYMLINK libspdk_bdev_raid.so 00:02:16.270 LIB libspdk_bdev_nvme.a 00:02:16.270 SO libspdk_bdev_nvme.so.7.1 00:02:16.270 SYMLINK libspdk_bdev_nvme.so 00:02:17.210 CC module/event/subsystems/iobuf/iobuf.o 00:02:17.210 CC 
module/event/subsystems/iobuf/iobuf_rpc.o 00:02:17.210 CC module/event/subsystems/sock/sock.o 00:02:17.210 CC module/event/subsystems/fsdev/fsdev.o 00:02:17.210 CC module/event/subsystems/scheduler/scheduler.o 00:02:17.210 CC module/event/subsystems/vmd/vmd.o 00:02:17.210 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:17.210 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:17.210 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:02:17.210 CC module/event/subsystems/keyring/keyring.o 00:02:17.210 LIB libspdk_event_vmd.a 00:02:17.210 LIB libspdk_event_vfu_tgt.a 00:02:17.210 LIB libspdk_event_iobuf.a 00:02:17.210 LIB libspdk_event_keyring.a 00:02:17.210 LIB libspdk_event_fsdev.a 00:02:17.210 LIB libspdk_event_sock.a 00:02:17.210 LIB libspdk_event_vhost_blk.a 00:02:17.210 LIB libspdk_event_scheduler.a 00:02:17.210 SO libspdk_event_vfu_tgt.so.3.0 00:02:17.210 SO libspdk_event_iobuf.so.3.0 00:02:17.210 SO libspdk_event_vmd.so.6.0 00:02:17.210 SO libspdk_event_fsdev.so.1.0 00:02:17.210 SO libspdk_event_keyring.so.1.0 00:02:17.210 SO libspdk_event_vhost_blk.so.3.0 00:02:17.210 SO libspdk_event_sock.so.5.0 00:02:17.210 SO libspdk_event_scheduler.so.4.0 00:02:17.210 SYMLINK libspdk_event_vfu_tgt.so 00:02:17.210 SYMLINK libspdk_event_iobuf.so 00:02:17.210 SYMLINK libspdk_event_fsdev.so 00:02:17.210 SYMLINK libspdk_event_vmd.so 00:02:17.210 SYMLINK libspdk_event_keyring.so 00:02:17.210 SYMLINK libspdk_event_vhost_blk.so 00:02:17.210 SYMLINK libspdk_event_scheduler.so 00:02:17.210 SYMLINK libspdk_event_sock.so 00:02:17.469 CC module/event/subsystems/accel/accel.o 00:02:17.729 LIB libspdk_event_accel.a 00:02:17.729 SO libspdk_event_accel.so.6.0 00:02:17.729 SYMLINK libspdk_event_accel.so 00:02:17.988 CC module/event/subsystems/bdev/bdev.o 00:02:18.246 LIB libspdk_event_bdev.a 00:02:18.246 SO libspdk_event_bdev.so.6.0 00:02:18.246 SYMLINK libspdk_event_bdev.so 00:02:18.813 CC module/event/subsystems/nbd/nbd.o 00:02:18.813 CC module/event/subsystems/nvmf/nvmf_rpc.o 
00:02:18.813 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:18.813 CC module/event/subsystems/scsi/scsi.o 00:02:18.813 CC module/event/subsystems/ublk/ublk.o 00:02:18.813 LIB libspdk_event_scsi.a 00:02:18.813 LIB libspdk_event_nbd.a 00:02:18.813 LIB libspdk_event_ublk.a 00:02:18.813 SO libspdk_event_scsi.so.6.0 00:02:18.813 SO libspdk_event_nbd.so.6.0 00:02:18.813 SO libspdk_event_ublk.so.3.0 00:02:18.813 LIB libspdk_event_nvmf.a 00:02:18.813 SYMLINK libspdk_event_scsi.so 00:02:18.813 SYMLINK libspdk_event_nbd.so 00:02:18.813 SO libspdk_event_nvmf.so.6.0 00:02:18.813 SYMLINK libspdk_event_ublk.so 00:02:19.074 SYMLINK libspdk_event_nvmf.so 00:02:19.074 CC module/event/subsystems/iscsi/iscsi.o 00:02:19.074 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:19.333 LIB libspdk_event_vhost_scsi.a 00:02:19.333 LIB libspdk_event_iscsi.a 00:02:19.333 SO libspdk_event_vhost_scsi.so.3.0 00:02:19.333 SO libspdk_event_iscsi.so.6.0 00:02:19.333 SYMLINK libspdk_event_vhost_scsi.so 00:02:19.333 SYMLINK libspdk_event_iscsi.so 00:02:19.592 SO libspdk.so.6.0 00:02:19.592 SYMLINK libspdk.so 00:02:19.851 CC app/spdk_nvme_perf/perf.o 00:02:19.851 CC test/rpc_client/rpc_client_test.o 00:02:19.851 CC app/spdk_nvme_identify/identify.o 00:02:19.851 CC app/spdk_top/spdk_top.o 00:02:19.851 CC app/trace_record/trace_record.o 00:02:19.851 CXX app/trace/trace.o 00:02:20.116 CC app/spdk_nvme_discover/discovery_aer.o 00:02:20.116 CC app/spdk_lspci/spdk_lspci.o 00:02:20.116 TEST_HEADER include/spdk/accel.h 00:02:20.116 TEST_HEADER include/spdk/accel_module.h 00:02:20.116 TEST_HEADER include/spdk/assert.h 00:02:20.116 TEST_HEADER include/spdk/barrier.h 00:02:20.116 TEST_HEADER include/spdk/bdev.h 00:02:20.116 TEST_HEADER include/spdk/bdev_module.h 00:02:20.116 TEST_HEADER include/spdk/base64.h 00:02:20.116 TEST_HEADER include/spdk/bdev_zone.h 00:02:20.116 TEST_HEADER include/spdk/bit_array.h 00:02:20.116 TEST_HEADER include/spdk/bit_pool.h 00:02:20.116 TEST_HEADER include/spdk/blobfs_bdev.h 
00:02:20.116 TEST_HEADER include/spdk/blob_bdev.h 00:02:20.116 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:20.116 TEST_HEADER include/spdk/blobfs.h 00:02:20.116 TEST_HEADER include/spdk/conf.h 00:02:20.116 TEST_HEADER include/spdk/blob.h 00:02:20.116 TEST_HEADER include/spdk/config.h 00:02:20.116 TEST_HEADER include/spdk/cpuset.h 00:02:20.116 TEST_HEADER include/spdk/crc16.h 00:02:20.116 TEST_HEADER include/spdk/crc32.h 00:02:20.116 TEST_HEADER include/spdk/crc64.h 00:02:20.116 TEST_HEADER include/spdk/dma.h 00:02:20.116 TEST_HEADER include/spdk/dif.h 00:02:20.116 TEST_HEADER include/spdk/endian.h 00:02:20.116 CC app/spdk_dd/spdk_dd.o 00:02:20.116 TEST_HEADER include/spdk/env.h 00:02:20.116 TEST_HEADER include/spdk/env_dpdk.h 00:02:20.116 TEST_HEADER include/spdk/event.h 00:02:20.116 TEST_HEADER include/spdk/fd_group.h 00:02:20.116 TEST_HEADER include/spdk/fd.h 00:02:20.116 TEST_HEADER include/spdk/fsdev.h 00:02:20.116 TEST_HEADER include/spdk/file.h 00:02:20.116 TEST_HEADER include/spdk/ftl.h 00:02:20.116 TEST_HEADER include/spdk/fsdev_module.h 00:02:20.116 TEST_HEADER include/spdk/fuse_dispatcher.h 00:02:20.116 TEST_HEADER include/spdk/gpt_spec.h 00:02:20.117 TEST_HEADER include/spdk/hexlify.h 00:02:20.117 CC app/nvmf_tgt/nvmf_main.o 00:02:20.117 TEST_HEADER include/spdk/histogram_data.h 00:02:20.117 TEST_HEADER include/spdk/idxd_spec.h 00:02:20.117 TEST_HEADER include/spdk/idxd.h 00:02:20.117 TEST_HEADER include/spdk/init.h 00:02:20.117 TEST_HEADER include/spdk/ioat_spec.h 00:02:20.117 TEST_HEADER include/spdk/iscsi_spec.h 00:02:20.117 TEST_HEADER include/spdk/ioat.h 00:02:20.117 TEST_HEADER include/spdk/json.h 00:02:20.117 TEST_HEADER include/spdk/jsonrpc.h 00:02:20.117 CC app/iscsi_tgt/iscsi_tgt.o 00:02:20.117 TEST_HEADER include/spdk/keyring.h 00:02:20.117 TEST_HEADER include/spdk/log.h 00:02:20.117 TEST_HEADER include/spdk/keyring_module.h 00:02:20.117 TEST_HEADER include/spdk/likely.h 00:02:20.117 TEST_HEADER include/spdk/md5.h 00:02:20.117 TEST_HEADER 
include/spdk/memory.h 00:02:20.117 TEST_HEADER include/spdk/lvol.h 00:02:20.117 TEST_HEADER include/spdk/mmio.h 00:02:20.117 TEST_HEADER include/spdk/nbd.h 00:02:20.117 TEST_HEADER include/spdk/net.h 00:02:20.117 TEST_HEADER include/spdk/notify.h 00:02:20.117 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:20.117 TEST_HEADER include/spdk/nvme.h 00:02:20.117 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:20.117 TEST_HEADER include/spdk/nvme_spec.h 00:02:20.117 TEST_HEADER include/spdk/nvme_intel.h 00:02:20.117 TEST_HEADER include/spdk/nvme_zns.h 00:02:20.117 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:20.117 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:20.117 TEST_HEADER include/spdk/nvmf.h 00:02:20.117 TEST_HEADER include/spdk/nvmf_spec.h 00:02:20.117 TEST_HEADER include/spdk/nvmf_transport.h 00:02:20.117 TEST_HEADER include/spdk/opal.h 00:02:20.117 TEST_HEADER include/spdk/opal_spec.h 00:02:20.117 TEST_HEADER include/spdk/pci_ids.h 00:02:20.117 TEST_HEADER include/spdk/pipe.h 00:02:20.117 TEST_HEADER include/spdk/reduce.h 00:02:20.117 TEST_HEADER include/spdk/rpc.h 00:02:20.117 TEST_HEADER include/spdk/queue.h 00:02:20.117 TEST_HEADER include/spdk/scheduler.h 00:02:20.117 TEST_HEADER include/spdk/scsi.h 00:02:20.117 TEST_HEADER include/spdk/sock.h 00:02:20.117 TEST_HEADER include/spdk/stdinc.h 00:02:20.117 TEST_HEADER include/spdk/scsi_spec.h 00:02:20.117 TEST_HEADER include/spdk/thread.h 00:02:20.117 TEST_HEADER include/spdk/trace_parser.h 00:02:20.117 TEST_HEADER include/spdk/trace.h 00:02:20.117 TEST_HEADER include/spdk/tree.h 00:02:20.117 TEST_HEADER include/spdk/string.h 00:02:20.117 TEST_HEADER include/spdk/ublk.h 00:02:20.117 TEST_HEADER include/spdk/uuid.h 00:02:20.117 TEST_HEADER include/spdk/util.h 00:02:20.117 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:20.117 TEST_HEADER include/spdk/version.h 00:02:20.117 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:20.117 CC app/spdk_tgt/spdk_tgt.o 00:02:20.117 TEST_HEADER include/spdk/xor.h 00:02:20.117 
TEST_HEADER include/spdk/vhost.h 00:02:20.117 TEST_HEADER include/spdk/vmd.h 00:02:20.117 TEST_HEADER include/spdk/zipf.h 00:02:20.117 CXX test/cpp_headers/accel.o 00:02:20.117 CXX test/cpp_headers/accel_module.o 00:02:20.117 CXX test/cpp_headers/assert.o 00:02:20.117 CXX test/cpp_headers/base64.o 00:02:20.117 CXX test/cpp_headers/barrier.o 00:02:20.117 CXX test/cpp_headers/bdev_module.o 00:02:20.117 CXX test/cpp_headers/bdev_zone.o 00:02:20.117 CXX test/cpp_headers/bdev.o 00:02:20.117 CXX test/cpp_headers/bit_array.o 00:02:20.117 CXX test/cpp_headers/blob_bdev.o 00:02:20.117 CXX test/cpp_headers/bit_pool.o 00:02:20.117 CXX test/cpp_headers/blobfs_bdev.o 00:02:20.117 CXX test/cpp_headers/blobfs.o 00:02:20.117 CXX test/cpp_headers/config.o 00:02:20.117 CXX test/cpp_headers/blob.o 00:02:20.117 CXX test/cpp_headers/conf.o 00:02:20.117 CXX test/cpp_headers/crc16.o 00:02:20.117 CXX test/cpp_headers/cpuset.o 00:02:20.117 CXX test/cpp_headers/crc32.o 00:02:20.117 CXX test/cpp_headers/crc64.o 00:02:20.117 CXX test/cpp_headers/dif.o 00:02:20.117 CXX test/cpp_headers/dma.o 00:02:20.117 CXX test/cpp_headers/endian.o 00:02:20.117 CXX test/cpp_headers/env_dpdk.o 00:02:20.117 CXX test/cpp_headers/event.o 00:02:20.117 CXX test/cpp_headers/env.o 00:02:20.117 CXX test/cpp_headers/fd_group.o 00:02:20.117 CXX test/cpp_headers/fd.o 00:02:20.117 CXX test/cpp_headers/file.o 00:02:20.117 CXX test/cpp_headers/fsdev.o 00:02:20.117 CXX test/cpp_headers/fsdev_module.o 00:02:20.117 CXX test/cpp_headers/ftl.o 00:02:20.117 CXX test/cpp_headers/fuse_dispatcher.o 00:02:20.117 CXX test/cpp_headers/gpt_spec.o 00:02:20.117 CXX test/cpp_headers/histogram_data.o 00:02:20.117 CXX test/cpp_headers/hexlify.o 00:02:20.117 CXX test/cpp_headers/idxd.o 00:02:20.117 CXX test/cpp_headers/idxd_spec.o 00:02:20.117 CXX test/cpp_headers/init.o 00:02:20.117 CXX test/cpp_headers/ioat.o 00:02:20.117 CXX test/cpp_headers/ioat_spec.o 00:02:20.117 CXX test/cpp_headers/json.o 00:02:20.117 CXX 
test/cpp_headers/iscsi_spec.o 00:02:20.117 CXX test/cpp_headers/keyring.o 00:02:20.117 CXX test/cpp_headers/jsonrpc.o 00:02:20.117 CXX test/cpp_headers/likely.o 00:02:20.117 CXX test/cpp_headers/keyring_module.o 00:02:20.117 CXX test/cpp_headers/lvol.o 00:02:20.117 CXX test/cpp_headers/log.o 00:02:20.117 CXX test/cpp_headers/md5.o 00:02:20.117 CXX test/cpp_headers/nbd.o 00:02:20.117 CXX test/cpp_headers/memory.o 00:02:20.117 CXX test/cpp_headers/mmio.o 00:02:20.117 CXX test/cpp_headers/net.o 00:02:20.117 CXX test/cpp_headers/notify.o 00:02:20.117 CXX test/cpp_headers/nvme.o 00:02:20.117 CXX test/cpp_headers/nvme_ocssd.o 00:02:20.117 CXX test/cpp_headers/nvme_intel.o 00:02:20.117 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:20.117 CXX test/cpp_headers/nvme_zns.o 00:02:20.117 CXX test/cpp_headers/nvme_spec.o 00:02:20.117 CXX test/cpp_headers/nvmf_cmd.o 00:02:20.117 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:20.117 CC test/app/jsoncat/jsoncat.o 00:02:20.117 CXX test/cpp_headers/nvmf.o 00:02:20.117 CXX test/cpp_headers/nvmf_spec.o 00:02:20.117 CXX test/cpp_headers/opal.o 00:02:20.117 CXX test/cpp_headers/nvmf_transport.o 00:02:20.117 CC examples/ioat/verify/verify.o 00:02:20.117 CC test/env/pci/pci_ut.o 00:02:20.117 CC examples/ioat/perf/perf.o 00:02:20.117 CC test/app/stub/stub.o 00:02:20.117 CC test/app/histogram_perf/histogram_perf.o 00:02:20.117 CC test/env/memory/memory_ut.o 00:02:20.117 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:20.117 CC test/thread/poller_perf/poller_perf.o 00:02:20.117 CC app/fio/nvme/fio_plugin.o 00:02:20.117 CC test/env/vtophys/vtophys.o 00:02:20.117 CC test/app/bdev_svc/bdev_svc.o 00:02:20.117 CC examples/util/zipf/zipf.o 00:02:20.117 CC test/dma/test_dma/test_dma.o 00:02:20.383 LINK spdk_lspci 00:02:20.383 CC app/fio/bdev/fio_plugin.o 00:02:20.383 LINK spdk_nvme_discover 00:02:20.383 LINK interrupt_tgt 00:02:20.383 LINK rpc_client_test 00:02:20.644 LINK iscsi_tgt 00:02:20.644 LINK nvmf_tgt 00:02:20.644 CC 
test/env/mem_callbacks/mem_callbacks.o 00:02:20.644 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:20.644 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:20.644 LINK histogram_perf 00:02:20.644 LINK stub 00:02:20.644 LINK jsoncat 00:02:20.644 CXX test/cpp_headers/opal_spec.o 00:02:20.644 CXX test/cpp_headers/pci_ids.o 00:02:20.644 LINK bdev_svc 00:02:20.644 LINK spdk_tgt 00:02:20.644 CXX test/cpp_headers/pipe.o 00:02:20.644 LINK verify 00:02:20.644 LINK spdk_trace_record 00:02:20.644 CXX test/cpp_headers/queue.o 00:02:20.644 CXX test/cpp_headers/reduce.o 00:02:20.644 CXX test/cpp_headers/scheduler.o 00:02:20.644 CXX test/cpp_headers/scsi.o 00:02:20.644 CXX test/cpp_headers/rpc.o 00:02:20.644 CXX test/cpp_headers/scsi_spec.o 00:02:20.644 CXX test/cpp_headers/sock.o 00:02:20.644 LINK ioat_perf 00:02:20.644 CXX test/cpp_headers/stdinc.o 00:02:20.644 CXX test/cpp_headers/string.o 00:02:20.644 CXX test/cpp_headers/thread.o 00:02:20.644 CXX test/cpp_headers/trace.o 00:02:20.904 LINK zipf 00:02:20.904 LINK vtophys 00:02:20.904 LINK poller_perf 00:02:20.904 CXX test/cpp_headers/trace_parser.o 00:02:20.904 CXX test/cpp_headers/tree.o 00:02:20.904 CXX test/cpp_headers/ublk.o 00:02:20.904 CXX test/cpp_headers/util.o 00:02:20.904 CXX test/cpp_headers/uuid.o 00:02:20.904 CXX test/cpp_headers/version.o 00:02:20.904 CXX test/cpp_headers/vfio_user_pci.o 00:02:20.904 LINK env_dpdk_post_init 00:02:20.904 CXX test/cpp_headers/vfio_user_spec.o 00:02:20.904 CXX test/cpp_headers/vhost.o 00:02:20.904 CXX test/cpp_headers/vmd.o 00:02:20.904 CXX test/cpp_headers/xor.o 00:02:20.904 CXX test/cpp_headers/zipf.o 00:02:20.904 LINK spdk_dd 00:02:20.904 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:20.904 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:20.904 LINK pci_ut 00:02:20.904 LINK spdk_trace 00:02:21.163 LINK test_dma 00:02:21.163 LINK spdk_nvme 00:02:21.163 LINK spdk_bdev 00:02:21.163 LINK nvme_fuzz 00:02:21.163 LINK spdk_nvme_identify 00:02:21.163 CC examples/vmd/led/led.o 00:02:21.163 
CC examples/vmd/lsvmd/lsvmd.o 00:02:21.163 LINK spdk_nvme_perf 00:02:21.163 CC examples/sock/hello_world/hello_sock.o 00:02:21.163 LINK spdk_top 00:02:21.163 CC examples/idxd/perf/perf.o 00:02:21.163 CC test/event/reactor/reactor.o 00:02:21.420 CC test/event/event_perf/event_perf.o 00:02:21.420 CC test/event/reactor_perf/reactor_perf.o 00:02:21.420 CC examples/thread/thread/thread_ex.o 00:02:21.420 CC test/event/app_repeat/app_repeat.o 00:02:21.420 LINK vhost_fuzz 00:02:21.420 CC test/event/scheduler/scheduler.o 00:02:21.420 LINK mem_callbacks 00:02:21.420 CC app/vhost/vhost.o 00:02:21.420 LINK lsvmd 00:02:21.420 LINK led 00:02:21.420 LINK reactor 00:02:21.420 LINK reactor_perf 00:02:21.420 LINK event_perf 00:02:21.420 LINK app_repeat 00:02:21.420 LINK hello_sock 00:02:21.678 LINK thread 00:02:21.678 LINK idxd_perf 00:02:21.678 LINK scheduler 00:02:21.678 CC test/nvme/compliance/nvme_compliance.o 00:02:21.678 CC test/nvme/cuse/cuse.o 00:02:21.678 CC test/nvme/reset/reset.o 00:02:21.678 CC test/nvme/overhead/overhead.o 00:02:21.678 CC test/nvme/aer/aer.o 00:02:21.678 LINK memory_ut 00:02:21.678 CC test/nvme/fused_ordering/fused_ordering.o 00:02:21.678 CC test/nvme/reserve/reserve.o 00:02:21.678 LINK vhost 00:02:21.678 CC test/nvme/boot_partition/boot_partition.o 00:02:21.678 CC test/nvme/connect_stress/connect_stress.o 00:02:21.678 CC test/nvme/startup/startup.o 00:02:21.678 CC test/nvme/sgl/sgl.o 00:02:21.678 CC test/nvme/err_injection/err_injection.o 00:02:21.678 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:21.678 CC test/nvme/e2edp/nvme_dp.o 00:02:21.678 CC test/nvme/fdp/fdp.o 00:02:21.678 CC test/nvme/simple_copy/simple_copy.o 00:02:21.678 CC test/blobfs/mkfs/mkfs.o 00:02:21.678 CC test/accel/dif/dif.o 00:02:21.678 CC test/lvol/esnap/esnap.o 00:02:21.678 LINK boot_partition 00:02:21.678 LINK startup 00:02:21.678 LINK connect_stress 00:02:21.678 LINK err_injection 00:02:21.678 LINK reserve 00:02:21.935 LINK doorbell_aers 00:02:21.935 LINK fused_ordering 
00:02:21.935 LINK simple_copy 00:02:21.935 LINK reset 00:02:21.935 LINK sgl 00:02:21.935 LINK aer 00:02:21.935 LINK overhead 00:02:21.935 LINK nvme_dp 00:02:21.935 LINK nvme_compliance 00:02:21.935 LINK mkfs 00:02:21.935 LINK fdp 00:02:21.935 CC examples/nvme/hello_world/hello_world.o 00:02:21.935 CC examples/nvme/hotplug/hotplug.o 00:02:21.935 CC examples/nvme/arbitration/arbitration.o 00:02:21.935 CC examples/nvme/abort/abort.o 00:02:21.935 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:21.935 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:21.935 CC examples/nvme/reconnect/reconnect.o 00:02:21.935 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:21.935 CC examples/accel/perf/accel_perf.o 00:02:21.935 CC examples/fsdev/hello_world/hello_fsdev.o 00:02:21.935 CC examples/blob/cli/blobcli.o 00:02:22.192 CC examples/blob/hello_world/hello_blob.o 00:02:22.192 LINK hello_world 00:02:22.192 LINK pmr_persistence 00:02:22.192 LINK cmb_copy 00:02:22.192 LINK hotplug 00:02:22.192 LINK dif 00:02:22.192 LINK iscsi_fuzz 00:02:22.192 LINK reconnect 00:02:22.192 LINK arbitration 00:02:22.192 LINK abort 00:02:22.192 LINK hello_fsdev 00:02:22.449 LINK hello_blob 00:02:22.449 LINK nvme_manage 00:02:22.449 LINK accel_perf 00:02:22.449 LINK blobcli 00:02:22.707 LINK cuse 00:02:22.707 CC test/bdev/bdevio/bdevio.o 00:02:22.964 CC examples/bdev/hello_world/hello_bdev.o 00:02:22.964 CC examples/bdev/bdevperf/bdevperf.o 00:02:22.964 LINK bdevio 00:02:22.964 LINK hello_bdev 00:02:23.533 LINK bdevperf 00:02:24.101 CC examples/nvmf/nvmf/nvmf.o 00:02:24.360 LINK nvmf 00:02:25.299 LINK esnap 00:02:25.560 00:02:25.560 real 0m55.169s 00:02:25.560 user 8m0.729s 00:02:25.560 sys 3m38.389s 00:02:25.560 06:58:30 make -- common/autotest_common.sh@1128 -- $ xtrace_disable 00:02:25.560 06:58:30 make -- common/autotest_common.sh@10 -- $ set +x 00:02:25.560 ************************************ 00:02:25.560 END TEST make 00:02:25.560 ************************************ 00:02:25.560 06:58:30 -- 
spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:25.560 06:58:30 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:25.560 06:58:30 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:02:25.560 06:58:30 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:25.560 06:58:30 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:25.560 06:58:30 -- pm/common@44 -- $ pid=914165 00:02:25.560 06:58:30 -- pm/common@50 -- $ kill -TERM 914165 00:02:25.560 06:58:30 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:25.560 06:58:30 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:25.560 06:58:30 -- pm/common@44 -- $ pid=914167 00:02:25.560 06:58:30 -- pm/common@50 -- $ kill -TERM 914167 00:02:25.560 06:58:30 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:25.560 06:58:30 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:25.560 06:58:30 -- pm/common@44 -- $ pid=914170 00:02:25.560 06:58:30 -- pm/common@50 -- $ kill -TERM 914170 00:02:25.560 06:58:30 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:25.560 06:58:30 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:25.560 06:58:30 -- pm/common@44 -- $ pid=914203 00:02:25.560 06:58:30 -- pm/common@50 -- $ sudo -E kill -TERM 914203 00:02:25.820 06:58:30 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:02:25.821 06:58:30 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:02:25.821 06:58:30 -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:02:25.821 06:58:30 -- common/autotest_common.sh@1691 -- # lcov --version 
00:02:25.821 06:58:30 -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:02:25.821 06:58:30 -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:02:25.821 06:58:30 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:02:25.821 06:58:30 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:02:25.821 06:58:30 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:02:25.821 06:58:30 -- scripts/common.sh@336 -- # IFS=.-: 00:02:25.821 06:58:30 -- scripts/common.sh@336 -- # read -ra ver1 00:02:25.821 06:58:30 -- scripts/common.sh@337 -- # IFS=.-: 00:02:25.821 06:58:30 -- scripts/common.sh@337 -- # read -ra ver2 00:02:25.821 06:58:30 -- scripts/common.sh@338 -- # local 'op=<' 00:02:25.821 06:58:30 -- scripts/common.sh@340 -- # ver1_l=2 00:02:25.821 06:58:30 -- scripts/common.sh@341 -- # ver2_l=1 00:02:25.821 06:58:30 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:02:25.821 06:58:30 -- scripts/common.sh@344 -- # case "$op" in 00:02:25.821 06:58:30 -- scripts/common.sh@345 -- # : 1 00:02:25.821 06:58:30 -- scripts/common.sh@364 -- # (( v = 0 )) 00:02:25.821 06:58:30 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:25.821 06:58:30 -- scripts/common.sh@365 -- # decimal 1 00:02:25.821 06:58:30 -- scripts/common.sh@353 -- # local d=1 00:02:25.821 06:58:30 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:02:25.821 06:58:30 -- scripts/common.sh@355 -- # echo 1 00:02:25.821 06:58:30 -- scripts/common.sh@365 -- # ver1[v]=1 00:02:25.821 06:58:30 -- scripts/common.sh@366 -- # decimal 2 00:02:25.821 06:58:30 -- scripts/common.sh@353 -- # local d=2 00:02:25.821 06:58:30 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:02:25.821 06:58:30 -- scripts/common.sh@355 -- # echo 2 00:02:25.821 06:58:30 -- scripts/common.sh@366 -- # ver2[v]=2 00:02:25.821 06:58:30 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:02:25.821 06:58:30 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:02:25.821 06:58:30 -- scripts/common.sh@368 -- # return 0 00:02:25.821 06:58:30 -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:02:25.821 06:58:30 -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:02:25.821 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:25.821 --rc genhtml_branch_coverage=1 00:02:25.821 --rc genhtml_function_coverage=1 00:02:25.821 --rc genhtml_legend=1 00:02:25.821 --rc geninfo_all_blocks=1 00:02:25.821 --rc geninfo_unexecuted_blocks=1 00:02:25.821 00:02:25.821 ' 00:02:25.821 06:58:30 -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:02:25.821 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:25.821 --rc genhtml_branch_coverage=1 00:02:25.821 --rc genhtml_function_coverage=1 00:02:25.821 --rc genhtml_legend=1 00:02:25.821 --rc geninfo_all_blocks=1 00:02:25.821 --rc geninfo_unexecuted_blocks=1 00:02:25.821 00:02:25.821 ' 00:02:25.821 06:58:30 -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:02:25.821 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:25.821 --rc genhtml_branch_coverage=1 00:02:25.821 --rc 
genhtml_function_coverage=1 00:02:25.821 --rc genhtml_legend=1 00:02:25.821 --rc geninfo_all_blocks=1 00:02:25.821 --rc geninfo_unexecuted_blocks=1 00:02:25.821 00:02:25.821 ' 00:02:25.821 06:58:30 -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:02:25.821 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:25.821 --rc genhtml_branch_coverage=1 00:02:25.821 --rc genhtml_function_coverage=1 00:02:25.821 --rc genhtml_legend=1 00:02:25.821 --rc geninfo_all_blocks=1 00:02:25.821 --rc geninfo_unexecuted_blocks=1 00:02:25.821 00:02:25.821 ' 00:02:25.821 06:58:30 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:02:25.821 06:58:30 -- nvmf/common.sh@7 -- # uname -s 00:02:25.821 06:58:30 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:25.821 06:58:30 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:25.821 06:58:30 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:25.821 06:58:30 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:25.821 06:58:30 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:25.821 06:58:30 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:25.821 06:58:30 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:25.821 06:58:30 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:25.821 06:58:30 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:25.821 06:58:30 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:25.821 06:58:30 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:02:25.821 06:58:30 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:02:25.821 06:58:30 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:25.821 06:58:30 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:25.821 06:58:30 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:02:25.821 06:58:30 -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:25.821 06:58:30 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:25.821 06:58:30 -- scripts/common.sh@15 -- # shopt -s extglob 00:02:25.821 06:58:30 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:25.821 06:58:30 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:25.821 06:58:30 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:25.821 06:58:30 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:25.821 06:58:30 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:25.821 06:58:30 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:25.821 06:58:30 -- paths/export.sh@5 -- # export PATH 00:02:25.821 06:58:30 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:25.821 06:58:30 -- nvmf/common.sh@51 -- # : 0 00:02:25.821 06:58:30 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:02:25.821 06:58:30 -- nvmf/common.sh@53 -- # 
build_nvmf_app_args 00:02:25.821 06:58:30 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:25.821 06:58:30 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:25.821 06:58:30 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:25.821 06:58:30 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:02:25.821 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:02:25.821 06:58:30 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:02:25.821 06:58:30 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:02:25.821 06:58:30 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:02:25.821 06:58:30 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:25.821 06:58:30 -- spdk/autotest.sh@32 -- # uname -s 00:02:25.821 06:58:30 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:25.821 06:58:30 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:25.821 06:58:30 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:25.821 06:58:30 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:25.821 06:58:30 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:25.821 06:58:30 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:25.821 06:58:30 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:25.821 06:58:30 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:25.821 06:58:30 -- spdk/autotest.sh@48 -- # udevadm_pid=976658 00:02:25.821 06:58:30 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:25.821 06:58:30 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:25.821 06:58:30 -- pm/common@17 -- # local monitor 00:02:25.821 06:58:30 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:25.821 06:58:30 -- pm/common@19 -- # for monitor in 
"${MONITOR_RESOURCES[@]}" 00:02:25.821 06:58:30 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:25.821 06:58:30 -- pm/common@21 -- # date +%s 00:02:25.821 06:58:30 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:25.821 06:58:30 -- pm/common@21 -- # date +%s 00:02:25.821 06:58:30 -- pm/common@25 -- # sleep 1 00:02:25.821 06:58:30 -- pm/common@21 -- # date +%s 00:02:25.821 06:58:30 -- pm/common@21 -- # date +%s 00:02:25.821 06:58:30 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732082310 00:02:25.821 06:58:30 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732082310 00:02:25.821 06:58:30 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732082310 00:02:25.821 06:58:30 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732082310 00:02:26.082 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732082310_collect-vmstat.pm.log 00:02:26.082 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732082310_collect-cpu-load.pm.log 00:02:26.082 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732082310_collect-cpu-temp.pm.log 00:02:26.082 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732082310_collect-bmc-pm.bmc.pm.log 00:02:27.020 
06:58:31 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:27.020 06:58:31 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:27.020 06:58:31 -- common/autotest_common.sh@724 -- # xtrace_disable 00:02:27.020 06:58:31 -- common/autotest_common.sh@10 -- # set +x 00:02:27.020 06:58:31 -- spdk/autotest.sh@59 -- # create_test_list 00:02:27.020 06:58:31 -- common/autotest_common.sh@750 -- # xtrace_disable 00:02:27.020 06:58:31 -- common/autotest_common.sh@10 -- # set +x 00:02:27.020 06:58:31 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:02:27.020 06:58:31 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:27.020 06:58:31 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:27.020 06:58:31 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:27.020 06:58:31 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:27.020 06:58:31 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:27.020 06:58:31 -- common/autotest_common.sh@1455 -- # uname 00:02:27.020 06:58:31 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:02:27.020 06:58:31 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:27.020 06:58:31 -- common/autotest_common.sh@1475 -- # uname 00:02:27.020 06:58:31 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:02:27.020 06:58:31 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:02:27.020 06:58:31 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:02:27.020 lcov: LCOV version 1.15 00:02:27.020 06:58:31 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:02:39.246 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:02:39.246 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:02:54.141 06:58:56 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:02:54.141 06:58:56 -- common/autotest_common.sh@724 -- # xtrace_disable 00:02:54.141 06:58:56 -- common/autotest_common.sh@10 -- # set +x 00:02:54.141 06:58:56 -- spdk/autotest.sh@78 -- # rm -f 00:02:54.141 06:58:56 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:55.081 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:02:55.081 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:02:55.081 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:02:55.081 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:02:55.081 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:02:55.081 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:02:55.081 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:02:55.081 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:02:55.081 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:02:55.081 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:02:55.081 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:02:55.081 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:02:55.081 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:02:55.081 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:02:55.081 
0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:02:55.081 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:02:55.340 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:02:55.340 06:58:59 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:02:55.340 06:58:59 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:02:55.340 06:58:59 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:02:55.340 06:58:59 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:02:55.340 06:58:59 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:02:55.340 06:58:59 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:02:55.340 06:58:59 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:02:55.340 06:58:59 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:55.340 06:58:59 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:02:55.340 06:58:59 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:02:55.340 06:58:59 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:02:55.340 06:58:59 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:02:55.340 06:58:59 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:02:55.340 06:58:59 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:02:55.340 06:58:59 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:02:55.340 No valid GPT data, bailing 00:02:55.340 06:58:59 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:02:55.340 06:58:59 -- scripts/common.sh@394 -- # pt= 00:02:55.340 06:58:59 -- scripts/common.sh@395 -- # return 1 00:02:55.340 06:58:59 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:02:55.340 1+0 records in 00:02:55.340 1+0 records out 00:02:55.340 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00185534 s, 565 MB/s 00:02:55.340 06:58:59 -- spdk/autotest.sh@105 -- # sync 00:02:55.340 06:58:59 -- 
spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:02:55.340 06:58:59 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:02:55.340 06:58:59 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:01.919 06:59:05 -- spdk/autotest.sh@111 -- # uname -s 00:03:01.919 06:59:05 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:03:01.919 06:59:05 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:03:01.919 06:59:05 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:03.833 Hugepages 00:03:03.833 node hugesize free / total 00:03:03.833 node0 1048576kB 0 / 0 00:03:03.833 node0 2048kB 0 / 0 00:03:03.833 node1 1048576kB 0 / 0 00:03:03.833 node1 2048kB 0 / 0 00:03:03.833 00:03:03.833 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:03.833 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:03:03.833 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:03:03.833 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:03:03.833 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:03:03.833 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:03:03.833 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:03:03.833 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:03:03.833 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:03:03.833 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:03:03.833 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:03:03.833 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:03:03.833 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:03:03.833 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:03:03.833 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:03:03.833 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:03:03.833 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:03:03.833 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:03:03.833 06:59:08 -- spdk/autotest.sh@117 -- # uname -s 00:03:03.833 06:59:08 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:03:03.833 06:59:08 -- spdk/autotest.sh@119 -- # 
nvme_namespace_revert 00:03:03.833 06:59:08 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:07.130 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:07.130 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:07.130 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:07.130 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:07.130 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:07.130 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:07.130 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:07.130 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:07.130 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:07.130 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:07.130 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:07.130 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:07.130 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:07.130 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:07.130 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:07.130 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:07.700 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:03:07.700 06:59:12 -- common/autotest_common.sh@1515 -- # sleep 1 00:03:08.641 06:59:13 -- common/autotest_common.sh@1516 -- # bdfs=() 00:03:08.641 06:59:13 -- common/autotest_common.sh@1516 -- # local bdfs 00:03:08.641 06:59:13 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:03:08.641 06:59:13 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:03:08.641 06:59:13 -- common/autotest_common.sh@1496 -- # bdfs=() 00:03:08.641 06:59:13 -- common/autotest_common.sh@1496 -- # local bdfs 00:03:08.641 06:59:13 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:08.641 06:59:13 -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:08.641 06:59:13 -- common/autotest_common.sh@1497 -- # jq -r 
'.config[].params.traddr' 00:03:08.901 06:59:13 -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:03:08.901 06:59:13 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:5e:00.0 00:03:08.901 06:59:13 -- common/autotest_common.sh@1520 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:11.444 Waiting for block devices as requested 00:03:11.704 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:03:11.704 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:03:11.704 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:03:11.975 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:03:11.975 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:03:11.975 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:03:12.235 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:03:12.235 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:03:12.235 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:03:12.235 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:03:12.495 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:03:12.495 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:03:12.495 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:03:12.754 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:03:12.754 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:03:12.754 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:03:13.014 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:03:13.014 06:59:17 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:03:13.014 06:59:17 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:5e:00.0 00:03:13.014 06:59:17 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 00:03:13.014 06:59:17 -- common/autotest_common.sh@1485 -- # grep 0000:5e:00.0/nvme/nvme 00:03:13.014 06:59:17 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:03:13.014 06:59:17 -- common/autotest_common.sh@1486 -- # [[ -z 
/sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 ]] 00:03:13.014 06:59:17 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:03:13.014 06:59:17 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:03:13.014 06:59:17 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:03:13.014 06:59:17 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:03:13.014 06:59:17 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:03:13.014 06:59:17 -- common/autotest_common.sh@1529 -- # grep oacs 00:03:13.014 06:59:17 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:03:13.014 06:59:17 -- common/autotest_common.sh@1529 -- # oacs=' 0xe' 00:03:13.014 06:59:17 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:03:13.014 06:59:17 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:03:13.014 06:59:17 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme0 00:03:13.014 06:59:17 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:03:13.014 06:59:17 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:03:13.014 06:59:17 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:03:13.014 06:59:17 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:03:13.014 06:59:17 -- common/autotest_common.sh@1541 -- # continue 00:03:13.014 06:59:17 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:03:13.014 06:59:17 -- common/autotest_common.sh@730 -- # xtrace_disable 00:03:13.014 06:59:17 -- common/autotest_common.sh@10 -- # set +x 00:03:13.014 06:59:17 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:03:13.015 06:59:17 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:13.015 06:59:17 -- common/autotest_common.sh@10 -- # set +x 00:03:13.015 06:59:17 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:16.308 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:16.308 0000:00:04.6 (8086 2021): 
ioatdma -> vfio-pci 00:03:16.308 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:16.308 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:16.308 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:16.308 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:16.308 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:16.309 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:16.309 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:16.309 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:16.309 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:16.309 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:16.309 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:16.309 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:16.309 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:16.309 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:16.944 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:03:16.944 06:59:21 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:03:16.944 06:59:21 -- common/autotest_common.sh@730 -- # xtrace_disable 00:03:16.944 06:59:21 -- common/autotest_common.sh@10 -- # set +x 00:03:16.944 06:59:21 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:03:16.944 06:59:21 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:03:16.944 06:59:21 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:03:16.944 06:59:21 -- common/autotest_common.sh@1561 -- # bdfs=() 00:03:16.944 06:59:21 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:03:16.944 06:59:21 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:03:16.944 06:59:21 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:03:16.944 06:59:21 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:03:16.944 06:59:21 -- common/autotest_common.sh@1496 -- # bdfs=() 00:03:16.944 06:59:21 -- common/autotest_common.sh@1496 -- # local bdfs 00:03:16.945 06:59:21 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r 
'.config[].params.traddr')) 00:03:16.945 06:59:21 -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:16.945 06:59:21 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:03:17.230 06:59:21 -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:03:17.230 06:59:21 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:5e:00.0 00:03:17.230 06:59:21 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:03:17.230 06:59:21 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:5e:00.0/device 00:03:17.230 06:59:21 -- common/autotest_common.sh@1564 -- # device=0x0a54 00:03:17.230 06:59:21 -- common/autotest_common.sh@1565 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:03:17.230 06:59:21 -- common/autotest_common.sh@1566 -- # bdfs+=($bdf) 00:03:17.230 06:59:21 -- common/autotest_common.sh@1570 -- # (( 1 > 0 )) 00:03:17.230 06:59:21 -- common/autotest_common.sh@1571 -- # printf '%s\n' 0000:5e:00.0 00:03:17.230 06:59:21 -- common/autotest_common.sh@1577 -- # [[ -z 0000:5e:00.0 ]] 00:03:17.230 06:59:21 -- common/autotest_common.sh@1582 -- # spdk_tgt_pid=991081 00:03:17.230 06:59:21 -- common/autotest_common.sh@1583 -- # waitforlisten 991081 00:03:17.230 06:59:21 -- common/autotest_common.sh@1581 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:17.230 06:59:21 -- common/autotest_common.sh@833 -- # '[' -z 991081 ']' 00:03:17.230 06:59:21 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:17.230 06:59:21 -- common/autotest_common.sh@838 -- # local max_retries=100 00:03:17.230 06:59:21 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:17.230 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:03:17.230 06:59:21 -- common/autotest_common.sh@842 -- # xtrace_disable 00:03:17.230 06:59:21 -- common/autotest_common.sh@10 -- # set +x 00:03:17.230 [2024-11-20 06:59:21.593251] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 00:03:17.230 [2024-11-20 06:59:21.593304] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid991081 ] 00:03:17.230 [2024-11-20 06:59:21.671958] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:17.230 [2024-11-20 06:59:21.713073] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:17.509 06:59:21 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:03:17.509 06:59:21 -- common/autotest_common.sh@866 -- # return 0 00:03:17.509 06:59:21 -- common/autotest_common.sh@1585 -- # bdf_id=0 00:03:17.509 06:59:21 -- common/autotest_common.sh@1586 -- # for bdf in "${bdfs[@]}" 00:03:17.509 06:59:21 -- common/autotest_common.sh@1587 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:5e:00.0 00:03:20.877 nvme0n1 00:03:20.877 06:59:24 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:03:20.877 [2024-11-20 06:59:25.118886] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:03:20.877 request: 00:03:20.877 { 00:03:20.877 "nvme_ctrlr_name": "nvme0", 00:03:20.877 "password": "test", 00:03:20.877 "method": "bdev_nvme_opal_revert", 00:03:20.877 "req_id": 1 00:03:20.877 } 00:03:20.877 Got JSON-RPC error response 00:03:20.877 response: 00:03:20.878 { 00:03:20.878 "code": -32602, 00:03:20.878 "message": "Invalid parameters" 00:03:20.878 } 00:03:20.878 06:59:25 -- common/autotest_common.sh@1589 -- # true 
00:03:20.878 06:59:25 -- common/autotest_common.sh@1590 -- # (( ++bdf_id )) 00:03:20.878 06:59:25 -- common/autotest_common.sh@1593 -- # killprocess 991081 00:03:20.878 06:59:25 -- common/autotest_common.sh@952 -- # '[' -z 991081 ']' 00:03:20.878 06:59:25 -- common/autotest_common.sh@956 -- # kill -0 991081 00:03:20.878 06:59:25 -- common/autotest_common.sh@957 -- # uname 00:03:20.878 06:59:25 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:03:20.878 06:59:25 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 991081 00:03:20.878 06:59:25 -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:03:20.878 06:59:25 -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:03:20.878 06:59:25 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 991081' 00:03:20.878 killing process with pid 991081 00:03:20.878 06:59:25 -- common/autotest_common.sh@971 -- # kill 991081 00:03:20.878 06:59:25 -- common/autotest_common.sh@976 -- # wait 991081 00:03:22.780 06:59:26 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:03:22.780 06:59:26 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:03:22.780 06:59:26 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:03:22.780 06:59:26 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:03:22.780 06:59:26 -- spdk/autotest.sh@149 -- # timing_enter lib 00:03:22.780 06:59:26 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:22.780 06:59:26 -- common/autotest_common.sh@10 -- # set +x 00:03:22.780 06:59:26 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:03:22.780 06:59:26 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:03:22.780 06:59:26 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:22.780 06:59:26 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:22.780 06:59:26 -- common/autotest_common.sh@10 -- # set +x 00:03:22.780 ************************************ 00:03:22.780 START TEST env 00:03:22.780 
************************************ 00:03:22.780 06:59:26 env -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:03:22.781 * Looking for test storage... 00:03:22.781 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:03:22.781 06:59:26 env -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:03:22.781 06:59:26 env -- common/autotest_common.sh@1691 -- # lcov --version 00:03:22.781 06:59:26 env -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:03:22.781 06:59:27 env -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:03:22.781 06:59:27 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:22.781 06:59:27 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:22.781 06:59:27 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:22.781 06:59:27 env -- scripts/common.sh@336 -- # IFS=.-: 00:03:22.781 06:59:27 env -- scripts/common.sh@336 -- # read -ra ver1 00:03:22.781 06:59:27 env -- scripts/common.sh@337 -- # IFS=.-: 00:03:22.781 06:59:27 env -- scripts/common.sh@337 -- # read -ra ver2 00:03:22.781 06:59:27 env -- scripts/common.sh@338 -- # local 'op=<' 00:03:22.781 06:59:27 env -- scripts/common.sh@340 -- # ver1_l=2 00:03:22.781 06:59:27 env -- scripts/common.sh@341 -- # ver2_l=1 00:03:22.781 06:59:27 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:22.781 06:59:27 env -- scripts/common.sh@344 -- # case "$op" in 00:03:22.781 06:59:27 env -- scripts/common.sh@345 -- # : 1 00:03:22.781 06:59:27 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:22.781 06:59:27 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:22.781 06:59:27 env -- scripts/common.sh@365 -- # decimal 1 00:03:22.781 06:59:27 env -- scripts/common.sh@353 -- # local d=1 00:03:22.781 06:59:27 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:22.781 06:59:27 env -- scripts/common.sh@355 -- # echo 1 00:03:22.781 06:59:27 env -- scripts/common.sh@365 -- # ver1[v]=1 00:03:22.781 06:59:27 env -- scripts/common.sh@366 -- # decimal 2 00:03:22.781 06:59:27 env -- scripts/common.sh@353 -- # local d=2 00:03:22.781 06:59:27 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:22.781 06:59:27 env -- scripts/common.sh@355 -- # echo 2 00:03:22.781 06:59:27 env -- scripts/common.sh@366 -- # ver2[v]=2 00:03:22.781 06:59:27 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:22.781 06:59:27 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:22.781 06:59:27 env -- scripts/common.sh@368 -- # return 0 00:03:22.781 06:59:27 env -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:22.781 06:59:27 env -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:03:22.781 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:22.781 --rc genhtml_branch_coverage=1 00:03:22.781 --rc genhtml_function_coverage=1 00:03:22.781 --rc genhtml_legend=1 00:03:22.781 --rc geninfo_all_blocks=1 00:03:22.781 --rc geninfo_unexecuted_blocks=1 00:03:22.781 00:03:22.781 ' 00:03:22.781 06:59:27 env -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:03:22.781 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:22.781 --rc genhtml_branch_coverage=1 00:03:22.781 --rc genhtml_function_coverage=1 00:03:22.781 --rc genhtml_legend=1 00:03:22.781 --rc geninfo_all_blocks=1 00:03:22.781 --rc geninfo_unexecuted_blocks=1 00:03:22.781 00:03:22.781 ' 00:03:22.781 06:59:27 env -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:03:22.781 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:03:22.781 --rc genhtml_branch_coverage=1 00:03:22.781 --rc genhtml_function_coverage=1 00:03:22.781 --rc genhtml_legend=1 00:03:22.781 --rc geninfo_all_blocks=1 00:03:22.781 --rc geninfo_unexecuted_blocks=1 00:03:22.781 00:03:22.781 ' 00:03:22.781 06:59:27 env -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:03:22.781 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:22.781 --rc genhtml_branch_coverage=1 00:03:22.781 --rc genhtml_function_coverage=1 00:03:22.781 --rc genhtml_legend=1 00:03:22.781 --rc geninfo_all_blocks=1 00:03:22.781 --rc geninfo_unexecuted_blocks=1 00:03:22.781 00:03:22.781 ' 00:03:22.781 06:59:27 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:03:22.781 06:59:27 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:22.781 06:59:27 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:22.781 06:59:27 env -- common/autotest_common.sh@10 -- # set +x 00:03:22.781 ************************************ 00:03:22.781 START TEST env_memory 00:03:22.781 ************************************ 00:03:22.781 06:59:27 env.env_memory -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:03:22.781 00:03:22.781 00:03:22.781 CUnit - A unit testing framework for C - Version 2.1-3 00:03:22.781 http://cunit.sourceforge.net/ 00:03:22.781 00:03:22.781 00:03:22.781 Suite: memory 00:03:22.781 Test: alloc and free memory map ...[2024-11-20 06:59:27.137043] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:03:22.781 passed 00:03:22.781 Test: mem map translation ...[2024-11-20 06:59:27.156834] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:03:22.781 [2024-11-20 
06:59:27.156850] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:03:22.781 [2024-11-20 06:59:27.156887] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:03:22.781 [2024-11-20 06:59:27.156894] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:03:22.781 passed 00:03:22.781 Test: mem map registration ...[2024-11-20 06:59:27.195244] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:03:22.781 [2024-11-20 06:59:27.195266] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:03:22.781 passed 00:03:22.781 Test: mem map adjacent registrations ...passed 00:03:22.781 00:03:22.781 Run Summary: Type Total Ran Passed Failed Inactive 00:03:22.781 suites 1 1 n/a 0 0 00:03:22.781 tests 4 4 4 0 0 00:03:22.781 asserts 152 152 152 0 n/a 00:03:22.781 00:03:22.781 Elapsed time = 0.137 seconds 00:03:22.781 00:03:22.781 real 0m0.150s 00:03:22.781 user 0m0.140s 00:03:22.781 sys 0m0.010s 00:03:22.781 06:59:27 env.env_memory -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:22.781 06:59:27 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:03:22.781 ************************************ 00:03:22.781 END TEST env_memory 00:03:22.781 ************************************ 00:03:22.781 06:59:27 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:22.781 06:59:27 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 
']' 00:03:22.781 06:59:27 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:22.781 06:59:27 env -- common/autotest_common.sh@10 -- # set +x 00:03:22.781 ************************************ 00:03:22.781 START TEST env_vtophys 00:03:22.781 ************************************ 00:03:22.781 06:59:27 env.env_vtophys -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:23.041 EAL: lib.eal log level changed from notice to debug 00:03:23.041 EAL: Detected lcore 0 as core 0 on socket 0 00:03:23.041 EAL: Detected lcore 1 as core 1 on socket 0 00:03:23.041 EAL: Detected lcore 2 as core 2 on socket 0 00:03:23.041 EAL: Detected lcore 3 as core 3 on socket 0 00:03:23.041 EAL: Detected lcore 4 as core 4 on socket 0 00:03:23.041 EAL: Detected lcore 5 as core 5 on socket 0 00:03:23.041 EAL: Detected lcore 6 as core 6 on socket 0 00:03:23.041 EAL: Detected lcore 7 as core 8 on socket 0 00:03:23.041 EAL: Detected lcore 8 as core 9 on socket 0 00:03:23.041 EAL: Detected lcore 9 as core 10 on socket 0 00:03:23.041 EAL: Detected lcore 10 as core 11 on socket 0 00:03:23.041 EAL: Detected lcore 11 as core 12 on socket 0 00:03:23.041 EAL: Detected lcore 12 as core 13 on socket 0 00:03:23.041 EAL: Detected lcore 13 as core 16 on socket 0 00:03:23.041 EAL: Detected lcore 14 as core 17 on socket 0 00:03:23.041 EAL: Detected lcore 15 as core 18 on socket 0 00:03:23.041 EAL: Detected lcore 16 as core 19 on socket 0 00:03:23.041 EAL: Detected lcore 17 as core 20 on socket 0 00:03:23.041 EAL: Detected lcore 18 as core 21 on socket 0 00:03:23.041 EAL: Detected lcore 19 as core 25 on socket 0 00:03:23.041 EAL: Detected lcore 20 as core 26 on socket 0 00:03:23.041 EAL: Detected lcore 21 as core 27 on socket 0 00:03:23.041 EAL: Detected lcore 22 as core 28 on socket 0 00:03:23.041 EAL: Detected lcore 23 as core 29 on socket 0 00:03:23.041 EAL: Detected lcore 24 as core 0 on socket 1 00:03:23.042 EAL: Detected lcore 25 
as core 1 on socket 1 00:03:23.042 EAL: Detected lcore 26 as core 2 on socket 1 00:03:23.042 EAL: Detected lcore 27 as core 3 on socket 1 00:03:23.042 EAL: Detected lcore 28 as core 4 on socket 1 00:03:23.042 EAL: Detected lcore 29 as core 5 on socket 1 00:03:23.042 EAL: Detected lcore 30 as core 6 on socket 1 00:03:23.042 EAL: Detected lcore 31 as core 9 on socket 1 00:03:23.042 EAL: Detected lcore 32 as core 10 on socket 1 00:03:23.042 EAL: Detected lcore 33 as core 11 on socket 1 00:03:23.042 EAL: Detected lcore 34 as core 12 on socket 1 00:03:23.042 EAL: Detected lcore 35 as core 13 on socket 1 00:03:23.042 EAL: Detected lcore 36 as core 16 on socket 1 00:03:23.042 EAL: Detected lcore 37 as core 17 on socket 1 00:03:23.042 EAL: Detected lcore 38 as core 18 on socket 1 00:03:23.042 EAL: Detected lcore 39 as core 19 on socket 1 00:03:23.042 EAL: Detected lcore 40 as core 20 on socket 1 00:03:23.042 EAL: Detected lcore 41 as core 21 on socket 1 00:03:23.042 EAL: Detected lcore 42 as core 24 on socket 1 00:03:23.042 EAL: Detected lcore 43 as core 25 on socket 1 00:03:23.042 EAL: Detected lcore 44 as core 26 on socket 1 00:03:23.042 EAL: Detected lcore 45 as core 27 on socket 1 00:03:23.042 EAL: Detected lcore 46 as core 28 on socket 1 00:03:23.042 EAL: Detected lcore 47 as core 29 on socket 1 00:03:23.042 EAL: Detected lcore 48 as core 0 on socket 0 00:03:23.042 EAL: Detected lcore 49 as core 1 on socket 0 00:03:23.042 EAL: Detected lcore 50 as core 2 on socket 0 00:03:23.042 EAL: Detected lcore 51 as core 3 on socket 0 00:03:23.042 EAL: Detected lcore 52 as core 4 on socket 0 00:03:23.042 EAL: Detected lcore 53 as core 5 on socket 0 00:03:23.042 EAL: Detected lcore 54 as core 6 on socket 0 00:03:23.042 EAL: Detected lcore 55 as core 8 on socket 0 00:03:23.042 EAL: Detected lcore 56 as core 9 on socket 0 00:03:23.042 EAL: Detected lcore 57 as core 10 on socket 0 00:03:23.042 EAL: Detected lcore 58 as core 11 on socket 0 00:03:23.042 EAL: Detected lcore 59 as core 
12 on socket 0 00:03:23.042 EAL: Detected lcore 60 as core 13 on socket 0 00:03:23.042 EAL: Detected lcore 61 as core 16 on socket 0 00:03:23.042 EAL: Detected lcore 62 as core 17 on socket 0 00:03:23.042 EAL: Detected lcore 63 as core 18 on socket 0 00:03:23.042 EAL: Detected lcore 64 as core 19 on socket 0 00:03:23.042 EAL: Detected lcore 65 as core 20 on socket 0 00:03:23.042 EAL: Detected lcore 66 as core 21 on socket 0 00:03:23.042 EAL: Detected lcore 67 as core 25 on socket 0 00:03:23.042 EAL: Detected lcore 68 as core 26 on socket 0 00:03:23.042 EAL: Detected lcore 69 as core 27 on socket 0 00:03:23.042 EAL: Detected lcore 70 as core 28 on socket 0 00:03:23.042 EAL: Detected lcore 71 as core 29 on socket 0 00:03:23.042 EAL: Detected lcore 72 as core 0 on socket 1 00:03:23.042 EAL: Detected lcore 73 as core 1 on socket 1 00:03:23.042 EAL: Detected lcore 74 as core 2 on socket 1 00:03:23.042 EAL: Detected lcore 75 as core 3 on socket 1 00:03:23.042 EAL: Detected lcore 76 as core 4 on socket 1 00:03:23.042 EAL: Detected lcore 77 as core 5 on socket 1 00:03:23.042 EAL: Detected lcore 78 as core 6 on socket 1 00:03:23.042 EAL: Detected lcore 79 as core 9 on socket 1 00:03:23.042 EAL: Detected lcore 80 as core 10 on socket 1 00:03:23.042 EAL: Detected lcore 81 as core 11 on socket 1 00:03:23.042 EAL: Detected lcore 82 as core 12 on socket 1 00:03:23.042 EAL: Detected lcore 83 as core 13 on socket 1 00:03:23.042 EAL: Detected lcore 84 as core 16 on socket 1 00:03:23.042 EAL: Detected lcore 85 as core 17 on socket 1 00:03:23.042 EAL: Detected lcore 86 as core 18 on socket 1 00:03:23.042 EAL: Detected lcore 87 as core 19 on socket 1 00:03:23.042 EAL: Detected lcore 88 as core 20 on socket 1 00:03:23.042 EAL: Detected lcore 89 as core 21 on socket 1 00:03:23.042 EAL: Detected lcore 90 as core 24 on socket 1 00:03:23.042 EAL: Detected lcore 91 as core 25 on socket 1 00:03:23.042 EAL: Detected lcore 92 as core 26 on socket 1 00:03:23.042 EAL: Detected lcore 93 as core 
27 on socket 1 00:03:23.042 EAL: Detected lcore 94 as core 28 on socket 1 00:03:23.042 EAL: Detected lcore 95 as core 29 on socket 1 00:03:23.042 EAL: Maximum logical cores by configuration: 128 00:03:23.042 EAL: Detected CPU lcores: 96 00:03:23.042 EAL: Detected NUMA nodes: 2 00:03:23.042 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:03:23.042 EAL: Detected shared linkage of DPDK 00:03:23.042 EAL: No shared files mode enabled, IPC will be disabled 00:03:23.042 EAL: Bus pci wants IOVA as 'DC' 00:03:23.042 EAL: Buses did not request a specific IOVA mode. 00:03:23.042 EAL: IOMMU is available, selecting IOVA as VA mode. 00:03:23.042 EAL: Selected IOVA mode 'VA' 00:03:23.042 EAL: Probing VFIO support... 00:03:23.042 EAL: IOMMU type 1 (Type 1) is supported 00:03:23.042 EAL: IOMMU type 7 (sPAPR) is not supported 00:03:23.042 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:03:23.042 EAL: VFIO support initialized 00:03:23.042 EAL: Ask a virtual area of 0x2e000 bytes 00:03:23.042 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:03:23.042 EAL: Setting up physically contiguous memory... 
00:03:23.042 EAL: Setting maximum number of open files to 524288 00:03:23.042 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:03:23.042 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:03:23.042 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:03:23.042 EAL: Ask a virtual area of 0x61000 bytes 00:03:23.042 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:03:23.042 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:23.042 EAL: Ask a virtual area of 0x400000000 bytes 00:03:23.042 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:03:23.042 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:03:23.042 EAL: Ask a virtual area of 0x61000 bytes 00:03:23.042 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:03:23.042 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:23.042 EAL: Ask a virtual area of 0x400000000 bytes 00:03:23.042 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:03:23.042 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:03:23.042 EAL: Ask a virtual area of 0x61000 bytes 00:03:23.042 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:03:23.042 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:23.042 EAL: Ask a virtual area of 0x400000000 bytes 00:03:23.042 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:03:23.042 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:03:23.042 EAL: Ask a virtual area of 0x61000 bytes 00:03:23.042 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:03:23.042 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:23.042 EAL: Ask a virtual area of 0x400000000 bytes 00:03:23.042 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:03:23.042 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:03:23.042 EAL: Creating 4 segment lists: n_segs:8192 
socket_id:1 hugepage_sz:2097152 00:03:23.042 EAL: Ask a virtual area of 0x61000 bytes 00:03:23.042 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:03:23.042 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:23.042 EAL: Ask a virtual area of 0x400000000 bytes 00:03:23.042 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:03:23.042 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:03:23.042 EAL: Ask a virtual area of 0x61000 bytes 00:03:23.042 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:03:23.042 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:23.042 EAL: Ask a virtual area of 0x400000000 bytes 00:03:23.042 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:03:23.042 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:03:23.042 EAL: Ask a virtual area of 0x61000 bytes 00:03:23.042 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:03:23.042 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:23.042 EAL: Ask a virtual area of 0x400000000 bytes 00:03:23.042 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:03:23.042 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:03:23.042 EAL: Ask a virtual area of 0x61000 bytes 00:03:23.042 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:03:23.042 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:23.042 EAL: Ask a virtual area of 0x400000000 bytes 00:03:23.042 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:03:23.042 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:03:23.042 EAL: Hugepages will be freed exactly as allocated. 
00:03:23.042 EAL: No shared files mode enabled, IPC is disabled 00:03:23.042 EAL: No shared files mode enabled, IPC is disabled 00:03:23.042 EAL: TSC frequency is ~2300000 KHz 00:03:23.042 EAL: Main lcore 0 is ready (tid=7f70d02e3a00;cpuset=[0]) 00:03:23.042 EAL: Trying to obtain current memory policy. 00:03:23.042 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:23.042 EAL: Restoring previous memory policy: 0 00:03:23.042 EAL: request: mp_malloc_sync 00:03:23.042 EAL: No shared files mode enabled, IPC is disabled 00:03:23.042 EAL: Heap on socket 0 was expanded by 2MB 00:03:23.042 EAL: No shared files mode enabled, IPC is disabled 00:03:23.042 EAL: No PCI address specified using 'addr=' in: bus=pci 00:03:23.042 EAL: Mem event callback 'spdk:(nil)' registered 00:03:23.042 00:03:23.042 00:03:23.042 CUnit - A unit testing framework for C - Version 2.1-3 00:03:23.042 http://cunit.sourceforge.net/ 00:03:23.042 00:03:23.042 00:03:23.042 Suite: components_suite 00:03:23.042 Test: vtophys_malloc_test ...passed 00:03:23.042 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:03:23.042 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:23.042 EAL: Restoring previous memory policy: 4 00:03:23.042 EAL: Calling mem event callback 'spdk:(nil)' 00:03:23.042 EAL: request: mp_malloc_sync 00:03:23.042 EAL: No shared files mode enabled, IPC is disabled 00:03:23.042 EAL: Heap on socket 0 was expanded by 4MB 00:03:23.042 EAL: Calling mem event callback 'spdk:(nil)' 00:03:23.042 EAL: request: mp_malloc_sync 00:03:23.042 EAL: No shared files mode enabled, IPC is disabled 00:03:23.042 EAL: Heap on socket 0 was shrunk by 4MB 00:03:23.042 EAL: Trying to obtain current memory policy. 
00:03:23.042 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:23.042 EAL: Restoring previous memory policy: 4 00:03:23.042 EAL: Calling mem event callback 'spdk:(nil)' 00:03:23.042 EAL: request: mp_malloc_sync 00:03:23.042 EAL: No shared files mode enabled, IPC is disabled 00:03:23.042 EAL: Heap on socket 0 was expanded by 6MB 00:03:23.042 EAL: Calling mem event callback 'spdk:(nil)' 00:03:23.042 EAL: request: mp_malloc_sync 00:03:23.043 EAL: No shared files mode enabled, IPC is disabled 00:03:23.043 EAL: Heap on socket 0 was shrunk by 6MB 00:03:23.043 EAL: Trying to obtain current memory policy. 00:03:23.043 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:23.043 EAL: Restoring previous memory policy: 4 00:03:23.043 EAL: Calling mem event callback 'spdk:(nil)' 00:03:23.043 EAL: request: mp_malloc_sync 00:03:23.043 EAL: No shared files mode enabled, IPC is disabled 00:03:23.043 EAL: Heap on socket 0 was expanded by 10MB 00:03:23.043 EAL: Calling mem event callback 'spdk:(nil)' 00:03:23.043 EAL: request: mp_malloc_sync 00:03:23.043 EAL: No shared files mode enabled, IPC is disabled 00:03:23.043 EAL: Heap on socket 0 was shrunk by 10MB 00:03:23.043 EAL: Trying to obtain current memory policy. 00:03:23.043 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:23.043 EAL: Restoring previous memory policy: 4 00:03:23.043 EAL: Calling mem event callback 'spdk:(nil)' 00:03:23.043 EAL: request: mp_malloc_sync 00:03:23.043 EAL: No shared files mode enabled, IPC is disabled 00:03:23.043 EAL: Heap on socket 0 was expanded by 18MB 00:03:23.043 EAL: Calling mem event callback 'spdk:(nil)' 00:03:23.043 EAL: request: mp_malloc_sync 00:03:23.043 EAL: No shared files mode enabled, IPC is disabled 00:03:23.043 EAL: Heap on socket 0 was shrunk by 18MB 00:03:23.043 EAL: Trying to obtain current memory policy. 
00:03:23.043 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:23.043 EAL: Restoring previous memory policy: 4 00:03:23.043 EAL: Calling mem event callback 'spdk:(nil)' 00:03:23.043 EAL: request: mp_malloc_sync 00:03:23.043 EAL: No shared files mode enabled, IPC is disabled 00:03:23.043 EAL: Heap on socket 0 was expanded by 34MB 00:03:23.043 EAL: Calling mem event callback 'spdk:(nil)' 00:03:23.043 EAL: request: mp_malloc_sync 00:03:23.043 EAL: No shared files mode enabled, IPC is disabled 00:03:23.043 EAL: Heap on socket 0 was shrunk by 34MB 00:03:23.043 EAL: Trying to obtain current memory policy. 00:03:23.043 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:23.043 EAL: Restoring previous memory policy: 4 00:03:23.043 EAL: Calling mem event callback 'spdk:(nil)' 00:03:23.043 EAL: request: mp_malloc_sync 00:03:23.043 EAL: No shared files mode enabled, IPC is disabled 00:03:23.043 EAL: Heap on socket 0 was expanded by 66MB 00:03:23.043 EAL: Calling mem event callback 'spdk:(nil)' 00:03:23.043 EAL: request: mp_malloc_sync 00:03:23.043 EAL: No shared files mode enabled, IPC is disabled 00:03:23.043 EAL: Heap on socket 0 was shrunk by 66MB 00:03:23.043 EAL: Trying to obtain current memory policy. 00:03:23.043 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:23.043 EAL: Restoring previous memory policy: 4 00:03:23.043 EAL: Calling mem event callback 'spdk:(nil)' 00:03:23.043 EAL: request: mp_malloc_sync 00:03:23.043 EAL: No shared files mode enabled, IPC is disabled 00:03:23.043 EAL: Heap on socket 0 was expanded by 130MB 00:03:23.043 EAL: Calling mem event callback 'spdk:(nil)' 00:03:23.043 EAL: request: mp_malloc_sync 00:03:23.043 EAL: No shared files mode enabled, IPC is disabled 00:03:23.043 EAL: Heap on socket 0 was shrunk by 130MB 00:03:23.043 EAL: Trying to obtain current memory policy. 
00:03:23.043 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:23.043 EAL: Restoring previous memory policy: 4 00:03:23.043 EAL: Calling mem event callback 'spdk:(nil)' 00:03:23.043 EAL: request: mp_malloc_sync 00:03:23.043 EAL: No shared files mode enabled, IPC is disabled 00:03:23.043 EAL: Heap on socket 0 was expanded by 258MB 00:03:23.302 EAL: Calling mem event callback 'spdk:(nil)' 00:03:23.302 EAL: request: mp_malloc_sync 00:03:23.302 EAL: No shared files mode enabled, IPC is disabled 00:03:23.302 EAL: Heap on socket 0 was shrunk by 258MB 00:03:23.302 EAL: Trying to obtain current memory policy. 00:03:23.302 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:23.302 EAL: Restoring previous memory policy: 4 00:03:23.302 EAL: Calling mem event callback 'spdk:(nil)' 00:03:23.302 EAL: request: mp_malloc_sync 00:03:23.302 EAL: No shared files mode enabled, IPC is disabled 00:03:23.302 EAL: Heap on socket 0 was expanded by 514MB 00:03:23.302 EAL: Calling mem event callback 'spdk:(nil)' 00:03:23.561 EAL: request: mp_malloc_sync 00:03:23.561 EAL: No shared files mode enabled, IPC is disabled 00:03:23.561 EAL: Heap on socket 0 was shrunk by 514MB 00:03:23.561 EAL: Trying to obtain current memory policy. 
00:03:23.561 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:23.561 EAL: Restoring previous memory policy: 4 00:03:23.561 EAL: Calling mem event callback 'spdk:(nil)' 00:03:23.561 EAL: request: mp_malloc_sync 00:03:23.561 EAL: No shared files mode enabled, IPC is disabled 00:03:23.561 EAL: Heap on socket 0 was expanded by 1026MB 00:03:23.820 EAL: Calling mem event callback 'spdk:(nil)' 00:03:24.079 EAL: request: mp_malloc_sync 00:03:24.079 EAL: No shared files mode enabled, IPC is disabled 00:03:24.079 EAL: Heap on socket 0 was shrunk by 1026MB 00:03:24.079 passed 00:03:24.079 00:03:24.079 Run Summary: Type Total Ran Passed Failed Inactive 00:03:24.079 suites 1 1 n/a 0 0 00:03:24.079 tests 2 2 2 0 0 00:03:24.080 asserts 497 497 497 0 n/a 00:03:24.080 00:03:24.080 Elapsed time = 0.978 seconds 00:03:24.080 EAL: Calling mem event callback 'spdk:(nil)' 00:03:24.080 EAL: request: mp_malloc_sync 00:03:24.080 EAL: No shared files mode enabled, IPC is disabled 00:03:24.080 EAL: Heap on socket 0 was shrunk by 2MB 00:03:24.080 EAL: No shared files mode enabled, IPC is disabled 00:03:24.080 EAL: No shared files mode enabled, IPC is disabled 00:03:24.080 EAL: No shared files mode enabled, IPC is disabled 00:03:24.080 00:03:24.080 real 0m1.110s 00:03:24.080 user 0m0.651s 00:03:24.080 sys 0m0.432s 00:03:24.080 06:59:28 env.env_vtophys -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:24.080 06:59:28 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:03:24.080 ************************************ 00:03:24.080 END TEST env_vtophys 00:03:24.080 ************************************ 00:03:24.080 06:59:28 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:03:24.080 06:59:28 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:24.080 06:59:28 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:24.080 06:59:28 env -- common/autotest_common.sh@10 -- # set +x 00:03:24.080 
************************************ 00:03:24.080 START TEST env_pci 00:03:24.080 ************************************ 00:03:24.080 06:59:28 env.env_pci -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:03:24.080 00:03:24.080 00:03:24.080 CUnit - A unit testing framework for C - Version 2.1-3 00:03:24.080 http://cunit.sourceforge.net/ 00:03:24.080 00:03:24.080 00:03:24.080 Suite: pci 00:03:24.080 Test: pci_hook ...[2024-11-20 06:59:28.516090] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 992383 has claimed it 00:03:24.080 EAL: Cannot find device (10000:00:01.0) 00:03:24.080 EAL: Failed to attach device on primary process 00:03:24.080 passed 00:03:24.080 00:03:24.080 Run Summary: Type Total Ran Passed Failed Inactive 00:03:24.080 suites 1 1 n/a 0 0 00:03:24.080 tests 1 1 1 0 0 00:03:24.080 asserts 25 25 25 0 n/a 00:03:24.080 00:03:24.080 Elapsed time = 0.026 seconds 00:03:24.080 00:03:24.080 real 0m0.046s 00:03:24.080 user 0m0.016s 00:03:24.080 sys 0m0.030s 00:03:24.080 06:59:28 env.env_pci -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:24.080 06:59:28 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:03:24.080 ************************************ 00:03:24.080 END TEST env_pci 00:03:24.080 ************************************ 00:03:24.080 06:59:28 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:03:24.080 06:59:28 env -- env/env.sh@15 -- # uname 00:03:24.080 06:59:28 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:03:24.080 06:59:28 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:03:24.080 06:59:28 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:24.080 06:59:28 env -- 
common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:03:24.080 06:59:28 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:24.080 06:59:28 env -- common/autotest_common.sh@10 -- # set +x 00:03:24.080 ************************************ 00:03:24.080 START TEST env_dpdk_post_init 00:03:24.080 ************************************ 00:03:24.080 06:59:28 env.env_dpdk_post_init -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:24.340 EAL: Detected CPU lcores: 96 00:03:24.340 EAL: Detected NUMA nodes: 2 00:03:24.340 EAL: Detected shared linkage of DPDK 00:03:24.340 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:24.340 EAL: Selected IOVA mode 'VA' 00:03:24.340 EAL: VFIO support initialized 00:03:24.340 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:24.340 EAL: Using IOMMU type 1 (Type 1) 00:03:24.340 EAL: Ignore mapping IO port bar(1) 00:03:24.340 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0) 00:03:24.340 EAL: Ignore mapping IO port bar(1) 00:03:24.340 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0) 00:03:24.340 EAL: Ignore mapping IO port bar(1) 00:03:24.340 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0) 00:03:24.340 EAL: Ignore mapping IO port bar(1) 00:03:24.340 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0) 00:03:24.340 EAL: Ignore mapping IO port bar(1) 00:03:24.340 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0) 00:03:24.340 EAL: Ignore mapping IO port bar(1) 00:03:24.340 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0) 00:03:24.340 EAL: Ignore mapping IO port bar(1) 00:03:24.340 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0) 00:03:24.340 EAL: Ignore mapping IO port bar(1) 00:03:24.340 EAL: 
Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0) 00:03:25.278 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:5e:00.0 (socket 0) 00:03:25.278 EAL: Ignore mapping IO port bar(1) 00:03:25.278 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1) 00:03:25.278 EAL: Ignore mapping IO port bar(1) 00:03:25.278 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1) 00:03:25.278 EAL: Ignore mapping IO port bar(1) 00:03:25.278 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1) 00:03:25.278 EAL: Ignore mapping IO port bar(1) 00:03:25.278 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1) 00:03:25.278 EAL: Ignore mapping IO port bar(1) 00:03:25.278 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1) 00:03:25.278 EAL: Ignore mapping IO port bar(1) 00:03:25.278 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1) 00:03:25.278 EAL: Ignore mapping IO port bar(1) 00:03:25.278 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1) 00:03:25.278 EAL: Ignore mapping IO port bar(1) 00:03:25.278 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1) 00:03:28.563 EAL: Releasing PCI mapped resource for 0000:5e:00.0 00:03:28.563 EAL: Calling pci_unmap_resource for 0000:5e:00.0 at 0x202001020000 00:03:28.563 Starting DPDK initialization... 00:03:28.563 Starting SPDK post initialization... 00:03:28.563 SPDK NVMe probe 00:03:28.563 Attaching to 0000:5e:00.0 00:03:28.563 Attached to 0000:5e:00.0 00:03:28.563 Cleaning up... 
00:03:28.563 00:03:28.563 real 0m4.359s 00:03:28.563 user 0m2.994s 00:03:28.563 sys 0m0.444s 00:03:28.563 06:59:32 env.env_dpdk_post_init -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:28.563 06:59:32 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:03:28.563 ************************************ 00:03:28.563 END TEST env_dpdk_post_init 00:03:28.563 ************************************ 00:03:28.563 06:59:33 env -- env/env.sh@26 -- # uname 00:03:28.563 06:59:33 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:03:28.563 06:59:33 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:03:28.563 06:59:33 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:28.563 06:59:33 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:28.563 06:59:33 env -- common/autotest_common.sh@10 -- # set +x 00:03:28.563 ************************************ 00:03:28.563 START TEST env_mem_callbacks 00:03:28.563 ************************************ 00:03:28.563 06:59:33 env.env_mem_callbacks -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:03:28.563 EAL: Detected CPU lcores: 96 00:03:28.563 EAL: Detected NUMA nodes: 2 00:03:28.563 EAL: Detected shared linkage of DPDK 00:03:28.563 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:28.563 EAL: Selected IOVA mode 'VA' 00:03:28.563 EAL: VFIO support initialized 00:03:28.563 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:28.563 00:03:28.563 00:03:28.563 CUnit - A unit testing framework for C - Version 2.1-3 00:03:28.563 http://cunit.sourceforge.net/ 00:03:28.563 00:03:28.563 00:03:28.563 Suite: memory 00:03:28.563 Test: test ... 
00:03:28.563 register 0x200000200000 2097152 00:03:28.563 malloc 3145728 00:03:28.563 register 0x200000400000 4194304 00:03:28.563 buf 0x200000500000 len 3145728 PASSED 00:03:28.563 malloc 64 00:03:28.563 buf 0x2000004fff40 len 64 PASSED 00:03:28.563 malloc 4194304 00:03:28.563 register 0x200000800000 6291456 00:03:28.563 buf 0x200000a00000 len 4194304 PASSED 00:03:28.563 free 0x200000500000 3145728 00:03:28.563 free 0x2000004fff40 64 00:03:28.563 unregister 0x200000400000 4194304 PASSED 00:03:28.563 free 0x200000a00000 4194304 00:03:28.563 unregister 0x200000800000 6291456 PASSED 00:03:28.563 malloc 8388608 00:03:28.563 register 0x200000400000 10485760 00:03:28.563 buf 0x200000600000 len 8388608 PASSED 00:03:28.563 free 0x200000600000 8388608 00:03:28.563 unregister 0x200000400000 10485760 PASSED 00:03:28.563 passed 00:03:28.563 00:03:28.563 Run Summary: Type Total Ran Passed Failed Inactive 00:03:28.563 suites 1 1 n/a 0 0 00:03:28.563 tests 1 1 1 0 0 00:03:28.563 asserts 15 15 15 0 n/a 00:03:28.563 00:03:28.563 Elapsed time = 0.008 seconds 00:03:28.563 00:03:28.563 real 0m0.055s 00:03:28.563 user 0m0.017s 00:03:28.563 sys 0m0.038s 00:03:28.563 06:59:33 env.env_mem_callbacks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:28.563 06:59:33 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:03:28.563 ************************************ 00:03:28.563 END TEST env_mem_callbacks 00:03:28.563 ************************************ 00:03:28.822 00:03:28.822 real 0m6.261s 00:03:28.822 user 0m4.074s 00:03:28.822 sys 0m1.274s 00:03:28.822 06:59:33 env -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:28.822 06:59:33 env -- common/autotest_common.sh@10 -- # set +x 00:03:28.822 ************************************ 00:03:28.822 END TEST env 00:03:28.822 ************************************ 00:03:28.822 06:59:33 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:03:28.822 06:59:33 
-- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:28.822 06:59:33 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:28.822 06:59:33 -- common/autotest_common.sh@10 -- # set +x 00:03:28.822 ************************************ 00:03:28.822 START TEST rpc 00:03:28.822 ************************************ 00:03:28.822 06:59:33 rpc -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:03:28.822 * Looking for test storage... 00:03:28.822 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:28.822 06:59:33 rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:03:28.822 06:59:33 rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:03:28.822 06:59:33 rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:03:28.822 06:59:33 rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:03:28.822 06:59:33 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:28.822 06:59:33 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:28.823 06:59:33 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:28.823 06:59:33 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:03:28.823 06:59:33 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:03:28.823 06:59:33 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:03:28.823 06:59:33 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:03:28.823 06:59:33 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:03:28.823 06:59:33 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:03:28.823 06:59:33 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:03:28.823 06:59:33 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:29.083 06:59:33 rpc -- scripts/common.sh@344 -- # case "$op" in 00:03:29.083 06:59:33 rpc -- scripts/common.sh@345 -- # : 1 00:03:29.083 06:59:33 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:29.083 06:59:33 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:29.083 06:59:33 rpc -- scripts/common.sh@365 -- # decimal 1 00:03:29.083 06:59:33 rpc -- scripts/common.sh@353 -- # local d=1 00:03:29.083 06:59:33 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:29.083 06:59:33 rpc -- scripts/common.sh@355 -- # echo 1 00:03:29.083 06:59:33 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:03:29.083 06:59:33 rpc -- scripts/common.sh@366 -- # decimal 2 00:03:29.083 06:59:33 rpc -- scripts/common.sh@353 -- # local d=2 00:03:29.083 06:59:33 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:29.083 06:59:33 rpc -- scripts/common.sh@355 -- # echo 2 00:03:29.083 06:59:33 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:03:29.083 06:59:33 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:29.083 06:59:33 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:29.083 06:59:33 rpc -- scripts/common.sh@368 -- # return 0 00:03:29.083 06:59:33 rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:29.083 06:59:33 rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:03:29.083 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:29.083 --rc genhtml_branch_coverage=1 00:03:29.083 --rc genhtml_function_coverage=1 00:03:29.083 --rc genhtml_legend=1 00:03:29.083 --rc geninfo_all_blocks=1 00:03:29.083 --rc geninfo_unexecuted_blocks=1 00:03:29.083 00:03:29.083 ' 00:03:29.083 06:59:33 rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:03:29.083 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:29.083 --rc genhtml_branch_coverage=1 00:03:29.083 --rc genhtml_function_coverage=1 00:03:29.083 --rc genhtml_legend=1 00:03:29.083 --rc geninfo_all_blocks=1 00:03:29.083 --rc geninfo_unexecuted_blocks=1 00:03:29.083 00:03:29.083 ' 00:03:29.083 06:59:33 rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:03:29.083 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:03:29.083 --rc genhtml_branch_coverage=1 00:03:29.083 --rc genhtml_function_coverage=1 00:03:29.083 --rc genhtml_legend=1 00:03:29.083 --rc geninfo_all_blocks=1 00:03:29.083 --rc geninfo_unexecuted_blocks=1 00:03:29.083 00:03:29.083 ' 00:03:29.083 06:59:33 rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:03:29.083 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:29.083 --rc genhtml_branch_coverage=1 00:03:29.083 --rc genhtml_function_coverage=1 00:03:29.083 --rc genhtml_legend=1 00:03:29.083 --rc geninfo_all_blocks=1 00:03:29.083 --rc geninfo_unexecuted_blocks=1 00:03:29.083 00:03:29.083 ' 00:03:29.083 06:59:33 rpc -- rpc/rpc.sh@65 -- # spdk_pid=993236 00:03:29.083 06:59:33 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:03:29.083 06:59:33 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:29.083 06:59:33 rpc -- rpc/rpc.sh@67 -- # waitforlisten 993236 00:03:29.083 06:59:33 rpc -- common/autotest_common.sh@833 -- # '[' -z 993236 ']' 00:03:29.083 06:59:33 rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:29.083 06:59:33 rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:03:29.083 06:59:33 rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:29.083 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:29.083 06:59:33 rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:03:29.083 06:59:33 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:29.083 [2024-11-20 06:59:33.441107] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 
00:03:29.083 [2024-11-20 06:59:33.441155] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid993236 ] 00:03:29.083 [2024-11-20 06:59:33.515165] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:29.083 [2024-11-20 06:59:33.554210] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:03:29.083 [2024-11-20 06:59:33.554249] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 993236' to capture a snapshot of events at runtime. 00:03:29.083 [2024-11-20 06:59:33.554256] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:03:29.083 [2024-11-20 06:59:33.554264] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:03:29.083 [2024-11-20 06:59:33.554269] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid993236 for offline analysis/debug. 
00:03:29.083 [2024-11-20 06:59:33.554829] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:30.021 06:59:34 rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:03:30.021 06:59:34 rpc -- common/autotest_common.sh@866 -- # return 0 00:03:30.021 06:59:34 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:30.021 06:59:34 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:30.021 06:59:34 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:03:30.021 06:59:34 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:03:30.021 06:59:34 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:30.021 06:59:34 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:30.021 06:59:34 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:30.021 ************************************ 00:03:30.021 START TEST rpc_integrity 00:03:30.021 ************************************ 00:03:30.021 06:59:34 rpc.rpc_integrity -- common/autotest_common.sh@1127 -- # rpc_integrity 00:03:30.021 06:59:34 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:30.021 06:59:34 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:30.021 06:59:34 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:30.021 06:59:34 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:30.021 06:59:34 rpc.rpc_integrity -- 
rpc/rpc.sh@12 -- # bdevs='[]' 00:03:30.021 06:59:34 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:03:30.021 06:59:34 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:30.021 06:59:34 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:30.021 06:59:34 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:30.021 06:59:34 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:30.021 06:59:34 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:30.021 06:59:34 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:03:30.021 06:59:34 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:30.021 06:59:34 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:30.021 06:59:34 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:30.021 06:59:34 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:30.021 06:59:34 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:30.021 { 00:03:30.021 "name": "Malloc0", 00:03:30.021 "aliases": [ 00:03:30.021 "51511735-e565-42d3-82dc-42ad3746aa56" 00:03:30.021 ], 00:03:30.021 "product_name": "Malloc disk", 00:03:30.021 "block_size": 512, 00:03:30.021 "num_blocks": 16384, 00:03:30.021 "uuid": "51511735-e565-42d3-82dc-42ad3746aa56", 00:03:30.021 "assigned_rate_limits": { 00:03:30.021 "rw_ios_per_sec": 0, 00:03:30.021 "rw_mbytes_per_sec": 0, 00:03:30.021 "r_mbytes_per_sec": 0, 00:03:30.021 "w_mbytes_per_sec": 0 00:03:30.021 }, 00:03:30.021 "claimed": false, 00:03:30.021 "zoned": false, 00:03:30.021 "supported_io_types": { 00:03:30.021 "read": true, 00:03:30.021 "write": true, 00:03:30.021 "unmap": true, 00:03:30.021 "flush": true, 00:03:30.021 "reset": true, 00:03:30.021 "nvme_admin": false, 00:03:30.021 "nvme_io": false, 00:03:30.021 "nvme_io_md": false, 00:03:30.021 "write_zeroes": true, 00:03:30.021 "zcopy": true, 00:03:30.021 "get_zone_info": false, 00:03:30.021 
"zone_management": false, 00:03:30.021 "zone_append": false, 00:03:30.021 "compare": false, 00:03:30.021 "compare_and_write": false, 00:03:30.021 "abort": true, 00:03:30.021 "seek_hole": false, 00:03:30.021 "seek_data": false, 00:03:30.021 "copy": true, 00:03:30.021 "nvme_iov_md": false 00:03:30.021 }, 00:03:30.021 "memory_domains": [ 00:03:30.021 { 00:03:30.021 "dma_device_id": "system", 00:03:30.021 "dma_device_type": 1 00:03:30.021 }, 00:03:30.021 { 00:03:30.021 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:30.021 "dma_device_type": 2 00:03:30.021 } 00:03:30.021 ], 00:03:30.021 "driver_specific": {} 00:03:30.021 } 00:03:30.021 ]' 00:03:30.021 06:59:34 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:03:30.021 06:59:34 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:30.021 06:59:34 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:03:30.021 06:59:34 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:30.021 06:59:34 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:30.022 [2024-11-20 06:59:34.430773] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:03:30.022 [2024-11-20 06:59:34.430803] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:30.022 [2024-11-20 06:59:34.430816] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xb52270 00:03:30.022 [2024-11-20 06:59:34.430822] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:30.022 [2024-11-20 06:59:34.431929] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:30.022 [2024-11-20 06:59:34.431957] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:30.022 Passthru0 00:03:30.022 06:59:34 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:30.022 06:59:34 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd 
bdev_get_bdevs 00:03:30.022 06:59:34 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:30.022 06:59:34 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:30.022 06:59:34 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:30.022 06:59:34 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:30.022 { 00:03:30.022 "name": "Malloc0", 00:03:30.022 "aliases": [ 00:03:30.022 "51511735-e565-42d3-82dc-42ad3746aa56" 00:03:30.022 ], 00:03:30.022 "product_name": "Malloc disk", 00:03:30.022 "block_size": 512, 00:03:30.022 "num_blocks": 16384, 00:03:30.022 "uuid": "51511735-e565-42d3-82dc-42ad3746aa56", 00:03:30.022 "assigned_rate_limits": { 00:03:30.022 "rw_ios_per_sec": 0, 00:03:30.022 "rw_mbytes_per_sec": 0, 00:03:30.022 "r_mbytes_per_sec": 0, 00:03:30.022 "w_mbytes_per_sec": 0 00:03:30.022 }, 00:03:30.022 "claimed": true, 00:03:30.022 "claim_type": "exclusive_write", 00:03:30.022 "zoned": false, 00:03:30.022 "supported_io_types": { 00:03:30.022 "read": true, 00:03:30.022 "write": true, 00:03:30.022 "unmap": true, 00:03:30.022 "flush": true, 00:03:30.022 "reset": true, 00:03:30.022 "nvme_admin": false, 00:03:30.022 "nvme_io": false, 00:03:30.022 "nvme_io_md": false, 00:03:30.022 "write_zeroes": true, 00:03:30.022 "zcopy": true, 00:03:30.022 "get_zone_info": false, 00:03:30.022 "zone_management": false, 00:03:30.022 "zone_append": false, 00:03:30.022 "compare": false, 00:03:30.022 "compare_and_write": false, 00:03:30.022 "abort": true, 00:03:30.022 "seek_hole": false, 00:03:30.022 "seek_data": false, 00:03:30.022 "copy": true, 00:03:30.022 "nvme_iov_md": false 00:03:30.022 }, 00:03:30.022 "memory_domains": [ 00:03:30.022 { 00:03:30.022 "dma_device_id": "system", 00:03:30.022 "dma_device_type": 1 00:03:30.022 }, 00:03:30.022 { 00:03:30.022 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:30.022 "dma_device_type": 2 00:03:30.022 } 00:03:30.022 ], 00:03:30.022 "driver_specific": {} 00:03:30.022 }, 00:03:30.022 { 
00:03:30.022 "name": "Passthru0", 00:03:30.022 "aliases": [ 00:03:30.022 "6b740eae-876d-5463-bb13-2afb0eef9876" 00:03:30.022 ], 00:03:30.022 "product_name": "passthru", 00:03:30.022 "block_size": 512, 00:03:30.022 "num_blocks": 16384, 00:03:30.022 "uuid": "6b740eae-876d-5463-bb13-2afb0eef9876", 00:03:30.022 "assigned_rate_limits": { 00:03:30.022 "rw_ios_per_sec": 0, 00:03:30.022 "rw_mbytes_per_sec": 0, 00:03:30.022 "r_mbytes_per_sec": 0, 00:03:30.022 "w_mbytes_per_sec": 0 00:03:30.022 }, 00:03:30.022 "claimed": false, 00:03:30.022 "zoned": false, 00:03:30.022 "supported_io_types": { 00:03:30.022 "read": true, 00:03:30.022 "write": true, 00:03:30.022 "unmap": true, 00:03:30.022 "flush": true, 00:03:30.022 "reset": true, 00:03:30.022 "nvme_admin": false, 00:03:30.022 "nvme_io": false, 00:03:30.022 "nvme_io_md": false, 00:03:30.022 "write_zeroes": true, 00:03:30.022 "zcopy": true, 00:03:30.022 "get_zone_info": false, 00:03:30.022 "zone_management": false, 00:03:30.022 "zone_append": false, 00:03:30.022 "compare": false, 00:03:30.022 "compare_and_write": false, 00:03:30.022 "abort": true, 00:03:30.022 "seek_hole": false, 00:03:30.022 "seek_data": false, 00:03:30.022 "copy": true, 00:03:30.022 "nvme_iov_md": false 00:03:30.022 }, 00:03:30.022 "memory_domains": [ 00:03:30.022 { 00:03:30.022 "dma_device_id": "system", 00:03:30.022 "dma_device_type": 1 00:03:30.022 }, 00:03:30.022 { 00:03:30.022 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:30.022 "dma_device_type": 2 00:03:30.022 } 00:03:30.022 ], 00:03:30.022 "driver_specific": { 00:03:30.022 "passthru": { 00:03:30.022 "name": "Passthru0", 00:03:30.022 "base_bdev_name": "Malloc0" 00:03:30.022 } 00:03:30.022 } 00:03:30.022 } 00:03:30.022 ]' 00:03:30.022 06:59:34 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:30.022 06:59:34 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:30.022 06:59:34 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:30.022 06:59:34 
rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:30.022 06:59:34 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:30.022 06:59:34 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:30.022 06:59:34 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:03:30.022 06:59:34 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:30.022 06:59:34 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:30.022 06:59:34 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:30.022 06:59:34 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:30.022 06:59:34 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:30.022 06:59:34 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:30.022 06:59:34 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:30.022 06:59:34 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:30.022 06:59:34 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:30.022 06:59:34 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:30.022 00:03:30.022 real 0m0.269s 00:03:30.022 user 0m0.161s 00:03:30.022 sys 0m0.043s 00:03:30.022 06:59:34 rpc.rpc_integrity -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:30.022 06:59:34 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:30.022 ************************************ 00:03:30.022 END TEST rpc_integrity 00:03:30.022 ************************************ 00:03:30.282 06:59:34 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:03:30.282 06:59:34 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:30.282 06:59:34 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:30.282 06:59:34 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:30.282 ************************************ 00:03:30.282 START TEST rpc_plugins 
00:03:30.282 ************************************ 00:03:30.282 06:59:34 rpc.rpc_plugins -- common/autotest_common.sh@1127 -- # rpc_plugins 00:03:30.282 06:59:34 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:03:30.282 06:59:34 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:30.282 06:59:34 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:30.282 06:59:34 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:30.282 06:59:34 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:03:30.282 06:59:34 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:03:30.282 06:59:34 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:30.282 06:59:34 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:30.282 06:59:34 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:30.282 06:59:34 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:03:30.282 { 00:03:30.282 "name": "Malloc1", 00:03:30.282 "aliases": [ 00:03:30.282 "a5aa5ed7-a865-4a47-85c4-fd466634d7c1" 00:03:30.282 ], 00:03:30.282 "product_name": "Malloc disk", 00:03:30.282 "block_size": 4096, 00:03:30.282 "num_blocks": 256, 00:03:30.282 "uuid": "a5aa5ed7-a865-4a47-85c4-fd466634d7c1", 00:03:30.282 "assigned_rate_limits": { 00:03:30.282 "rw_ios_per_sec": 0, 00:03:30.282 "rw_mbytes_per_sec": 0, 00:03:30.282 "r_mbytes_per_sec": 0, 00:03:30.282 "w_mbytes_per_sec": 0 00:03:30.282 }, 00:03:30.282 "claimed": false, 00:03:30.282 "zoned": false, 00:03:30.282 "supported_io_types": { 00:03:30.282 "read": true, 00:03:30.282 "write": true, 00:03:30.282 "unmap": true, 00:03:30.282 "flush": true, 00:03:30.282 "reset": true, 00:03:30.282 "nvme_admin": false, 00:03:30.282 "nvme_io": false, 00:03:30.282 "nvme_io_md": false, 00:03:30.282 "write_zeroes": true, 00:03:30.282 "zcopy": true, 00:03:30.282 "get_zone_info": false, 00:03:30.282 "zone_management": false, 00:03:30.282 
"zone_append": false, 00:03:30.282 "compare": false, 00:03:30.282 "compare_and_write": false, 00:03:30.282 "abort": true, 00:03:30.282 "seek_hole": false, 00:03:30.282 "seek_data": false, 00:03:30.282 "copy": true, 00:03:30.282 "nvme_iov_md": false 00:03:30.282 }, 00:03:30.282 "memory_domains": [ 00:03:30.282 { 00:03:30.282 "dma_device_id": "system", 00:03:30.282 "dma_device_type": 1 00:03:30.282 }, 00:03:30.282 { 00:03:30.282 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:30.282 "dma_device_type": 2 00:03:30.282 } 00:03:30.282 ], 00:03:30.282 "driver_specific": {} 00:03:30.282 } 00:03:30.282 ]' 00:03:30.282 06:59:34 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:03:30.282 06:59:34 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:03:30.282 06:59:34 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:03:30.282 06:59:34 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:30.282 06:59:34 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:30.282 06:59:34 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:30.282 06:59:34 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:03:30.282 06:59:34 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:30.282 06:59:34 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:30.282 06:59:34 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:30.282 06:59:34 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:03:30.282 06:59:34 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:03:30.282 06:59:34 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:03:30.282 00:03:30.282 real 0m0.140s 00:03:30.282 user 0m0.091s 00:03:30.282 sys 0m0.015s 00:03:30.282 06:59:34 rpc.rpc_plugins -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:30.282 06:59:34 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:30.282 ************************************ 
00:03:30.282 END TEST rpc_plugins 00:03:30.282 ************************************ 00:03:30.282 06:59:34 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:03:30.282 06:59:34 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:30.282 06:59:34 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:30.282 06:59:34 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:30.542 ************************************ 00:03:30.542 START TEST rpc_trace_cmd_test 00:03:30.542 ************************************ 00:03:30.542 06:59:34 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1127 -- # rpc_trace_cmd_test 00:03:30.542 06:59:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:03:30.542 06:59:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:03:30.542 06:59:34 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:30.542 06:59:34 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:30.542 06:59:34 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:30.542 06:59:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:03:30.542 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid993236", 00:03:30.542 "tpoint_group_mask": "0x8", 00:03:30.542 "iscsi_conn": { 00:03:30.542 "mask": "0x2", 00:03:30.542 "tpoint_mask": "0x0" 00:03:30.542 }, 00:03:30.542 "scsi": { 00:03:30.542 "mask": "0x4", 00:03:30.542 "tpoint_mask": "0x0" 00:03:30.542 }, 00:03:30.542 "bdev": { 00:03:30.542 "mask": "0x8", 00:03:30.542 "tpoint_mask": "0xffffffffffffffff" 00:03:30.542 }, 00:03:30.542 "nvmf_rdma": { 00:03:30.542 "mask": "0x10", 00:03:30.542 "tpoint_mask": "0x0" 00:03:30.542 }, 00:03:30.542 "nvmf_tcp": { 00:03:30.542 "mask": "0x20", 00:03:30.542 "tpoint_mask": "0x0" 00:03:30.542 }, 00:03:30.542 "ftl": { 00:03:30.542 "mask": "0x40", 00:03:30.542 "tpoint_mask": "0x0" 00:03:30.542 }, 00:03:30.542 "blobfs": { 00:03:30.542 "mask": "0x80", 00:03:30.542 
"tpoint_mask": "0x0" 00:03:30.542 }, 00:03:30.542 "dsa": { 00:03:30.542 "mask": "0x200", 00:03:30.542 "tpoint_mask": "0x0" 00:03:30.542 }, 00:03:30.542 "thread": { 00:03:30.542 "mask": "0x400", 00:03:30.542 "tpoint_mask": "0x0" 00:03:30.542 }, 00:03:30.542 "nvme_pcie": { 00:03:30.542 "mask": "0x800", 00:03:30.542 "tpoint_mask": "0x0" 00:03:30.542 }, 00:03:30.542 "iaa": { 00:03:30.542 "mask": "0x1000", 00:03:30.542 "tpoint_mask": "0x0" 00:03:30.542 }, 00:03:30.542 "nvme_tcp": { 00:03:30.542 "mask": "0x2000", 00:03:30.542 "tpoint_mask": "0x0" 00:03:30.542 }, 00:03:30.542 "bdev_nvme": { 00:03:30.542 "mask": "0x4000", 00:03:30.542 "tpoint_mask": "0x0" 00:03:30.542 }, 00:03:30.542 "sock": { 00:03:30.542 "mask": "0x8000", 00:03:30.542 "tpoint_mask": "0x0" 00:03:30.542 }, 00:03:30.542 "blob": { 00:03:30.542 "mask": "0x10000", 00:03:30.542 "tpoint_mask": "0x0" 00:03:30.542 }, 00:03:30.542 "bdev_raid": { 00:03:30.542 "mask": "0x20000", 00:03:30.542 "tpoint_mask": "0x0" 00:03:30.542 }, 00:03:30.542 "scheduler": { 00:03:30.542 "mask": "0x40000", 00:03:30.542 "tpoint_mask": "0x0" 00:03:30.542 } 00:03:30.542 }' 00:03:30.542 06:59:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:03:30.542 06:59:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:03:30.542 06:59:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:03:30.542 06:59:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:03:30.542 06:59:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:03:30.542 06:59:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:03:30.542 06:59:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:03:30.542 06:59:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:03:30.542 06:59:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:03:30.542 06:59:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 
0x0 ']' 00:03:30.542 00:03:30.542 real 0m0.214s 00:03:30.542 user 0m0.177s 00:03:30.542 sys 0m0.029s 00:03:30.542 06:59:35 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:30.542 06:59:35 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:30.542 ************************************ 00:03:30.542 END TEST rpc_trace_cmd_test 00:03:30.542 ************************************ 00:03:30.802 06:59:35 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:03:30.802 06:59:35 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:03:30.802 06:59:35 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:03:30.802 06:59:35 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:30.802 06:59:35 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:30.802 06:59:35 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:30.802 ************************************ 00:03:30.802 START TEST rpc_daemon_integrity 00:03:30.802 ************************************ 00:03:30.802 06:59:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1127 -- # rpc_integrity 00:03:30.802 06:59:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:30.802 06:59:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:30.802 06:59:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:30.802 06:59:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:30.802 06:59:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:30.802 06:59:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:03:30.802 06:59:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:30.802 06:59:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:30.802 06:59:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:30.802 06:59:35 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@10 -- # set +x 00:03:30.802 06:59:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:30.802 06:59:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:03:30.802 06:59:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:30.802 06:59:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:30.802 06:59:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:30.802 06:59:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:30.802 06:59:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:30.802 { 00:03:30.802 "name": "Malloc2", 00:03:30.802 "aliases": [ 00:03:30.802 "d08442dd-94e1-481b-a9a7-033259ba7a7f" 00:03:30.802 ], 00:03:30.802 "product_name": "Malloc disk", 00:03:30.802 "block_size": 512, 00:03:30.802 "num_blocks": 16384, 00:03:30.802 "uuid": "d08442dd-94e1-481b-a9a7-033259ba7a7f", 00:03:30.802 "assigned_rate_limits": { 00:03:30.802 "rw_ios_per_sec": 0, 00:03:30.802 "rw_mbytes_per_sec": 0, 00:03:30.802 "r_mbytes_per_sec": 0, 00:03:30.802 "w_mbytes_per_sec": 0 00:03:30.802 }, 00:03:30.802 "claimed": false, 00:03:30.802 "zoned": false, 00:03:30.802 "supported_io_types": { 00:03:30.802 "read": true, 00:03:30.802 "write": true, 00:03:30.802 "unmap": true, 00:03:30.802 "flush": true, 00:03:30.802 "reset": true, 00:03:30.802 "nvme_admin": false, 00:03:30.802 "nvme_io": false, 00:03:30.802 "nvme_io_md": false, 00:03:30.802 "write_zeroes": true, 00:03:30.802 "zcopy": true, 00:03:30.802 "get_zone_info": false, 00:03:30.802 "zone_management": false, 00:03:30.802 "zone_append": false, 00:03:30.802 "compare": false, 00:03:30.802 "compare_and_write": false, 00:03:30.802 "abort": true, 00:03:30.802 "seek_hole": false, 00:03:30.802 "seek_data": false, 00:03:30.802 "copy": true, 00:03:30.802 "nvme_iov_md": false 00:03:30.802 }, 00:03:30.803 "memory_domains": [ 00:03:30.803 { 
00:03:30.803 "dma_device_id": "system", 00:03:30.803 "dma_device_type": 1 00:03:30.803 }, 00:03:30.803 { 00:03:30.803 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:30.803 "dma_device_type": 2 00:03:30.803 } 00:03:30.803 ], 00:03:30.803 "driver_specific": {} 00:03:30.803 } 00:03:30.803 ]' 00:03:30.803 06:59:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:03:30.803 06:59:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:30.803 06:59:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:03:30.803 06:59:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:30.803 06:59:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:30.803 [2024-11-20 06:59:35.261049] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:03:30.803 [2024-11-20 06:59:35.261079] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:30.803 [2024-11-20 06:59:35.261092] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xc95e00 00:03:30.803 [2024-11-20 06:59:35.261099] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:30.803 [2024-11-20 06:59:35.262104] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:30.803 [2024-11-20 06:59:35.262124] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:30.803 Passthru0 00:03:30.803 06:59:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:30.803 06:59:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:30.803 06:59:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:30.803 06:59:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:30.803 06:59:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:03:30.803 06:59:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:30.803 { 00:03:30.803 "name": "Malloc2", 00:03:30.803 "aliases": [ 00:03:30.803 "d08442dd-94e1-481b-a9a7-033259ba7a7f" 00:03:30.803 ], 00:03:30.803 "product_name": "Malloc disk", 00:03:30.803 "block_size": 512, 00:03:30.803 "num_blocks": 16384, 00:03:30.803 "uuid": "d08442dd-94e1-481b-a9a7-033259ba7a7f", 00:03:30.803 "assigned_rate_limits": { 00:03:30.803 "rw_ios_per_sec": 0, 00:03:30.803 "rw_mbytes_per_sec": 0, 00:03:30.803 "r_mbytes_per_sec": 0, 00:03:30.803 "w_mbytes_per_sec": 0 00:03:30.803 }, 00:03:30.803 "claimed": true, 00:03:30.803 "claim_type": "exclusive_write", 00:03:30.803 "zoned": false, 00:03:30.803 "supported_io_types": { 00:03:30.803 "read": true, 00:03:30.803 "write": true, 00:03:30.803 "unmap": true, 00:03:30.803 "flush": true, 00:03:30.803 "reset": true, 00:03:30.803 "nvme_admin": false, 00:03:30.803 "nvme_io": false, 00:03:30.803 "nvme_io_md": false, 00:03:30.803 "write_zeroes": true, 00:03:30.803 "zcopy": true, 00:03:30.803 "get_zone_info": false, 00:03:30.803 "zone_management": false, 00:03:30.803 "zone_append": false, 00:03:30.803 "compare": false, 00:03:30.803 "compare_and_write": false, 00:03:30.803 "abort": true, 00:03:30.803 "seek_hole": false, 00:03:30.803 "seek_data": false, 00:03:30.803 "copy": true, 00:03:30.803 "nvme_iov_md": false 00:03:30.803 }, 00:03:30.803 "memory_domains": [ 00:03:30.803 { 00:03:30.803 "dma_device_id": "system", 00:03:30.803 "dma_device_type": 1 00:03:30.803 }, 00:03:30.803 { 00:03:30.803 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:30.803 "dma_device_type": 2 00:03:30.803 } 00:03:30.803 ], 00:03:30.803 "driver_specific": {} 00:03:30.803 }, 00:03:30.803 { 00:03:30.803 "name": "Passthru0", 00:03:30.803 "aliases": [ 00:03:30.803 "89fa2d76-93f1-58f1-aa63-adbe7288e28e" 00:03:30.803 ], 00:03:30.803 "product_name": "passthru", 00:03:30.803 "block_size": 512, 00:03:30.803 "num_blocks": 16384, 00:03:30.803 "uuid": 
"89fa2d76-93f1-58f1-aa63-adbe7288e28e", 00:03:30.803 "assigned_rate_limits": { 00:03:30.803 "rw_ios_per_sec": 0, 00:03:30.803 "rw_mbytes_per_sec": 0, 00:03:30.803 "r_mbytes_per_sec": 0, 00:03:30.803 "w_mbytes_per_sec": 0 00:03:30.803 }, 00:03:30.803 "claimed": false, 00:03:30.803 "zoned": false, 00:03:30.803 "supported_io_types": { 00:03:30.803 "read": true, 00:03:30.803 "write": true, 00:03:30.803 "unmap": true, 00:03:30.803 "flush": true, 00:03:30.803 "reset": true, 00:03:30.803 "nvme_admin": false, 00:03:30.803 "nvme_io": false, 00:03:30.803 "nvme_io_md": false, 00:03:30.803 "write_zeroes": true, 00:03:30.803 "zcopy": true, 00:03:30.803 "get_zone_info": false, 00:03:30.803 "zone_management": false, 00:03:30.803 "zone_append": false, 00:03:30.803 "compare": false, 00:03:30.803 "compare_and_write": false, 00:03:30.803 "abort": true, 00:03:30.803 "seek_hole": false, 00:03:30.803 "seek_data": false, 00:03:30.803 "copy": true, 00:03:30.803 "nvme_iov_md": false 00:03:30.803 }, 00:03:30.803 "memory_domains": [ 00:03:30.803 { 00:03:30.803 "dma_device_id": "system", 00:03:30.803 "dma_device_type": 1 00:03:30.803 }, 00:03:30.803 { 00:03:30.803 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:30.803 "dma_device_type": 2 00:03:30.803 } 00:03:30.803 ], 00:03:30.803 "driver_specific": { 00:03:30.803 "passthru": { 00:03:30.803 "name": "Passthru0", 00:03:30.803 "base_bdev_name": "Malloc2" 00:03:30.803 } 00:03:30.803 } 00:03:30.803 } 00:03:30.803 ]' 00:03:30.803 06:59:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:30.803 06:59:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:30.803 06:59:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:30.803 06:59:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:30.803 06:59:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:30.803 06:59:35 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:30.803 06:59:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:03:30.803 06:59:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:30.803 06:59:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:31.063 06:59:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:31.063 06:59:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:31.063 06:59:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:31.063 06:59:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:31.063 06:59:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:31.063 06:59:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:31.063 06:59:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:31.063 06:59:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:31.063 00:03:31.063 real 0m0.281s 00:03:31.063 user 0m0.181s 00:03:31.063 sys 0m0.039s 00:03:31.064 06:59:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:31.064 06:59:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:31.064 ************************************ 00:03:31.064 END TEST rpc_daemon_integrity 00:03:31.064 ************************************ 00:03:31.064 06:59:35 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:03:31.064 06:59:35 rpc -- rpc/rpc.sh@84 -- # killprocess 993236 00:03:31.064 06:59:35 rpc -- common/autotest_common.sh@952 -- # '[' -z 993236 ']' 00:03:31.064 06:59:35 rpc -- common/autotest_common.sh@956 -- # kill -0 993236 00:03:31.064 06:59:35 rpc -- common/autotest_common.sh@957 -- # uname 00:03:31.064 06:59:35 rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:03:31.064 06:59:35 rpc -- 
common/autotest_common.sh@958 -- # ps --no-headers -o comm= 993236 00:03:31.064 06:59:35 rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:03:31.064 06:59:35 rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:03:31.064 06:59:35 rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 993236' 00:03:31.064 killing process with pid 993236 00:03:31.064 06:59:35 rpc -- common/autotest_common.sh@971 -- # kill 993236 00:03:31.064 06:59:35 rpc -- common/autotest_common.sh@976 -- # wait 993236 00:03:31.323 00:03:31.323 real 0m2.580s 00:03:31.323 user 0m3.273s 00:03:31.323 sys 0m0.745s 00:03:31.323 06:59:35 rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:31.323 06:59:35 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:31.323 ************************************ 00:03:31.323 END TEST rpc 00:03:31.323 ************************************ 00:03:31.323 06:59:35 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:03:31.323 06:59:35 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:31.323 06:59:35 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:31.323 06:59:35 -- common/autotest_common.sh@10 -- # set +x 00:03:31.323 ************************************ 00:03:31.323 START TEST skip_rpc 00:03:31.323 ************************************ 00:03:31.323 06:59:35 skip_rpc -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:03:31.587 * Looking for test storage... 
00:03:31.587 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:31.587 06:59:35 skip_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:03:31.587 06:59:35 skip_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:03:31.587 06:59:35 skip_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:03:31.587 06:59:36 skip_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:03:31.587 06:59:36 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:31.587 06:59:36 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:31.587 06:59:36 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:31.587 06:59:36 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:03:31.587 06:59:36 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:03:31.587 06:59:36 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:03:31.587 06:59:36 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:03:31.587 06:59:36 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:03:31.587 06:59:36 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:03:31.587 06:59:36 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:03:31.587 06:59:36 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:31.587 06:59:36 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:03:31.587 06:59:36 skip_rpc -- scripts/common.sh@345 -- # : 1 00:03:31.587 06:59:36 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:31.587 06:59:36 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:31.587 06:59:36 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:03:31.587 06:59:36 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:03:31.587 06:59:36 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:31.587 06:59:36 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:03:31.587 06:59:36 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:03:31.587 06:59:36 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:03:31.587 06:59:36 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:03:31.587 06:59:36 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:31.587 06:59:36 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:03:31.587 06:59:36 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:03:31.587 06:59:36 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:31.587 06:59:36 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:31.587 06:59:36 skip_rpc -- scripts/common.sh@368 -- # return 0 00:03:31.587 06:59:36 skip_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:31.587 06:59:36 skip_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:03:31.587 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:31.587 --rc genhtml_branch_coverage=1 00:03:31.587 --rc genhtml_function_coverage=1 00:03:31.587 --rc genhtml_legend=1 00:03:31.587 --rc geninfo_all_blocks=1 00:03:31.587 --rc geninfo_unexecuted_blocks=1 00:03:31.587 00:03:31.587 ' 00:03:31.587 06:59:36 skip_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:03:31.587 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:31.587 --rc genhtml_branch_coverage=1 00:03:31.587 --rc genhtml_function_coverage=1 00:03:31.587 --rc genhtml_legend=1 00:03:31.587 --rc geninfo_all_blocks=1 00:03:31.587 --rc geninfo_unexecuted_blocks=1 00:03:31.587 00:03:31.587 ' 00:03:31.587 06:59:36 skip_rpc -- common/autotest_common.sh@1705 -- # export 
'LCOV=lcov 00:03:31.587 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:31.587 --rc genhtml_branch_coverage=1 00:03:31.587 --rc genhtml_function_coverage=1 00:03:31.587 --rc genhtml_legend=1 00:03:31.587 --rc geninfo_all_blocks=1 00:03:31.587 --rc geninfo_unexecuted_blocks=1 00:03:31.587 00:03:31.587 ' 00:03:31.587 06:59:36 skip_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:03:31.587 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:31.587 --rc genhtml_branch_coverage=1 00:03:31.587 --rc genhtml_function_coverage=1 00:03:31.587 --rc genhtml_legend=1 00:03:31.587 --rc geninfo_all_blocks=1 00:03:31.587 --rc geninfo_unexecuted_blocks=1 00:03:31.587 00:03:31.587 ' 00:03:31.587 06:59:36 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:31.587 06:59:36 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:03:31.587 06:59:36 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:03:31.587 06:59:36 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:31.587 06:59:36 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:31.587 06:59:36 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:31.587 ************************************ 00:03:31.587 START TEST skip_rpc 00:03:31.587 ************************************ 00:03:31.587 06:59:36 skip_rpc.skip_rpc -- common/autotest_common.sh@1127 -- # test_skip_rpc 00:03:31.587 06:59:36 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=993873 00:03:31.587 06:59:36 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:03:31.587 06:59:36 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:31.587 06:59:36 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 
00:03:31.587 [2024-11-20 06:59:36.116501] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 00:03:31.587 [2024-11-20 06:59:36.116538] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid993873 ] 00:03:31.847 [2024-11-20 06:59:36.189756] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:31.847 [2024-11-20 06:59:36.230240] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:37.117 06:59:41 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:03:37.117 06:59:41 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:03:37.117 06:59:41 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:03:37.117 06:59:41 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:03:37.117 06:59:41 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:03:37.117 06:59:41 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:03:37.117 06:59:41 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:03:37.117 06:59:41 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:03:37.117 06:59:41 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:37.117 06:59:41 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:37.117 06:59:41 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:03:37.117 06:59:41 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:03:37.117 06:59:41 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:03:37.117 06:59:41 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:03:37.117 06:59:41 
skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:03:37.117 06:59:41 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:03:37.117 06:59:41 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 993873 00:03:37.117 06:59:41 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # '[' -z 993873 ']' 00:03:37.117 06:59:41 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # kill -0 993873 00:03:37.117 06:59:41 skip_rpc.skip_rpc -- common/autotest_common.sh@957 -- # uname 00:03:37.117 06:59:41 skip_rpc.skip_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:03:37.117 06:59:41 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 993873 00:03:37.117 06:59:41 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:03:37.117 06:59:41 skip_rpc.skip_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:03:37.117 06:59:41 skip_rpc.skip_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 993873' 00:03:37.117 killing process with pid 993873 00:03:37.117 06:59:41 skip_rpc.skip_rpc -- common/autotest_common.sh@971 -- # kill 993873 00:03:37.117 06:59:41 skip_rpc.skip_rpc -- common/autotest_common.sh@976 -- # wait 993873 00:03:37.117 00:03:37.117 real 0m5.367s 00:03:37.117 user 0m5.144s 00:03:37.117 sys 0m0.261s 00:03:37.117 06:59:41 skip_rpc.skip_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:37.117 06:59:41 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:37.117 ************************************ 00:03:37.117 END TEST skip_rpc 00:03:37.117 ************************************ 00:03:37.117 06:59:41 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:03:37.117 06:59:41 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:37.117 06:59:41 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:37.117 06:59:41 skip_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:03:37.117 ************************************ 00:03:37.117 START TEST skip_rpc_with_json 00:03:37.117 ************************************ 00:03:37.117 06:59:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1127 -- # test_skip_rpc_with_json 00:03:37.117 06:59:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:03:37.117 06:59:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=994825 00:03:37.117 06:59:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:37.117 06:59:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:03:37.117 06:59:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 994825 00:03:37.117 06:59:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # '[' -z 994825 ']' 00:03:37.117 06:59:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:37.117 06:59:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # local max_retries=100 00:03:37.117 06:59:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:37.117 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:37.117 06:59:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # xtrace_disable 00:03:37.117 06:59:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:37.117 [2024-11-20 06:59:41.549452] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 
00:03:37.117 [2024-11-20 06:59:41.549492] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid994825 ] 00:03:37.117 [2024-11-20 06:59:41.624650] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:37.117 [2024-11-20 06:59:41.667248] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:37.376 06:59:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:03:37.376 06:59:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@866 -- # return 0 00:03:37.376 06:59:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:03:37.376 06:59:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:37.376 06:59:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:37.376 [2024-11-20 06:59:41.892204] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:03:37.376 request: 00:03:37.376 { 00:03:37.376 "trtype": "tcp", 00:03:37.376 "method": "nvmf_get_transports", 00:03:37.376 "req_id": 1 00:03:37.376 } 00:03:37.376 Got JSON-RPC error response 00:03:37.376 response: 00:03:37.376 { 00:03:37.376 "code": -19, 00:03:37.376 "message": "No such device" 00:03:37.376 } 00:03:37.376 06:59:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:03:37.376 06:59:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:03:37.376 06:59:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:37.376 06:59:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:37.376 [2024-11-20 06:59:41.904317] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:03:37.376 06:59:41 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:37.376 06:59:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:03:37.376 06:59:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:37.376 06:59:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:37.636 06:59:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:37.636 06:59:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:37.636 { 00:03:37.636 "subsystems": [ 00:03:37.636 { 00:03:37.636 "subsystem": "fsdev", 00:03:37.636 "config": [ 00:03:37.636 { 00:03:37.636 "method": "fsdev_set_opts", 00:03:37.636 "params": { 00:03:37.636 "fsdev_io_pool_size": 65535, 00:03:37.636 "fsdev_io_cache_size": 256 00:03:37.636 } 00:03:37.636 } 00:03:37.636 ] 00:03:37.636 }, 00:03:37.636 { 00:03:37.636 "subsystem": "vfio_user_target", 00:03:37.636 "config": null 00:03:37.636 }, 00:03:37.636 { 00:03:37.636 "subsystem": "keyring", 00:03:37.636 "config": [] 00:03:37.636 }, 00:03:37.636 { 00:03:37.636 "subsystem": "iobuf", 00:03:37.636 "config": [ 00:03:37.636 { 00:03:37.636 "method": "iobuf_set_options", 00:03:37.636 "params": { 00:03:37.636 "small_pool_count": 8192, 00:03:37.636 "large_pool_count": 1024, 00:03:37.636 "small_bufsize": 8192, 00:03:37.636 "large_bufsize": 135168, 00:03:37.636 "enable_numa": false 00:03:37.636 } 00:03:37.636 } 00:03:37.636 ] 00:03:37.636 }, 00:03:37.636 { 00:03:37.636 "subsystem": "sock", 00:03:37.636 "config": [ 00:03:37.636 { 00:03:37.636 "method": "sock_set_default_impl", 00:03:37.636 "params": { 00:03:37.636 "impl_name": "posix" 00:03:37.636 } 00:03:37.636 }, 00:03:37.636 { 00:03:37.636 "method": "sock_impl_set_options", 00:03:37.636 "params": { 00:03:37.636 "impl_name": "ssl", 00:03:37.636 "recv_buf_size": 4096, 00:03:37.636 "send_buf_size": 4096, 
00:03:37.636 "enable_recv_pipe": true, 00:03:37.636 "enable_quickack": false, 00:03:37.636 "enable_placement_id": 0, 00:03:37.636 "enable_zerocopy_send_server": true, 00:03:37.636 "enable_zerocopy_send_client": false, 00:03:37.636 "zerocopy_threshold": 0, 00:03:37.636 "tls_version": 0, 00:03:37.636 "enable_ktls": false 00:03:37.636 } 00:03:37.636 }, 00:03:37.636 { 00:03:37.636 "method": "sock_impl_set_options", 00:03:37.636 "params": { 00:03:37.636 "impl_name": "posix", 00:03:37.636 "recv_buf_size": 2097152, 00:03:37.636 "send_buf_size": 2097152, 00:03:37.636 "enable_recv_pipe": true, 00:03:37.636 "enable_quickack": false, 00:03:37.636 "enable_placement_id": 0, 00:03:37.636 "enable_zerocopy_send_server": true, 00:03:37.636 "enable_zerocopy_send_client": false, 00:03:37.636 "zerocopy_threshold": 0, 00:03:37.636 "tls_version": 0, 00:03:37.636 "enable_ktls": false 00:03:37.636 } 00:03:37.636 } 00:03:37.636 ] 00:03:37.636 }, 00:03:37.636 { 00:03:37.636 "subsystem": "vmd", 00:03:37.636 "config": [] 00:03:37.636 }, 00:03:37.636 { 00:03:37.636 "subsystem": "accel", 00:03:37.636 "config": [ 00:03:37.636 { 00:03:37.636 "method": "accel_set_options", 00:03:37.636 "params": { 00:03:37.636 "small_cache_size": 128, 00:03:37.636 "large_cache_size": 16, 00:03:37.636 "task_count": 2048, 00:03:37.636 "sequence_count": 2048, 00:03:37.636 "buf_count": 2048 00:03:37.636 } 00:03:37.636 } 00:03:37.636 ] 00:03:37.636 }, 00:03:37.636 { 00:03:37.636 "subsystem": "bdev", 00:03:37.636 "config": [ 00:03:37.636 { 00:03:37.636 "method": "bdev_set_options", 00:03:37.636 "params": { 00:03:37.636 "bdev_io_pool_size": 65535, 00:03:37.636 "bdev_io_cache_size": 256, 00:03:37.636 "bdev_auto_examine": true, 00:03:37.636 "iobuf_small_cache_size": 128, 00:03:37.636 "iobuf_large_cache_size": 16 00:03:37.636 } 00:03:37.636 }, 00:03:37.636 { 00:03:37.636 "method": "bdev_raid_set_options", 00:03:37.636 "params": { 00:03:37.636 "process_window_size_kb": 1024, 00:03:37.636 "process_max_bandwidth_mb_sec": 0 
00:03:37.636 } 00:03:37.636 }, 00:03:37.636 { 00:03:37.636 "method": "bdev_iscsi_set_options", 00:03:37.636 "params": { 00:03:37.636 "timeout_sec": 30 00:03:37.636 } 00:03:37.636 }, 00:03:37.636 { 00:03:37.636 "method": "bdev_nvme_set_options", 00:03:37.636 "params": { 00:03:37.636 "action_on_timeout": "none", 00:03:37.636 "timeout_us": 0, 00:03:37.636 "timeout_admin_us": 0, 00:03:37.636 "keep_alive_timeout_ms": 10000, 00:03:37.636 "arbitration_burst": 0, 00:03:37.636 "low_priority_weight": 0, 00:03:37.636 "medium_priority_weight": 0, 00:03:37.636 "high_priority_weight": 0, 00:03:37.636 "nvme_adminq_poll_period_us": 10000, 00:03:37.636 "nvme_ioq_poll_period_us": 0, 00:03:37.636 "io_queue_requests": 0, 00:03:37.636 "delay_cmd_submit": true, 00:03:37.636 "transport_retry_count": 4, 00:03:37.636 "bdev_retry_count": 3, 00:03:37.636 "transport_ack_timeout": 0, 00:03:37.636 "ctrlr_loss_timeout_sec": 0, 00:03:37.636 "reconnect_delay_sec": 0, 00:03:37.636 "fast_io_fail_timeout_sec": 0, 00:03:37.636 "disable_auto_failback": false, 00:03:37.636 "generate_uuids": false, 00:03:37.636 "transport_tos": 0, 00:03:37.636 "nvme_error_stat": false, 00:03:37.636 "rdma_srq_size": 0, 00:03:37.636 "io_path_stat": false, 00:03:37.636 "allow_accel_sequence": false, 00:03:37.636 "rdma_max_cq_size": 0, 00:03:37.636 "rdma_cm_event_timeout_ms": 0, 00:03:37.636 "dhchap_digests": [ 00:03:37.636 "sha256", 00:03:37.636 "sha384", 00:03:37.636 "sha512" 00:03:37.637 ], 00:03:37.637 "dhchap_dhgroups": [ 00:03:37.637 "null", 00:03:37.637 "ffdhe2048", 00:03:37.637 "ffdhe3072", 00:03:37.637 "ffdhe4096", 00:03:37.637 "ffdhe6144", 00:03:37.637 "ffdhe8192" 00:03:37.637 ] 00:03:37.637 } 00:03:37.637 }, 00:03:37.637 { 00:03:37.637 "method": "bdev_nvme_set_hotplug", 00:03:37.637 "params": { 00:03:37.637 "period_us": 100000, 00:03:37.637 "enable": false 00:03:37.637 } 00:03:37.637 }, 00:03:37.637 { 00:03:37.637 "method": "bdev_wait_for_examine" 00:03:37.637 } 00:03:37.637 ] 00:03:37.637 }, 00:03:37.637 { 
00:03:37.637 "subsystem": "scsi", 00:03:37.637 "config": null 00:03:37.637 }, 00:03:37.637 { 00:03:37.637 "subsystem": "scheduler", 00:03:37.637 "config": [ 00:03:37.637 { 00:03:37.637 "method": "framework_set_scheduler", 00:03:37.637 "params": { 00:03:37.637 "name": "static" 00:03:37.637 } 00:03:37.637 } 00:03:37.637 ] 00:03:37.637 }, 00:03:37.637 { 00:03:37.637 "subsystem": "vhost_scsi", 00:03:37.637 "config": [] 00:03:37.637 }, 00:03:37.637 { 00:03:37.637 "subsystem": "vhost_blk", 00:03:37.637 "config": [] 00:03:37.637 }, 00:03:37.637 { 00:03:37.637 "subsystem": "ublk", 00:03:37.637 "config": [] 00:03:37.637 }, 00:03:37.637 { 00:03:37.637 "subsystem": "nbd", 00:03:37.637 "config": [] 00:03:37.637 }, 00:03:37.637 { 00:03:37.637 "subsystem": "nvmf", 00:03:37.637 "config": [ 00:03:37.637 { 00:03:37.637 "method": "nvmf_set_config", 00:03:37.637 "params": { 00:03:37.637 "discovery_filter": "match_any", 00:03:37.637 "admin_cmd_passthru": { 00:03:37.637 "identify_ctrlr": false 00:03:37.637 }, 00:03:37.637 "dhchap_digests": [ 00:03:37.637 "sha256", 00:03:37.637 "sha384", 00:03:37.637 "sha512" 00:03:37.637 ], 00:03:37.637 "dhchap_dhgroups": [ 00:03:37.637 "null", 00:03:37.637 "ffdhe2048", 00:03:37.637 "ffdhe3072", 00:03:37.637 "ffdhe4096", 00:03:37.637 "ffdhe6144", 00:03:37.637 "ffdhe8192" 00:03:37.637 ] 00:03:37.637 } 00:03:37.637 }, 00:03:37.637 { 00:03:37.637 "method": "nvmf_set_max_subsystems", 00:03:37.637 "params": { 00:03:37.637 "max_subsystems": 1024 00:03:37.637 } 00:03:37.637 }, 00:03:37.637 { 00:03:37.637 "method": "nvmf_set_crdt", 00:03:37.637 "params": { 00:03:37.637 "crdt1": 0, 00:03:37.637 "crdt2": 0, 00:03:37.637 "crdt3": 0 00:03:37.637 } 00:03:37.637 }, 00:03:37.637 { 00:03:37.637 "method": "nvmf_create_transport", 00:03:37.637 "params": { 00:03:37.637 "trtype": "TCP", 00:03:37.637 "max_queue_depth": 128, 00:03:37.637 "max_io_qpairs_per_ctrlr": 127, 00:03:37.637 "in_capsule_data_size": 4096, 00:03:37.637 "max_io_size": 131072, 00:03:37.637 
"io_unit_size": 131072, 00:03:37.637 "max_aq_depth": 128, 00:03:37.637 "num_shared_buffers": 511, 00:03:37.637 "buf_cache_size": 4294967295, 00:03:37.637 "dif_insert_or_strip": false, 00:03:37.637 "zcopy": false, 00:03:37.637 "c2h_success": true, 00:03:37.637 "sock_priority": 0, 00:03:37.637 "abort_timeout_sec": 1, 00:03:37.637 "ack_timeout": 0, 00:03:37.637 "data_wr_pool_size": 0 00:03:37.637 } 00:03:37.637 } 00:03:37.637 ] 00:03:37.637 }, 00:03:37.637 { 00:03:37.637 "subsystem": "iscsi", 00:03:37.637 "config": [ 00:03:37.637 { 00:03:37.637 "method": "iscsi_set_options", 00:03:37.637 "params": { 00:03:37.637 "node_base": "iqn.2016-06.io.spdk", 00:03:37.637 "max_sessions": 128, 00:03:37.637 "max_connections_per_session": 2, 00:03:37.637 "max_queue_depth": 64, 00:03:37.637 "default_time2wait": 2, 00:03:37.637 "default_time2retain": 20, 00:03:37.637 "first_burst_length": 8192, 00:03:37.637 "immediate_data": true, 00:03:37.637 "allow_duplicated_isid": false, 00:03:37.637 "error_recovery_level": 0, 00:03:37.637 "nop_timeout": 60, 00:03:37.637 "nop_in_interval": 30, 00:03:37.637 "disable_chap": false, 00:03:37.637 "require_chap": false, 00:03:37.637 "mutual_chap": false, 00:03:37.637 "chap_group": 0, 00:03:37.637 "max_large_datain_per_connection": 64, 00:03:37.637 "max_r2t_per_connection": 4, 00:03:37.637 "pdu_pool_size": 36864, 00:03:37.637 "immediate_data_pool_size": 16384, 00:03:37.637 "data_out_pool_size": 2048 00:03:37.637 } 00:03:37.637 } 00:03:37.637 ] 00:03:37.637 } 00:03:37.637 ] 00:03:37.637 } 00:03:37.637 06:59:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:03:37.637 06:59:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 994825 00:03:37.637 06:59:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' -z 994825 ']' 00:03:37.637 06:59:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # kill -0 994825 00:03:37.637 06:59:42 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@957 -- # uname 00:03:37.637 06:59:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:03:37.637 06:59:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 994825 00:03:37.637 06:59:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:03:37.637 06:59:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:03:37.637 06:59:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # echo 'killing process with pid 994825' 00:03:37.637 killing process with pid 994825 00:03:37.637 06:59:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@971 -- # kill 994825 00:03:37.637 06:59:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@976 -- # wait 994825 00:03:37.896 06:59:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=995054 00:03:37.896 06:59:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:37.896 06:59:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:03:43.170 06:59:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 995054 00:03:43.170 06:59:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' -z 995054 ']' 00:03:43.170 06:59:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # kill -0 995054 00:03:43.170 06:59:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # uname 00:03:43.170 06:59:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:03:43.170 06:59:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 995054 00:03:43.170 06:59:47 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@958 -- # process_name=reactor_0 00:03:43.170 06:59:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:03:43.170 06:59:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # echo 'killing process with pid 995054' 00:03:43.170 killing process with pid 995054 00:03:43.170 06:59:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@971 -- # kill 995054 00:03:43.170 06:59:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@976 -- # wait 995054 00:03:43.429 06:59:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:03:43.429 06:59:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:03:43.429 00:03:43.429 real 0m6.297s 00:03:43.429 user 0m5.977s 00:03:43.429 sys 0m0.613s 00:03:43.429 06:59:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:43.429 06:59:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:43.429 ************************************ 00:03:43.429 END TEST skip_rpc_with_json 00:03:43.429 ************************************ 00:03:43.429 06:59:47 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:03:43.429 06:59:47 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:43.429 06:59:47 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:43.429 06:59:47 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:43.429 ************************************ 00:03:43.429 START TEST skip_rpc_with_delay 00:03:43.429 ************************************ 00:03:43.429 06:59:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1127 -- # test_skip_rpc_with_delay 00:03:43.429 06:59:47 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:03:43.429 06:59:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:03:43.429 06:59:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:03:43.429 06:59:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:43.429 06:59:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:03:43.429 06:59:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:43.429 06:59:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:03:43.429 06:59:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:43.429 06:59:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:03:43.429 06:59:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:43.429 06:59:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:03:43.429 06:59:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:03:43.429 [2024-11-20 06:59:47.917926] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:03:43.429 06:59:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:03:43.429 06:59:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:03:43.429 06:59:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:03:43.429 06:59:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:03:43.429 00:03:43.429 real 0m0.069s 00:03:43.429 user 0m0.040s 00:03:43.429 sys 0m0.028s 00:03:43.429 06:59:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:43.429 06:59:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:03:43.429 ************************************ 00:03:43.429 END TEST skip_rpc_with_delay 00:03:43.429 ************************************ 00:03:43.429 06:59:47 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:03:43.429 06:59:47 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:03:43.429 06:59:47 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:03:43.429 06:59:47 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:43.429 06:59:47 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:43.429 06:59:47 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:43.688 ************************************ 00:03:43.688 START TEST exit_on_failed_rpc_init 00:03:43.688 ************************************ 00:03:43.688 06:59:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1127 -- # test_exit_on_failed_rpc_init 00:03:43.688 06:59:48 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=996031 00:03:43.688 06:59:48 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 996031 00:03:43.688 06:59:48 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 
00:03:43.689 06:59:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # '[' -z 996031 ']' 00:03:43.689 06:59:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:43.689 06:59:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # local max_retries=100 00:03:43.689 06:59:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:43.689 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:43.689 06:59:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # xtrace_disable 00:03:43.689 06:59:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:03:43.689 [2024-11-20 06:59:48.053033] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 00:03:43.689 [2024-11-20 06:59:48.053076] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid996031 ] 00:03:43.689 [2024-11-20 06:59:48.110515] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:43.689 [2024-11-20 06:59:48.153462] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:43.947 06:59:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:03:43.947 06:59:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@866 -- # return 0 00:03:43.947 06:59:48 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:43.947 06:59:48 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:03:43.947 
06:59:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:03:43.947 06:59:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:03:43.947 06:59:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:43.948 06:59:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:03:43.948 06:59:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:43.948 06:59:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:03:43.948 06:59:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:43.948 06:59:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:03:43.948 06:59:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:43.948 06:59:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:03:43.948 06:59:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:03:43.948 [2024-11-20 06:59:48.426217] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 
00:03:43.948 [2024-11-20 06:59:48.426264] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid996040 ] 00:03:44.207 [2024-11-20 06:59:48.502003] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:44.207 [2024-11-20 06:59:48.543098] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:03:44.207 [2024-11-20 06:59:48.543169] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:03:44.207 [2024-11-20 06:59:48.543179] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:03:44.207 [2024-11-20 06:59:48.543185] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:03:44.207 06:59:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:03:44.207 06:59:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:03:44.207 06:59:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:03:44.207 06:59:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:03:44.207 06:59:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:03:44.207 06:59:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:03:44.207 06:59:48 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:03:44.207 06:59:48 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 996031 00:03:44.207 06:59:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # '[' -z 996031 ']' 00:03:44.207 06:59:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # kill -0 996031 00:03:44.207 06:59:48 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@957 -- # uname 00:03:44.207 06:59:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:03:44.207 06:59:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 996031 00:03:44.207 06:59:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:03:44.207 06:59:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:03:44.207 06:59:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@970 -- # echo 'killing process with pid 996031' 00:03:44.207 killing process with pid 996031 00:03:44.207 06:59:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@971 -- # kill 996031 00:03:44.207 06:59:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@976 -- # wait 996031 00:03:44.467 00:03:44.467 real 0m0.939s 00:03:44.467 user 0m1.014s 00:03:44.467 sys 0m0.370s 00:03:44.467 06:59:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:44.467 06:59:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:03:44.467 ************************************ 00:03:44.467 END TEST exit_on_failed_rpc_init 00:03:44.467 ************************************ 00:03:44.467 06:59:48 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:44.467 00:03:44.467 real 0m13.112s 00:03:44.467 user 0m12.383s 00:03:44.467 sys 0m1.534s 00:03:44.467 06:59:48 skip_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:44.467 06:59:48 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:44.467 ************************************ 00:03:44.467 END TEST skip_rpc 00:03:44.467 ************************************ 00:03:44.467 06:59:49 -- spdk/autotest.sh@158 -- # run_test rpc_client 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:03:44.467 06:59:49 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:44.467 06:59:49 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:44.467 06:59:49 -- common/autotest_common.sh@10 -- # set +x 00:03:44.727 ************************************ 00:03:44.727 START TEST rpc_client 00:03:44.727 ************************************ 00:03:44.727 06:59:49 rpc_client -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:03:44.727 * Looking for test storage... 00:03:44.727 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:03:44.727 06:59:49 rpc_client -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:03:44.727 06:59:49 rpc_client -- common/autotest_common.sh@1691 -- # lcov --version 00:03:44.727 06:59:49 rpc_client -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:03:44.727 06:59:49 rpc_client -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:03:44.727 06:59:49 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:44.727 06:59:49 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:44.727 06:59:49 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:44.727 06:59:49 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:03:44.727 06:59:49 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:03:44.727 06:59:49 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:03:44.727 06:59:49 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:03:44.727 06:59:49 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:03:44.727 06:59:49 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:03:44.727 06:59:49 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:03:44.727 06:59:49 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:44.727 06:59:49 rpc_client -- scripts/common.sh@344 -- # case 
"$op" in 00:03:44.727 06:59:49 rpc_client -- scripts/common.sh@345 -- # : 1 00:03:44.727 06:59:49 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:44.727 06:59:49 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:44.727 06:59:49 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:03:44.727 06:59:49 rpc_client -- scripts/common.sh@353 -- # local d=1 00:03:44.727 06:59:49 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:44.727 06:59:49 rpc_client -- scripts/common.sh@355 -- # echo 1 00:03:44.727 06:59:49 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:03:44.727 06:59:49 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:03:44.727 06:59:49 rpc_client -- scripts/common.sh@353 -- # local d=2 00:03:44.727 06:59:49 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:44.727 06:59:49 rpc_client -- scripts/common.sh@355 -- # echo 2 00:03:44.727 06:59:49 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:03:44.727 06:59:49 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:44.727 06:59:49 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:44.727 06:59:49 rpc_client -- scripts/common.sh@368 -- # return 0 00:03:44.727 06:59:49 rpc_client -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:44.727 06:59:49 rpc_client -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:03:44.727 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:44.727 --rc genhtml_branch_coverage=1 00:03:44.727 --rc genhtml_function_coverage=1 00:03:44.727 --rc genhtml_legend=1 00:03:44.727 --rc geninfo_all_blocks=1 00:03:44.727 --rc geninfo_unexecuted_blocks=1 00:03:44.727 00:03:44.727 ' 00:03:44.727 06:59:49 rpc_client -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:03:44.727 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:44.727 --rc genhtml_branch_coverage=1 
00:03:44.727 --rc genhtml_function_coverage=1 00:03:44.727 --rc genhtml_legend=1 00:03:44.727 --rc geninfo_all_blocks=1 00:03:44.727 --rc geninfo_unexecuted_blocks=1 00:03:44.727 00:03:44.727 ' 00:03:44.727 06:59:49 rpc_client -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:03:44.727 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:44.727 --rc genhtml_branch_coverage=1 00:03:44.727 --rc genhtml_function_coverage=1 00:03:44.727 --rc genhtml_legend=1 00:03:44.727 --rc geninfo_all_blocks=1 00:03:44.727 --rc geninfo_unexecuted_blocks=1 00:03:44.727 00:03:44.727 ' 00:03:44.727 06:59:49 rpc_client -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:03:44.727 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:44.727 --rc genhtml_branch_coverage=1 00:03:44.727 --rc genhtml_function_coverage=1 00:03:44.727 --rc genhtml_legend=1 00:03:44.727 --rc geninfo_all_blocks=1 00:03:44.727 --rc geninfo_unexecuted_blocks=1 00:03:44.727 00:03:44.727 ' 00:03:44.727 06:59:49 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:03:44.727 OK 00:03:44.727 06:59:49 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:03:44.727 00:03:44.727 real 0m0.200s 00:03:44.727 user 0m0.124s 00:03:44.727 sys 0m0.090s 00:03:44.727 06:59:49 rpc_client -- common/autotest_common.sh@1128 -- # xtrace_disable 00:03:44.727 06:59:49 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:03:44.727 ************************************ 00:03:44.727 END TEST rpc_client 00:03:44.727 ************************************ 00:03:44.987 06:59:49 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:03:44.987 06:59:49 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:03:44.987 06:59:49 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:03:44.987 06:59:49 -- common/autotest_common.sh@10 
-- # set +x 00:03:44.987 ************************************ 00:03:44.987 START TEST json_config 00:03:44.987 ************************************ 00:03:44.987 06:59:49 json_config -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:03:44.987 06:59:49 json_config -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:03:44.987 06:59:49 json_config -- common/autotest_common.sh@1691 -- # lcov --version 00:03:44.987 06:59:49 json_config -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:03:44.987 06:59:49 json_config -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:03:44.987 06:59:49 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:44.987 06:59:49 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:44.987 06:59:49 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:44.987 06:59:49 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:03:44.987 06:59:49 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:03:44.987 06:59:49 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:03:44.987 06:59:49 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:03:44.987 06:59:49 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:03:44.987 06:59:49 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:03:44.987 06:59:49 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:03:44.987 06:59:49 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:44.987 06:59:49 json_config -- scripts/common.sh@344 -- # case "$op" in 00:03:44.987 06:59:49 json_config -- scripts/common.sh@345 -- # : 1 00:03:44.987 06:59:49 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:44.987 06:59:49 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:44.987 06:59:49 json_config -- scripts/common.sh@365 -- # decimal 1 00:03:44.987 06:59:49 json_config -- scripts/common.sh@353 -- # local d=1 00:03:44.987 06:59:49 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:44.987 06:59:49 json_config -- scripts/common.sh@355 -- # echo 1 00:03:44.987 06:59:49 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:03:44.987 06:59:49 json_config -- scripts/common.sh@366 -- # decimal 2 00:03:44.987 06:59:49 json_config -- scripts/common.sh@353 -- # local d=2 00:03:44.987 06:59:49 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:44.987 06:59:49 json_config -- scripts/common.sh@355 -- # echo 2 00:03:44.987 06:59:49 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:03:44.987 06:59:49 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:44.987 06:59:49 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:44.987 06:59:49 json_config -- scripts/common.sh@368 -- # return 0 00:03:44.987 06:59:49 json_config -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:44.987 06:59:49 json_config -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:03:44.987 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:44.987 --rc genhtml_branch_coverage=1 00:03:44.987 --rc genhtml_function_coverage=1 00:03:44.987 --rc genhtml_legend=1 00:03:44.987 --rc geninfo_all_blocks=1 00:03:44.987 --rc geninfo_unexecuted_blocks=1 00:03:44.987 00:03:44.987 ' 00:03:44.987 06:59:49 json_config -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:03:44.987 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:44.987 --rc genhtml_branch_coverage=1 00:03:44.987 --rc genhtml_function_coverage=1 00:03:44.987 --rc genhtml_legend=1 00:03:44.987 --rc geninfo_all_blocks=1 00:03:44.987 --rc geninfo_unexecuted_blocks=1 00:03:44.987 00:03:44.987 ' 00:03:44.987 06:59:49 json_config -- 
common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:03:44.987 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:44.987 --rc genhtml_branch_coverage=1 00:03:44.987 --rc genhtml_function_coverage=1 00:03:44.987 --rc genhtml_legend=1 00:03:44.987 --rc geninfo_all_blocks=1 00:03:44.987 --rc geninfo_unexecuted_blocks=1 00:03:44.987 00:03:44.987 ' 00:03:44.987 06:59:49 json_config -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:03:44.987 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:44.987 --rc genhtml_branch_coverage=1 00:03:44.987 --rc genhtml_function_coverage=1 00:03:44.987 --rc genhtml_legend=1 00:03:44.987 --rc geninfo_all_blocks=1 00:03:44.987 --rc geninfo_unexecuted_blocks=1 00:03:44.987 00:03:44.987 ' 00:03:44.987 06:59:49 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:44.987 06:59:49 json_config -- nvmf/common.sh@7 -- # uname -s 00:03:44.987 06:59:49 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:44.987 06:59:49 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:44.987 06:59:49 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:44.987 06:59:49 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:44.987 06:59:49 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:44.987 06:59:49 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:44.987 06:59:49 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:44.987 06:59:49 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:44.987 06:59:49 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:44.987 06:59:49 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:44.987 06:59:49 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:03:44.987 06:59:49 json_config -- nvmf/common.sh@18 -- 
# NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:03:44.987 06:59:49 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:44.987 06:59:49 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:44.987 06:59:49 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:44.987 06:59:49 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:44.987 06:59:49 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:44.987 06:59:49 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:03:44.987 06:59:49 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:44.987 06:59:49 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:44.987 06:59:49 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:44.987 06:59:49 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:44.987 06:59:49 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:44.987 06:59:49 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:44.987 06:59:49 json_config -- paths/export.sh@5 -- # export PATH 00:03:44.987 06:59:49 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:44.987 06:59:49 json_config -- nvmf/common.sh@51 -- # : 0 00:03:44.987 06:59:49 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:44.987 06:59:49 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:03:44.987 06:59:49 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:44.987 06:59:49 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:44.987 06:59:49 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:44.987 06:59:49 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:44.987 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:44.987 06:59:49 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:44.987 06:59:49 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:44.987 06:59:49 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:44.987 06:59:49 json_config -- json_config/json_config.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:03:44.987 06:59:49 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:03:44.987 06:59:49 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:03:44.987 06:59:49 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:03:44.987 06:59:49 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:03:44.988 06:59:49 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:03:44.988 06:59:49 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:03:44.988 06:59:49 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:03:44.988 06:59:49 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:03:44.988 06:59:49 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:03:44.988 06:59:49 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:03:44.988 06:59:49 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:03:44.988 06:59:49 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:03:44.988 06:59:49 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:03:44.988 06:59:49 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:03:44.988 06:59:49 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:03:44.988 INFO: JSON configuration test init 00:03:44.988 06:59:49 
json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:03:44.988 06:59:49 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:03:44.988 06:59:49 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:44.988 06:59:49 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:44.988 06:59:49 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:03:44.988 06:59:49 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:44.988 06:59:49 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:44.988 06:59:49 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:03:44.988 06:59:49 json_config -- json_config/common.sh@9 -- # local app=target 00:03:44.988 06:59:49 json_config -- json_config/common.sh@10 -- # shift 00:03:44.988 06:59:49 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:03:44.988 06:59:49 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:03:44.988 06:59:49 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:03:44.988 06:59:49 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:44.988 06:59:49 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:44.988 06:59:49 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=996392 00:03:44.988 06:59:49 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:03:44.988 Waiting for target to run... 
00:03:44.988 06:59:49 json_config -- json_config/common.sh@25 -- # waitforlisten 996392 /var/tmp/spdk_tgt.sock 00:03:44.988 06:59:49 json_config -- common/autotest_common.sh@833 -- # '[' -z 996392 ']' 00:03:44.988 06:59:49 json_config -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:03:44.988 06:59:49 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:03:44.988 06:59:49 json_config -- common/autotest_common.sh@838 -- # local max_retries=100 00:03:44.988 06:59:49 json_config -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:03:44.988 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:03:44.988 06:59:49 json_config -- common/autotest_common.sh@842 -- # xtrace_disable 00:03:44.988 06:59:49 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:45.247 [2024-11-20 06:59:49.559585] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 
00:03:45.247 [2024-11-20 06:59:49.559631] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid996392 ] 00:03:45.505 [2024-11-20 06:59:49.848447] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:45.505 [2024-11-20 06:59:49.883120] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:46.131 06:59:50 json_config -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:03:46.131 06:59:50 json_config -- common/autotest_common.sh@866 -- # return 0 00:03:46.131 06:59:50 json_config -- json_config/common.sh@26 -- # echo '' 00:03:46.131 00:03:46.131 06:59:50 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:03:46.131 06:59:50 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:03:46.131 06:59:50 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:46.131 06:59:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:46.131 06:59:50 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:03:46.131 06:59:50 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:03:46.131 06:59:50 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:03:46.131 06:59:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:46.131 06:59:50 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:03:46.131 06:59:50 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:03:46.131 06:59:50 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:03:49.421 06:59:53 json_config -- json_config/json_config.sh@283 -- # 
tgt_check_notification_types 00:03:49.421 06:59:53 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:03:49.421 06:59:53 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:49.421 06:59:53 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:49.421 06:59:53 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:03:49.421 06:59:53 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:03:49.421 06:59:53 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:03:49.421 06:59:53 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:03:49.421 06:59:53 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:03:49.422 06:59:53 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:03:49.422 06:59:53 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:03:49.422 06:59:53 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:03:49.422 06:59:53 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:03:49.422 06:59:53 json_config -- json_config/json_config.sh@51 -- # local get_types 00:03:49.422 06:59:53 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:03:49.422 06:59:53 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:03:49.422 06:59:53 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:03:49.422 06:59:53 json_config -- json_config/json_config.sh@54 -- # sort 00:03:49.422 06:59:53 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:03:49.422 06:59:53 json_config -- 
json_config/json_config.sh@54 -- # type_diff= 00:03:49.422 06:59:53 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:03:49.422 06:59:53 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:03:49.422 06:59:53 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:03:49.422 06:59:53 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:49.422 06:59:53 json_config -- json_config/json_config.sh@62 -- # return 0 00:03:49.422 06:59:53 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:03:49.422 06:59:53 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:03:49.422 06:59:53 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:03:49.422 06:59:53 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:03:49.422 06:59:53 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:03:49.422 06:59:53 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:03:49.422 06:59:53 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:49.422 06:59:53 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:49.422 06:59:53 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:03:49.422 06:59:53 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:03:49.422 06:59:53 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:03:49.422 06:59:53 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:03:49.422 06:59:53 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:03:49.422 MallocForNvmf0 00:03:49.681 06:59:53 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 
00:03:49.681 06:59:53 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:03:49.681 MallocForNvmf1 00:03:49.681 06:59:54 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:03:49.681 06:59:54 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:03:49.940 [2024-11-20 06:59:54.351098] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:03:49.940 06:59:54 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:03:49.940 06:59:54 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:03:50.199 06:59:54 json_config -- json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:03:50.199 06:59:54 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:03:50.458 06:59:54 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:03:50.458 06:59:54 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:03:50.458 06:59:54 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:03:50.458 06:59:54 json_config -- 
json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:03:50.717 [2024-11-20 06:59:55.149581] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:03:50.717 06:59:55 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:03:50.717 06:59:55 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:03:50.717 06:59:55 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:50.717 06:59:55 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:03:50.717 06:59:55 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:03:50.717 06:59:55 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:50.717 06:59:55 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:03:50.717 06:59:55 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:03:50.717 06:59:55 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:03:50.976 MallocBdevForConfigChangeCheck 00:03:50.976 06:59:55 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:03:50.976 06:59:55 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:03:50.976 06:59:55 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:50.976 06:59:55 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:03:50.976 06:59:55 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:03:51.544 06:59:55 json_config -- json_config/json_config.sh@368 -- # 
echo 'INFO: shutting down applications...' 00:03:51.544 INFO: shutting down applications... 00:03:51.544 06:59:55 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:03:51.544 06:59:55 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:03:51.544 06:59:55 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:03:51.544 06:59:55 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:03:52.923 Calling clear_iscsi_subsystem 00:03:52.923 Calling clear_nvmf_subsystem 00:03:52.923 Calling clear_nbd_subsystem 00:03:52.923 Calling clear_ublk_subsystem 00:03:52.923 Calling clear_vhost_blk_subsystem 00:03:52.923 Calling clear_vhost_scsi_subsystem 00:03:52.923 Calling clear_bdev_subsystem 00:03:52.923 06:59:57 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:03:52.923 06:59:57 json_config -- json_config/json_config.sh@350 -- # count=100 00:03:52.923 06:59:57 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:03:52.923 06:59:57 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:03:52.923 06:59:57 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:03:52.923 06:59:57 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:03:53.491 06:59:57 json_config -- json_config/json_config.sh@352 -- # break 00:03:53.491 06:59:57 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:03:53.491 06:59:57 json_config -- json_config/json_config.sh@376 -- # 
json_config_test_shutdown_app target 00:03:53.491 06:59:57 json_config -- json_config/common.sh@31 -- # local app=target 00:03:53.491 06:59:57 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:03:53.491 06:59:57 json_config -- json_config/common.sh@35 -- # [[ -n 996392 ]] 00:03:53.491 06:59:57 json_config -- json_config/common.sh@38 -- # kill -SIGINT 996392 00:03:53.491 06:59:57 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:03:53.491 06:59:57 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:03:53.491 06:59:57 json_config -- json_config/common.sh@41 -- # kill -0 996392 00:03:53.491 06:59:57 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:03:53.777 06:59:58 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:03:53.777 06:59:58 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:03:53.777 06:59:58 json_config -- json_config/common.sh@41 -- # kill -0 996392 00:03:53.777 06:59:58 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:03:53.777 06:59:58 json_config -- json_config/common.sh@43 -- # break 00:03:53.777 06:59:58 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:03:53.777 06:59:58 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:03:53.777 SPDK target shutdown done 00:03:53.777 06:59:58 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:03:53.777 INFO: relaunching applications... 
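The shutdown sequence traced above (kill -SIGINT, then up to 30 iterations of `kill -0` with a 0.5 s sleep) can be sketched as a standalone loop. This is a hypothetical stand-in, not the SPDK script itself: a plain `sleep` plays the role of spdk_tgt, and TERM is used instead of the trace's SIGINT because background children of a non-interactive shell ignore SIGINT.

```shell
# Minimal sketch of the shutdown pattern from json_config/common.sh:
# signal the app, then poll with `kill -0` until the PID is gone.
sleep 30 &
app_pid=$!

kill -TERM "$app_pid"
wait "$app_pid" 2>/dev/null || true   # reap, so kill -0 sees it as gone

shutdown_ok=1
i=0
while [ "$i" -lt 30 ]; do             # retry cap, as in the trace (i < 30)
    if ! kill -0 "$app_pid" 2>/dev/null; then
        shutdown_ok=0                 # process has exited -> shutdown done
        break
    fi
    sleep 1
    i=$((i + 1))
done
echo "shutdown_ok=$shutdown_ok"
```

The retry cap matters: `kill -0` only probes for existence, so a hung target would otherwise spin forever.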
00:03:53.777 06:59:58 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:03:53.777 06:59:58 json_config -- json_config/common.sh@9 -- # local app=target 00:03:53.777 06:59:58 json_config -- json_config/common.sh@10 -- # shift 00:03:53.777 06:59:58 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:03:53.777 06:59:58 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:03:53.777 06:59:58 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:03:53.777 06:59:58 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:53.777 06:59:58 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:53.777 06:59:58 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=997914 00:03:53.777 06:59:58 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:03:53.777 Waiting for target to run... 00:03:53.777 06:59:58 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:03:53.777 06:59:58 json_config -- json_config/common.sh@25 -- # waitforlisten 997914 /var/tmp/spdk_tgt.sock 00:03:53.777 06:59:58 json_config -- common/autotest_common.sh@833 -- # '[' -z 997914 ']' 00:03:53.777 06:59:58 json_config -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:03:53.777 06:59:58 json_config -- common/autotest_common.sh@838 -- # local max_retries=100 00:03:53.777 06:59:58 json_config -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:03:53.777 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:03:53.777 06:59:58 json_config -- common/autotest_common.sh@842 -- # xtrace_disable 00:03:53.777 06:59:58 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:54.036 [2024-11-20 06:59:58.377212] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 00:03:54.036 [2024-11-20 06:59:58.377272] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid997914 ] 00:03:54.295 [2024-11-20 06:59:58.842974] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:54.554 [2024-11-20 06:59:58.901739] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:57.906 [2024-11-20 07:00:01.934974] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:03:57.906 [2024-11-20 07:00:01.967352] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:03:58.163 07:00:02 json_config -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:03:58.163 07:00:02 json_config -- common/autotest_common.sh@866 -- # return 0 00:03:58.163 07:00:02 json_config -- json_config/common.sh@26 -- # echo '' 00:03:58.164 00:03:58.164 07:00:02 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:03:58.164 07:00:02 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:03:58.164 INFO: Checking if target configuration is the same... 
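The `waitforlisten` step traced above blocks until the relaunched target is listening on `/var/tmp/spdk_tgt.sock` before any RPC is issued. A minimal sketch of that polling idea, under stated assumptions: the background subshell and `$sock` path here are hypothetical stand-ins for spdk_tgt and its UNIX-domain socket.

```shell
# Sketch of "wait for the target's socket to appear" with a retry cap,
# instead of racing against application startup.
sock=$(mktemp -u /tmp/sketch_tgt.XXXXXX)

( sleep 1; : > "$sock" ) &       # stand-in "target" creating its socket path
helper_pid=$!

listening=1
retries=0
while [ "$retries" -lt 100 ]; do
    if [ -e "$sock" ]; then
        listening=0              # path exists -> target is up
        break
    fi
    sleep 0.2
    retries=$((retries + 1))
done

wait "$helper_pid"
rm -f "$sock"
echo "listening=$listening"
```

Polling for the socket path (rather than sleeping a fixed time) is what lets the test proceed as soon as startup finishes while still failing fast on a target that never comes up.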
00:03:58.164 07:00:02 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:03:58.164 07:00:02 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:03:58.164 07:00:02 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:03:58.164 + '[' 2 -ne 2 ']' 00:03:58.164 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:03:58.164 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:03:58.164 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:58.164 +++ basename /dev/fd/62 00:03:58.164 ++ mktemp /tmp/62.XXX 00:03:58.164 + tmp_file_1=/tmp/62.Ned 00:03:58.164 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:03:58.164 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:03:58.164 + tmp_file_2=/tmp/spdk_tgt_config.json.r2w 00:03:58.164 + ret=0 00:03:58.164 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:03:58.731 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:03:58.731 + diff -u /tmp/62.Ned /tmp/spdk_tgt_config.json.r2w 00:03:58.731 + echo 'INFO: JSON config files are the same' 00:03:58.731 INFO: JSON config files are the same 00:03:58.731 + rm /tmp/62.Ned /tmp/spdk_tgt_config.json.r2w 00:03:58.731 + exit 0 00:03:58.731 07:00:03 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:03:58.731 07:00:03 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:03:58.731 INFO: changing configuration and checking if this can be detected... 
00:03:58.731 07:00:03 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:03:58.731 07:00:03 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:03:58.731 07:00:03 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:03:58.731 07:00:03 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:03:58.731 07:00:03 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:03:58.731 + '[' 2 -ne 2 ']' 00:03:58.731 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:03:58.731 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:03:58.731 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:58.731 +++ basename /dev/fd/62 00:03:58.731 ++ mktemp /tmp/62.XXX 00:03:58.731 + tmp_file_1=/tmp/62.X1b 00:03:58.731 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:03:58.731 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:03:58.731 + tmp_file_2=/tmp/spdk_tgt_config.json.u5W 00:03:58.731 + ret=0 00:03:58.731 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:03:59.300 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:03:59.300 + diff -u /tmp/62.X1b /tmp/spdk_tgt_config.json.u5W 00:03:59.300 + ret=1 00:03:59.300 + echo '=== Start of file: /tmp/62.X1b ===' 00:03:59.300 + cat /tmp/62.X1b 00:03:59.300 + echo '=== End of file: /tmp/62.X1b ===' 00:03:59.300 + echo '' 00:03:59.300 + echo '=== Start of file: /tmp/spdk_tgt_config.json.u5W ===' 00:03:59.300 + cat /tmp/spdk_tgt_config.json.u5W 00:03:59.300 + echo '=== End of file: /tmp/spdk_tgt_config.json.u5W ===' 00:03:59.300 + echo '' 00:03:59.300 + rm /tmp/62.X1b /tmp/spdk_tgt_config.json.u5W 00:03:59.300 + exit 1 00:03:59.300 07:00:03 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:03:59.300 INFO: configuration change detected. 
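The change-detection flow traced above saves the live config twice (via `rpc.py save_config`, normalized with `config_filter.py -method sort`), deletes a bdev in between, and expects `diff` to report a difference (`ret=1`). A reduced sketch of that shape, with assumptions: `rpc.py` is not invoked here, so two hand-written JSON snapshots stand in for the before/after configs.

```shell
# Sketch of the json_diff.sh flow: snapshot, mutate, snapshot, diff.
cfg_before=$(mktemp /tmp/cfg_before.XXXXXX)
cfg_after=$(mktemp /tmp/cfg_after.XXXXXX)

# In the real test both files come from `rpc.py save_config` piped through
# `config_filter.py -method sort`; we fake a bdev deletion between them.
printf '{"subsystems": ["bdev", "nvmf"]}\n' > "$cfg_before"
printf '{"subsystems": ["nvmf"]}\n'         > "$cfg_after"

if diff -u "$cfg_before" "$cfg_after" > /dev/null; then
    ret=0   # configs identical -> no change detected
else
    ret=1   # configs differ -> configuration change detected
fi
rm -f "$cfg_before" "$cfg_after"
echo "ret=$ret"
```

Sorting both snapshots before diffing (as the real `config_filter.py -method sort` step does) is what makes the comparison insensitive to RPC ordering, so only genuine config changes flip `ret`.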
00:03:59.300 07:00:03 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:03:59.300 07:00:03 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:03:59.300 07:00:03 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:59.300 07:00:03 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:59.300 07:00:03 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:03:59.300 07:00:03 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:03:59.300 07:00:03 json_config -- json_config/json_config.sh@324 -- # [[ -n 997914 ]] 00:03:59.300 07:00:03 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:03:59.300 07:00:03 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:03:59.300 07:00:03 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:59.300 07:00:03 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:59.300 07:00:03 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:03:59.300 07:00:03 json_config -- json_config/json_config.sh@200 -- # uname -s 00:03:59.300 07:00:03 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:03:59.300 07:00:03 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:03:59.300 07:00:03 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:03:59.300 07:00:03 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:03:59.300 07:00:03 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:03:59.300 07:00:03 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:59.300 07:00:03 json_config -- json_config/json_config.sh@330 -- # killprocess 997914 00:03:59.300 07:00:03 json_config -- common/autotest_common.sh@952 -- # '[' -z 997914 ']' 00:03:59.300 07:00:03 json_config -- common/autotest_common.sh@956 -- # kill -0 997914 
00:03:59.300 07:00:03 json_config -- common/autotest_common.sh@957 -- # uname 00:03:59.300 07:00:03 json_config -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:03:59.300 07:00:03 json_config -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 997914 00:03:59.300 07:00:03 json_config -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:03:59.300 07:00:03 json_config -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:03:59.300 07:00:03 json_config -- common/autotest_common.sh@970 -- # echo 'killing process with pid 997914' 00:03:59.300 killing process with pid 997914 00:03:59.300 07:00:03 json_config -- common/autotest_common.sh@971 -- # kill 997914 00:03:59.300 07:00:03 json_config -- common/autotest_common.sh@976 -- # wait 997914 00:04:01.205 07:00:05 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:01.205 07:00:05 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:04:01.205 07:00:05 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:01.205 07:00:05 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:01.205 07:00:05 json_config -- json_config/json_config.sh@335 -- # return 0 00:04:01.205 07:00:05 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:04:01.205 INFO: Success 00:04:01.205 00:04:01.205 real 0m15.977s 00:04:01.205 user 0m16.631s 00:04:01.205 sys 0m2.602s 00:04:01.205 07:00:05 json_config -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:01.205 07:00:05 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:01.205 ************************************ 00:04:01.205 END TEST json_config 00:04:01.205 ************************************ 00:04:01.205 07:00:05 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:01.205 07:00:05 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:01.205 07:00:05 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:01.205 07:00:05 -- common/autotest_common.sh@10 -- # set +x 00:04:01.205 ************************************ 00:04:01.205 START TEST json_config_extra_key 00:04:01.205 ************************************ 00:04:01.205 07:00:05 json_config_extra_key -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:01.205 07:00:05 json_config_extra_key -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:01.205 07:00:05 json_config_extra_key -- common/autotest_common.sh@1691 -- # lcov --version 00:04:01.205 07:00:05 json_config_extra_key -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:01.205 07:00:05 json_config_extra_key -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:01.205 07:00:05 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:01.205 07:00:05 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:01.205 07:00:05 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:01.205 07:00:05 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:04:01.205 07:00:05 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:04:01.205 07:00:05 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:04:01.205 07:00:05 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:04:01.205 07:00:05 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:04:01.205 07:00:05 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:04:01.205 07:00:05 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:04:01.205 07:00:05 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 
00:04:01.205 07:00:05 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:04:01.205 07:00:05 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:04:01.205 07:00:05 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:01.205 07:00:05 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:01.205 07:00:05 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:04:01.205 07:00:05 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:04:01.205 07:00:05 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:01.205 07:00:05 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:04:01.205 07:00:05 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:04:01.205 07:00:05 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:04:01.205 07:00:05 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:04:01.205 07:00:05 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:01.205 07:00:05 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:04:01.205 07:00:05 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:04:01.205 07:00:05 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:01.205 07:00:05 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:01.205 07:00:05 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:04:01.205 07:00:05 json_config_extra_key -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:01.205 07:00:05 json_config_extra_key -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:01.206 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:01.206 --rc genhtml_branch_coverage=1 00:04:01.206 --rc genhtml_function_coverage=1 00:04:01.206 --rc genhtml_legend=1 00:04:01.206 --rc geninfo_all_blocks=1 
00:04:01.206 --rc geninfo_unexecuted_blocks=1 00:04:01.206 00:04:01.206 ' 00:04:01.206 07:00:05 json_config_extra_key -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:01.206 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:01.206 --rc genhtml_branch_coverage=1 00:04:01.206 --rc genhtml_function_coverage=1 00:04:01.206 --rc genhtml_legend=1 00:04:01.206 --rc geninfo_all_blocks=1 00:04:01.206 --rc geninfo_unexecuted_blocks=1 00:04:01.206 00:04:01.206 ' 00:04:01.206 07:00:05 json_config_extra_key -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:01.206 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:01.206 --rc genhtml_branch_coverage=1 00:04:01.206 --rc genhtml_function_coverage=1 00:04:01.206 --rc genhtml_legend=1 00:04:01.206 --rc geninfo_all_blocks=1 00:04:01.206 --rc geninfo_unexecuted_blocks=1 00:04:01.206 00:04:01.206 ' 00:04:01.206 07:00:05 json_config_extra_key -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:01.206 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:01.206 --rc genhtml_branch_coverage=1 00:04:01.206 --rc genhtml_function_coverage=1 00:04:01.206 --rc genhtml_legend=1 00:04:01.206 --rc geninfo_all_blocks=1 00:04:01.206 --rc geninfo_unexecuted_blocks=1 00:04:01.206 00:04:01.206 ' 00:04:01.206 07:00:05 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:01.206 07:00:05 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:01.206 07:00:05 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:01.206 07:00:05 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:01.206 07:00:05 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:01.206 07:00:05 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:01.206 07:00:05 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:04:01.206 07:00:05 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:01.206 07:00:05 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:01.206 07:00:05 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:01.206 07:00:05 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:01.206 07:00:05 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:01.206 07:00:05 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:04:01.206 07:00:05 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:04:01.206 07:00:05 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:01.206 07:00:05 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:01.206 07:00:05 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:01.206 07:00:05 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:01.206 07:00:05 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:01.206 07:00:05 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:04:01.206 07:00:05 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:01.206 07:00:05 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:01.206 07:00:05 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:01.206 07:00:05 json_config_extra_key -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:01.206 07:00:05 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:01.206 07:00:05 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:01.206 07:00:05 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:01.206 07:00:05 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:01.206 07:00:05 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:04:01.206 07:00:05 json_config_extra_key -- 
nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:01.206 07:00:05 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:01.206 07:00:05 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:01.206 07:00:05 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:01.206 07:00:05 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:01.206 07:00:05 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:01.206 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:01.206 07:00:05 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:01.206 07:00:05 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:01.206 07:00:05 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:01.206 07:00:05 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:01.206 07:00:05 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:01.206 07:00:05 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:01.206 07:00:05 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:01.206 07:00:05 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:01.206 07:00:05 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:01.206 07:00:05 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:01.206 07:00:05 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # 
configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:04:01.206 07:00:05 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:01.206 07:00:05 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:01.206 07:00:05 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:01.206 INFO: launching applications... 00:04:01.206 07:00:05 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:01.206 07:00:05 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:01.206 07:00:05 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:01.206 07:00:05 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:01.206 07:00:05 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:01.206 07:00:05 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:01.206 07:00:05 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:01.206 07:00:05 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:01.206 07:00:05 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=999452 00:04:01.206 07:00:05 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:01.206 Waiting for target to run... 
00:04:01.206 07:00:05 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 999452 /var/tmp/spdk_tgt.sock 00:04:01.206 07:00:05 json_config_extra_key -- common/autotest_common.sh@833 -- # '[' -z 999452 ']' 00:04:01.206 07:00:05 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:01.206 07:00:05 json_config_extra_key -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:01.206 07:00:05 json_config_extra_key -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:01.206 07:00:05 json_config_extra_key -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:01.206 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:01.206 07:00:05 json_config_extra_key -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:01.206 07:00:05 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:01.206 [2024-11-20 07:00:05.604777] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 
00:04:01.206 [2024-11-20 07:00:05.604829] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid999452 ] 00:04:01.777 [2024-11-20 07:00:06.056413] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:01.777 [2024-11-20 07:00:06.113085] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:02.038 07:00:06 json_config_extra_key -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:02.038 07:00:06 json_config_extra_key -- common/autotest_common.sh@866 -- # return 0 00:04:02.038 07:00:06 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:02.038 00:04:02.038 07:00:06 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:04:02.038 INFO: shutting down applications... 00:04:02.038 07:00:06 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:02.038 07:00:06 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:02.038 07:00:06 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:02.038 07:00:06 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 999452 ]] 00:04:02.038 07:00:06 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 999452 00:04:02.038 07:00:06 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:02.038 07:00:06 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:02.038 07:00:06 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 999452 00:04:02.038 07:00:06 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:02.607 07:00:06 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:02.607 07:00:06 json_config_extra_key -- 
json_config/common.sh@40 -- # (( i < 30 )) 00:04:02.607 07:00:06 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 999452 00:04:02.607 07:00:06 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:02.607 07:00:06 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:02.607 07:00:06 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:02.607 07:00:06 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:02.607 SPDK target shutdown done 00:04:02.607 07:00:06 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:02.607 Success 00:04:02.607 00:04:02.607 real 0m1.582s 00:04:02.607 user 0m1.220s 00:04:02.607 sys 0m0.558s 00:04:02.607 07:00:06 json_config_extra_key -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:02.607 07:00:06 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:02.607 ************************************ 00:04:02.607 END TEST json_config_extra_key 00:04:02.607 ************************************ 00:04:02.607 07:00:06 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:02.607 07:00:06 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:02.607 07:00:06 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:02.607 07:00:06 -- common/autotest_common.sh@10 -- # set +x 00:04:02.607 ************************************ 00:04:02.607 START TEST alias_rpc 00:04:02.607 ************************************ 00:04:02.607 07:00:07 alias_rpc -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:02.607 * Looking for test storage... 
00:04:02.607 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:04:02.607 07:00:07 alias_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:02.607 07:00:07 alias_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:04:02.607 07:00:07 alias_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:02.868 07:00:07 alias_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:02.868 07:00:07 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:02.868 07:00:07 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:02.868 07:00:07 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:02.868 07:00:07 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:02.868 07:00:07 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:02.868 07:00:07 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:02.868 07:00:07 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:02.868 07:00:07 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:02.868 07:00:07 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:02.868 07:00:07 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:02.868 07:00:07 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:02.868 07:00:07 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:02.868 07:00:07 alias_rpc -- scripts/common.sh@345 -- # : 1 00:04:02.868 07:00:07 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:02.868 07:00:07 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:02.868 07:00:07 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:02.868 07:00:07 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:04:02.868 07:00:07 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:02.868 07:00:07 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:04:02.868 07:00:07 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:02.868 07:00:07 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:02.868 07:00:07 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:04:02.868 07:00:07 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:02.868 07:00:07 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:04:02.868 07:00:07 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:02.868 07:00:07 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:02.868 07:00:07 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:02.868 07:00:07 alias_rpc -- scripts/common.sh@368 -- # return 0 00:04:02.868 07:00:07 alias_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:02.868 07:00:07 alias_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:02.868 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:02.868 --rc genhtml_branch_coverage=1 00:04:02.868 --rc genhtml_function_coverage=1 00:04:02.868 --rc genhtml_legend=1 00:04:02.868 --rc geninfo_all_blocks=1 00:04:02.868 --rc geninfo_unexecuted_blocks=1 00:04:02.868 00:04:02.868 ' 00:04:02.868 07:00:07 alias_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:02.868 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:02.868 --rc genhtml_branch_coverage=1 00:04:02.868 --rc genhtml_function_coverage=1 00:04:02.868 --rc genhtml_legend=1 00:04:02.868 --rc geninfo_all_blocks=1 00:04:02.868 --rc geninfo_unexecuted_blocks=1 00:04:02.868 00:04:02.868 ' 00:04:02.868 07:00:07 alias_rpc -- common/autotest_common.sh@1705 -- 
# export 'LCOV=lcov 00:04:02.868 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:02.868 --rc genhtml_branch_coverage=1 00:04:02.868 --rc genhtml_function_coverage=1 00:04:02.868 --rc genhtml_legend=1 00:04:02.868 --rc geninfo_all_blocks=1 00:04:02.868 --rc geninfo_unexecuted_blocks=1 00:04:02.868 00:04:02.868 ' 00:04:02.868 07:00:07 alias_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:02.868 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:02.868 --rc genhtml_branch_coverage=1 00:04:02.868 --rc genhtml_function_coverage=1 00:04:02.868 --rc genhtml_legend=1 00:04:02.868 --rc geninfo_all_blocks=1 00:04:02.868 --rc geninfo_unexecuted_blocks=1 00:04:02.868 00:04:02.868 ' 00:04:02.868 07:00:07 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:02.868 07:00:07 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=999830 00:04:02.868 07:00:07 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 999830 00:04:02.868 07:00:07 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:02.868 07:00:07 alias_rpc -- common/autotest_common.sh@833 -- # '[' -z 999830 ']' 00:04:02.868 07:00:07 alias_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:02.868 07:00:07 alias_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:02.868 07:00:07 alias_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:02.868 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:02.868 07:00:07 alias_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:02.868 07:00:07 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:02.868 [2024-11-20 07:00:07.253741] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 
00:04:02.868 [2024-11-20 07:00:07.253792] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid999830 ] 00:04:02.868 [2024-11-20 07:00:07.329361] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:02.868 [2024-11-20 07:00:07.370388] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:03.128 07:00:07 alias_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:03.128 07:00:07 alias_rpc -- common/autotest_common.sh@866 -- # return 0 00:04:03.128 07:00:07 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:04:03.386 07:00:07 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 999830 00:04:03.387 07:00:07 alias_rpc -- common/autotest_common.sh@952 -- # '[' -z 999830 ']' 00:04:03.387 07:00:07 alias_rpc -- common/autotest_common.sh@956 -- # kill -0 999830 00:04:03.387 07:00:07 alias_rpc -- common/autotest_common.sh@957 -- # uname 00:04:03.387 07:00:07 alias_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:03.387 07:00:07 alias_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 999830 00:04:03.387 07:00:07 alias_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:03.387 07:00:07 alias_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:03.387 07:00:07 alias_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 999830' 00:04:03.387 killing process with pid 999830 00:04:03.387 07:00:07 alias_rpc -- common/autotest_common.sh@971 -- # kill 999830 00:04:03.387 07:00:07 alias_rpc -- common/autotest_common.sh@976 -- # wait 999830 00:04:03.646 00:04:03.646 real 0m1.155s 00:04:03.646 user 0m1.152s 00:04:03.646 sys 0m0.437s 00:04:03.646 07:00:08 alias_rpc -- 
common/autotest_common.sh@1128 -- # xtrace_disable 00:04:03.646 07:00:08 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:03.646 ************************************ 00:04:03.646 END TEST alias_rpc 00:04:03.646 ************************************ 00:04:03.906 07:00:08 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:04:03.906 07:00:08 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:03.906 07:00:08 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:03.906 07:00:08 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:03.906 07:00:08 -- common/autotest_common.sh@10 -- # set +x 00:04:03.906 ************************************ 00:04:03.906 START TEST spdkcli_tcp 00:04:03.906 ************************************ 00:04:03.906 07:00:08 spdkcli_tcp -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:03.906 * Looking for test storage... 
00:04:03.906 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:04:03.906 07:00:08 spdkcli_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:03.906 07:00:08 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:04:03.906 07:00:08 spdkcli_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:03.906 07:00:08 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:03.906 07:00:08 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:03.906 07:00:08 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:03.906 07:00:08 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:03.906 07:00:08 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:04:03.906 07:00:08 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:04:03.906 07:00:08 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:04:03.906 07:00:08 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:04:03.906 07:00:08 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:04:03.906 07:00:08 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:04:03.906 07:00:08 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:04:03.906 07:00:08 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:03.906 07:00:08 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:04:03.906 07:00:08 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:04:03.906 07:00:08 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:03.906 07:00:08 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:03.906 07:00:08 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:04:03.906 07:00:08 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:04:03.906 07:00:08 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:03.906 07:00:08 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:04:03.906 07:00:08 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:04:03.906 07:00:08 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:04:03.906 07:00:08 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:04:03.906 07:00:08 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:03.906 07:00:08 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:04:03.906 07:00:08 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:04:03.906 07:00:08 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:03.906 07:00:08 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:03.906 07:00:08 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:04:03.906 07:00:08 spdkcli_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:03.906 07:00:08 spdkcli_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:03.906 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:03.906 --rc genhtml_branch_coverage=1 00:04:03.906 --rc genhtml_function_coverage=1 00:04:03.906 --rc genhtml_legend=1 00:04:03.906 --rc geninfo_all_blocks=1 00:04:03.906 --rc geninfo_unexecuted_blocks=1 00:04:03.906 00:04:03.906 ' 00:04:03.906 07:00:08 spdkcli_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:03.906 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:03.906 --rc genhtml_branch_coverage=1 00:04:03.906 --rc genhtml_function_coverage=1 00:04:03.906 --rc genhtml_legend=1 00:04:03.906 --rc geninfo_all_blocks=1 00:04:03.906 --rc geninfo_unexecuted_blocks=1 00:04:03.906 00:04:03.906 ' 00:04:03.906 07:00:08 spdkcli_tcp -- 
common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:03.906 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:03.906 --rc genhtml_branch_coverage=1 00:04:03.906 --rc genhtml_function_coverage=1 00:04:03.906 --rc genhtml_legend=1 00:04:03.906 --rc geninfo_all_blocks=1 00:04:03.906 --rc geninfo_unexecuted_blocks=1 00:04:03.906 00:04:03.906 ' 00:04:03.906 07:00:08 spdkcli_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:03.906 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:03.906 --rc genhtml_branch_coverage=1 00:04:03.906 --rc genhtml_function_coverage=1 00:04:03.906 --rc genhtml_legend=1 00:04:03.906 --rc geninfo_all_blocks=1 00:04:03.906 --rc geninfo_unexecuted_blocks=1 00:04:03.906 00:04:03.906 ' 00:04:03.906 07:00:08 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:04:03.906 07:00:08 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:04:03.906 07:00:08 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:04:03.906 07:00:08 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:03.906 07:00:08 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:03.906 07:00:08 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:03.906 07:00:08 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:03.906 07:00:08 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:03.906 07:00:08 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:03.906 07:00:08 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:03.906 07:00:08 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=1000224 00:04:03.906 07:00:08 spdkcli_tcp -- 
spdkcli/tcp.sh@27 -- # waitforlisten 1000224 00:04:03.906 07:00:08 spdkcli_tcp -- common/autotest_common.sh@833 -- # '[' -z 1000224 ']' 00:04:03.906 07:00:08 spdkcli_tcp -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:03.906 07:00:08 spdkcli_tcp -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:03.906 07:00:08 spdkcli_tcp -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:03.906 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:03.906 07:00:08 spdkcli_tcp -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:03.906 07:00:08 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:04.166 [2024-11-20 07:00:08.473435] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 00:04:04.166 [2024-11-20 07:00:08.473488] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1000224 ] 00:04:04.166 [2024-11-20 07:00:08.546561] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:04.166 [2024-11-20 07:00:08.589423] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:04.166 [2024-11-20 07:00:08.589425] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:04.426 07:00:08 spdkcli_tcp -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:04.426 07:00:08 spdkcli_tcp -- common/autotest_common.sh@866 -- # return 0 00:04:04.426 07:00:08 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=1000292 00:04:04.426 07:00:08 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:04.426 07:00:08 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat 
TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:04.686 [ 00:04:04.686 "bdev_malloc_delete", 00:04:04.686 "bdev_malloc_create", 00:04:04.686 "bdev_null_resize", 00:04:04.686 "bdev_null_delete", 00:04:04.686 "bdev_null_create", 00:04:04.686 "bdev_nvme_cuse_unregister", 00:04:04.686 "bdev_nvme_cuse_register", 00:04:04.686 "bdev_opal_new_user", 00:04:04.686 "bdev_opal_set_lock_state", 00:04:04.686 "bdev_opal_delete", 00:04:04.686 "bdev_opal_get_info", 00:04:04.686 "bdev_opal_create", 00:04:04.686 "bdev_nvme_opal_revert", 00:04:04.686 "bdev_nvme_opal_init", 00:04:04.686 "bdev_nvme_send_cmd", 00:04:04.686 "bdev_nvme_set_keys", 00:04:04.686 "bdev_nvme_get_path_iostat", 00:04:04.686 "bdev_nvme_get_mdns_discovery_info", 00:04:04.686 "bdev_nvme_stop_mdns_discovery", 00:04:04.686 "bdev_nvme_start_mdns_discovery", 00:04:04.686 "bdev_nvme_set_multipath_policy", 00:04:04.686 "bdev_nvme_set_preferred_path", 00:04:04.686 "bdev_nvme_get_io_paths", 00:04:04.686 "bdev_nvme_remove_error_injection", 00:04:04.686 "bdev_nvme_add_error_injection", 00:04:04.686 "bdev_nvme_get_discovery_info", 00:04:04.686 "bdev_nvme_stop_discovery", 00:04:04.686 "bdev_nvme_start_discovery", 00:04:04.686 "bdev_nvme_get_controller_health_info", 00:04:04.686 "bdev_nvme_disable_controller", 00:04:04.686 "bdev_nvme_enable_controller", 00:04:04.686 "bdev_nvme_reset_controller", 00:04:04.686 "bdev_nvme_get_transport_statistics", 00:04:04.686 "bdev_nvme_apply_firmware", 00:04:04.686 "bdev_nvme_detach_controller", 00:04:04.686 "bdev_nvme_get_controllers", 00:04:04.686 "bdev_nvme_attach_controller", 00:04:04.686 "bdev_nvme_set_hotplug", 00:04:04.686 "bdev_nvme_set_options", 00:04:04.686 "bdev_passthru_delete", 00:04:04.686 "bdev_passthru_create", 00:04:04.686 "bdev_lvol_set_parent_bdev", 00:04:04.686 "bdev_lvol_set_parent", 00:04:04.686 "bdev_lvol_check_shallow_copy", 00:04:04.686 "bdev_lvol_start_shallow_copy", 00:04:04.686 "bdev_lvol_grow_lvstore", 00:04:04.686 "bdev_lvol_get_lvols", 00:04:04.686 
"bdev_lvol_get_lvstores", 00:04:04.686 "bdev_lvol_delete", 00:04:04.686 "bdev_lvol_set_read_only", 00:04:04.686 "bdev_lvol_resize", 00:04:04.686 "bdev_lvol_decouple_parent", 00:04:04.686 "bdev_lvol_inflate", 00:04:04.686 "bdev_lvol_rename", 00:04:04.686 "bdev_lvol_clone_bdev", 00:04:04.686 "bdev_lvol_clone", 00:04:04.686 "bdev_lvol_snapshot", 00:04:04.686 "bdev_lvol_create", 00:04:04.686 "bdev_lvol_delete_lvstore", 00:04:04.686 "bdev_lvol_rename_lvstore", 00:04:04.686 "bdev_lvol_create_lvstore", 00:04:04.686 "bdev_raid_set_options", 00:04:04.686 "bdev_raid_remove_base_bdev", 00:04:04.686 "bdev_raid_add_base_bdev", 00:04:04.686 "bdev_raid_delete", 00:04:04.686 "bdev_raid_create", 00:04:04.686 "bdev_raid_get_bdevs", 00:04:04.686 "bdev_error_inject_error", 00:04:04.686 "bdev_error_delete", 00:04:04.686 "bdev_error_create", 00:04:04.686 "bdev_split_delete", 00:04:04.686 "bdev_split_create", 00:04:04.686 "bdev_delay_delete", 00:04:04.686 "bdev_delay_create", 00:04:04.686 "bdev_delay_update_latency", 00:04:04.686 "bdev_zone_block_delete", 00:04:04.686 "bdev_zone_block_create", 00:04:04.686 "blobfs_create", 00:04:04.686 "blobfs_detect", 00:04:04.686 "blobfs_set_cache_size", 00:04:04.686 "bdev_aio_delete", 00:04:04.686 "bdev_aio_rescan", 00:04:04.686 "bdev_aio_create", 00:04:04.686 "bdev_ftl_set_property", 00:04:04.686 "bdev_ftl_get_properties", 00:04:04.686 "bdev_ftl_get_stats", 00:04:04.686 "bdev_ftl_unmap", 00:04:04.686 "bdev_ftl_unload", 00:04:04.686 "bdev_ftl_delete", 00:04:04.686 "bdev_ftl_load", 00:04:04.686 "bdev_ftl_create", 00:04:04.686 "bdev_virtio_attach_controller", 00:04:04.686 "bdev_virtio_scsi_get_devices", 00:04:04.686 "bdev_virtio_detach_controller", 00:04:04.686 "bdev_virtio_blk_set_hotplug", 00:04:04.686 "bdev_iscsi_delete", 00:04:04.686 "bdev_iscsi_create", 00:04:04.686 "bdev_iscsi_set_options", 00:04:04.686 "accel_error_inject_error", 00:04:04.686 "ioat_scan_accel_module", 00:04:04.686 "dsa_scan_accel_module", 00:04:04.686 "iaa_scan_accel_module", 
00:04:04.686 "vfu_virtio_create_fs_endpoint", 00:04:04.686 "vfu_virtio_create_scsi_endpoint", 00:04:04.687 "vfu_virtio_scsi_remove_target", 00:04:04.687 "vfu_virtio_scsi_add_target", 00:04:04.687 "vfu_virtio_create_blk_endpoint", 00:04:04.687 "vfu_virtio_delete_endpoint", 00:04:04.687 "keyring_file_remove_key", 00:04:04.687 "keyring_file_add_key", 00:04:04.687 "keyring_linux_set_options", 00:04:04.687 "fsdev_aio_delete", 00:04:04.687 "fsdev_aio_create", 00:04:04.687 "iscsi_get_histogram", 00:04:04.687 "iscsi_enable_histogram", 00:04:04.687 "iscsi_set_options", 00:04:04.687 "iscsi_get_auth_groups", 00:04:04.687 "iscsi_auth_group_remove_secret", 00:04:04.687 "iscsi_auth_group_add_secret", 00:04:04.687 "iscsi_delete_auth_group", 00:04:04.687 "iscsi_create_auth_group", 00:04:04.687 "iscsi_set_discovery_auth", 00:04:04.687 "iscsi_get_options", 00:04:04.687 "iscsi_target_node_request_logout", 00:04:04.687 "iscsi_target_node_set_redirect", 00:04:04.687 "iscsi_target_node_set_auth", 00:04:04.687 "iscsi_target_node_add_lun", 00:04:04.687 "iscsi_get_stats", 00:04:04.687 "iscsi_get_connections", 00:04:04.687 "iscsi_portal_group_set_auth", 00:04:04.687 "iscsi_start_portal_group", 00:04:04.687 "iscsi_delete_portal_group", 00:04:04.687 "iscsi_create_portal_group", 00:04:04.687 "iscsi_get_portal_groups", 00:04:04.687 "iscsi_delete_target_node", 00:04:04.687 "iscsi_target_node_remove_pg_ig_maps", 00:04:04.687 "iscsi_target_node_add_pg_ig_maps", 00:04:04.687 "iscsi_create_target_node", 00:04:04.687 "iscsi_get_target_nodes", 00:04:04.687 "iscsi_delete_initiator_group", 00:04:04.687 "iscsi_initiator_group_remove_initiators", 00:04:04.687 "iscsi_initiator_group_add_initiators", 00:04:04.687 "iscsi_create_initiator_group", 00:04:04.687 "iscsi_get_initiator_groups", 00:04:04.687 "nvmf_set_crdt", 00:04:04.687 "nvmf_set_config", 00:04:04.687 "nvmf_set_max_subsystems", 00:04:04.687 "nvmf_stop_mdns_prr", 00:04:04.687 "nvmf_publish_mdns_prr", 00:04:04.687 "nvmf_subsystem_get_listeners", 
00:04:04.687 "nvmf_subsystem_get_qpairs", 00:04:04.687 "nvmf_subsystem_get_controllers", 00:04:04.687 "nvmf_get_stats", 00:04:04.687 "nvmf_get_transports", 00:04:04.687 "nvmf_create_transport", 00:04:04.687 "nvmf_get_targets", 00:04:04.687 "nvmf_delete_target", 00:04:04.687 "nvmf_create_target", 00:04:04.687 "nvmf_subsystem_allow_any_host", 00:04:04.687 "nvmf_subsystem_set_keys", 00:04:04.687 "nvmf_subsystem_remove_host", 00:04:04.687 "nvmf_subsystem_add_host", 00:04:04.687 "nvmf_ns_remove_host", 00:04:04.687 "nvmf_ns_add_host", 00:04:04.687 "nvmf_subsystem_remove_ns", 00:04:04.687 "nvmf_subsystem_set_ns_ana_group", 00:04:04.687 "nvmf_subsystem_add_ns", 00:04:04.687 "nvmf_subsystem_listener_set_ana_state", 00:04:04.687 "nvmf_discovery_get_referrals", 00:04:04.687 "nvmf_discovery_remove_referral", 00:04:04.687 "nvmf_discovery_add_referral", 00:04:04.687 "nvmf_subsystem_remove_listener", 00:04:04.687 "nvmf_subsystem_add_listener", 00:04:04.687 "nvmf_delete_subsystem", 00:04:04.687 "nvmf_create_subsystem", 00:04:04.687 "nvmf_get_subsystems", 00:04:04.687 "env_dpdk_get_mem_stats", 00:04:04.687 "nbd_get_disks", 00:04:04.687 "nbd_stop_disk", 00:04:04.687 "nbd_start_disk", 00:04:04.687 "ublk_recover_disk", 00:04:04.687 "ublk_get_disks", 00:04:04.687 "ublk_stop_disk", 00:04:04.687 "ublk_start_disk", 00:04:04.687 "ublk_destroy_target", 00:04:04.687 "ublk_create_target", 00:04:04.687 "virtio_blk_create_transport", 00:04:04.687 "virtio_blk_get_transports", 00:04:04.687 "vhost_controller_set_coalescing", 00:04:04.687 "vhost_get_controllers", 00:04:04.687 "vhost_delete_controller", 00:04:04.687 "vhost_create_blk_controller", 00:04:04.687 "vhost_scsi_controller_remove_target", 00:04:04.687 "vhost_scsi_controller_add_target", 00:04:04.687 "vhost_start_scsi_controller", 00:04:04.687 "vhost_create_scsi_controller", 00:04:04.687 "thread_set_cpumask", 00:04:04.687 "scheduler_set_options", 00:04:04.687 "framework_get_governor", 00:04:04.687 "framework_get_scheduler", 00:04:04.687 
"framework_set_scheduler", 00:04:04.687 "framework_get_reactors", 00:04:04.687 "thread_get_io_channels", 00:04:04.687 "thread_get_pollers", 00:04:04.687 "thread_get_stats", 00:04:04.687 "framework_monitor_context_switch", 00:04:04.687 "spdk_kill_instance", 00:04:04.687 "log_enable_timestamps", 00:04:04.687 "log_get_flags", 00:04:04.687 "log_clear_flag", 00:04:04.687 "log_set_flag", 00:04:04.687 "log_get_level", 00:04:04.687 "log_set_level", 00:04:04.687 "log_get_print_level", 00:04:04.687 "log_set_print_level", 00:04:04.687 "framework_enable_cpumask_locks", 00:04:04.687 "framework_disable_cpumask_locks", 00:04:04.687 "framework_wait_init", 00:04:04.687 "framework_start_init", 00:04:04.687 "scsi_get_devices", 00:04:04.687 "bdev_get_histogram", 00:04:04.687 "bdev_enable_histogram", 00:04:04.687 "bdev_set_qos_limit", 00:04:04.687 "bdev_set_qd_sampling_period", 00:04:04.687 "bdev_get_bdevs", 00:04:04.687 "bdev_reset_iostat", 00:04:04.687 "bdev_get_iostat", 00:04:04.687 "bdev_examine", 00:04:04.687 "bdev_wait_for_examine", 00:04:04.687 "bdev_set_options", 00:04:04.687 "accel_get_stats", 00:04:04.687 "accel_set_options", 00:04:04.687 "accel_set_driver", 00:04:04.687 "accel_crypto_key_destroy", 00:04:04.687 "accel_crypto_keys_get", 00:04:04.687 "accel_crypto_key_create", 00:04:04.687 "accel_assign_opc", 00:04:04.687 "accel_get_module_info", 00:04:04.687 "accel_get_opc_assignments", 00:04:04.687 "vmd_rescan", 00:04:04.687 "vmd_remove_device", 00:04:04.687 "vmd_enable", 00:04:04.687 "sock_get_default_impl", 00:04:04.687 "sock_set_default_impl", 00:04:04.687 "sock_impl_set_options", 00:04:04.687 "sock_impl_get_options", 00:04:04.687 "iobuf_get_stats", 00:04:04.687 "iobuf_set_options", 00:04:04.687 "keyring_get_keys", 00:04:04.687 "vfu_tgt_set_base_path", 00:04:04.687 "framework_get_pci_devices", 00:04:04.687 "framework_get_config", 00:04:04.687 "framework_get_subsystems", 00:04:04.687 "fsdev_set_opts", 00:04:04.687 "fsdev_get_opts", 00:04:04.687 "trace_get_info", 
00:04:04.687 "trace_get_tpoint_group_mask", 00:04:04.687 "trace_disable_tpoint_group", 00:04:04.687 "trace_enable_tpoint_group", 00:04:04.687 "trace_clear_tpoint_mask", 00:04:04.687 "trace_set_tpoint_mask", 00:04:04.687 "notify_get_notifications", 00:04:04.687 "notify_get_types", 00:04:04.687 "spdk_get_version", 00:04:04.687 "rpc_get_methods" 00:04:04.687 ] 00:04:04.687 07:00:09 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:04.687 07:00:09 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:04.687 07:00:09 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:04.687 07:00:09 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:04.687 07:00:09 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 1000224 00:04:04.687 07:00:09 spdkcli_tcp -- common/autotest_common.sh@952 -- # '[' -z 1000224 ']' 00:04:04.687 07:00:09 spdkcli_tcp -- common/autotest_common.sh@956 -- # kill -0 1000224 00:04:04.687 07:00:09 spdkcli_tcp -- common/autotest_common.sh@957 -- # uname 00:04:04.687 07:00:09 spdkcli_tcp -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:04.687 07:00:09 spdkcli_tcp -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1000224 00:04:04.687 07:00:09 spdkcli_tcp -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:04.687 07:00:09 spdkcli_tcp -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:04.687 07:00:09 spdkcli_tcp -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1000224' 00:04:04.687 killing process with pid 1000224 00:04:04.687 07:00:09 spdkcli_tcp -- common/autotest_common.sh@971 -- # kill 1000224 00:04:04.687 07:00:09 spdkcli_tcp -- common/autotest_common.sh@976 -- # wait 1000224 00:04:04.946 00:04:04.946 real 0m1.168s 00:04:04.946 user 0m2.006s 00:04:04.946 sys 0m0.424s 00:04:04.946 07:00:09 spdkcli_tcp -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:04.946 07:00:09 spdkcli_tcp -- 
common/autotest_common.sh@10 -- # set +x 00:04:04.946 ************************************ 00:04:04.946 END TEST spdkcli_tcp 00:04:04.946 ************************************ 00:04:04.946 07:00:09 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:04.946 07:00:09 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:04.946 07:00:09 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:04.946 07:00:09 -- common/autotest_common.sh@10 -- # set +x 00:04:04.946 ************************************ 00:04:04.946 START TEST dpdk_mem_utility 00:04:04.946 ************************************ 00:04:04.946 07:00:09 dpdk_mem_utility -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:05.205 * Looking for test storage... 00:04:05.205 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:04:05.205 07:00:09 dpdk_mem_utility -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:05.205 07:00:09 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lcov --version 00:04:05.205 07:00:09 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:05.205 07:00:09 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:05.205 07:00:09 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:05.205 07:00:09 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:05.205 07:00:09 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:05.205 07:00:09 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:04:05.205 07:00:09 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:04:05.205 07:00:09 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:04:05.205 07:00:09 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 
00:04:05.205 07:00:09 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:04:05.205 07:00:09 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:04:05.205 07:00:09 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:04:05.205 07:00:09 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:05.205 07:00:09 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:04:05.205 07:00:09 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:04:05.205 07:00:09 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:05.205 07:00:09 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:05.205 07:00:09 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:04:05.205 07:00:09 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:04:05.205 07:00:09 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:05.205 07:00:09 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:04:05.205 07:00:09 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:04:05.205 07:00:09 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:04:05.205 07:00:09 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:04:05.205 07:00:09 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:05.205 07:00:09 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:04:05.205 07:00:09 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:04:05.205 07:00:09 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:05.205 07:00:09 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:05.205 07:00:09 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:04:05.205 07:00:09 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:05.205 07:00:09 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 
00:04:05.205 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:05.205 --rc genhtml_branch_coverage=1 00:04:05.205 --rc genhtml_function_coverage=1 00:04:05.205 --rc genhtml_legend=1 00:04:05.205 --rc geninfo_all_blocks=1 00:04:05.205 --rc geninfo_unexecuted_blocks=1 00:04:05.205 00:04:05.205 ' 00:04:05.205 07:00:09 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:05.205 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:05.205 --rc genhtml_branch_coverage=1 00:04:05.205 --rc genhtml_function_coverage=1 00:04:05.205 --rc genhtml_legend=1 00:04:05.205 --rc geninfo_all_blocks=1 00:04:05.205 --rc geninfo_unexecuted_blocks=1 00:04:05.205 00:04:05.205 ' 00:04:05.205 07:00:09 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:05.205 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:05.205 --rc genhtml_branch_coverage=1 00:04:05.205 --rc genhtml_function_coverage=1 00:04:05.205 --rc genhtml_legend=1 00:04:05.205 --rc geninfo_all_blocks=1 00:04:05.205 --rc geninfo_unexecuted_blocks=1 00:04:05.205 00:04:05.205 ' 00:04:05.205 07:00:09 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:05.205 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:05.205 --rc genhtml_branch_coverage=1 00:04:05.205 --rc genhtml_function_coverage=1 00:04:05.205 --rc genhtml_legend=1 00:04:05.205 --rc geninfo_all_blocks=1 00:04:05.205 --rc geninfo_unexecuted_blocks=1 00:04:05.205 00:04:05.205 ' 00:04:05.205 07:00:09 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:05.205 07:00:09 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:05.205 07:00:09 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=1000630 00:04:05.205 07:00:09 dpdk_mem_utility -- 
dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 1000630 00:04:05.205 07:00:09 dpdk_mem_utility -- common/autotest_common.sh@833 -- # '[' -z 1000630 ']' 00:04:05.205 07:00:09 dpdk_mem_utility -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:05.205 07:00:09 dpdk_mem_utility -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:05.205 07:00:09 dpdk_mem_utility -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:05.205 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:05.205 07:00:09 dpdk_mem_utility -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:05.205 07:00:09 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:05.205 [2024-11-20 07:00:09.701294] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 00:04:05.205 [2024-11-20 07:00:09.701345] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1000630 ] 00:04:05.464 [2024-11-20 07:00:09.778441] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:05.464 [2024-11-20 07:00:09.821410] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:05.723 07:00:10 dpdk_mem_utility -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:05.723 07:00:10 dpdk_mem_utility -- common/autotest_common.sh@866 -- # return 0 00:04:05.724 07:00:10 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:05.724 07:00:10 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:05.724 07:00:10 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:05.724 
07:00:10 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:05.724 { 00:04:05.724 "filename": "/tmp/spdk_mem_dump.txt" 00:04:05.724 } 00:04:05.724 07:00:10 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:05.724 07:00:10 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:05.724 DPDK memory size 818.000000 MiB in 1 heap(s) 00:04:05.724 1 heaps totaling size 818.000000 MiB 00:04:05.724 size: 818.000000 MiB heap id: 0 00:04:05.724 end heaps---------- 00:04:05.724 9 mempools totaling size 603.782043 MiB 00:04:05.724 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:05.724 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:05.724 size: 100.555481 MiB name: bdev_io_1000630 00:04:05.724 size: 50.003479 MiB name: msgpool_1000630 00:04:05.724 size: 36.509338 MiB name: fsdev_io_1000630 00:04:05.724 size: 21.763794 MiB name: PDU_Pool 00:04:05.724 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:05.724 size: 4.133484 MiB name: evtpool_1000630 00:04:05.724 size: 0.026123 MiB name: Session_Pool 00:04:05.724 end mempools------- 00:04:05.724 6 memzones totaling size 4.142822 MiB 00:04:05.724 size: 1.000366 MiB name: RG_ring_0_1000630 00:04:05.724 size: 1.000366 MiB name: RG_ring_1_1000630 00:04:05.724 size: 1.000366 MiB name: RG_ring_4_1000630 00:04:05.724 size: 1.000366 MiB name: RG_ring_5_1000630 00:04:05.724 size: 0.125366 MiB name: RG_ring_2_1000630 00:04:05.724 size: 0.015991 MiB name: RG_ring_3_1000630 00:04:05.724 end memzones------- 00:04:05.724 07:00:10 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:04:05.724 heap id: 0 total size: 818.000000 MiB number of busy elements: 44 number of free elements: 15 00:04:05.724 list of free elements. 
size: 10.852478 MiB 00:04:05.724 element at address: 0x200019200000 with size: 0.999878 MiB 00:04:05.724 element at address: 0x200019400000 with size: 0.999878 MiB 00:04:05.724 element at address: 0x200000400000 with size: 0.998535 MiB 00:04:05.724 element at address: 0x200032000000 with size: 0.994446 MiB 00:04:05.724 element at address: 0x200006400000 with size: 0.959839 MiB 00:04:05.724 element at address: 0x200012c00000 with size: 0.944275 MiB 00:04:05.724 element at address: 0x200019600000 with size: 0.936584 MiB 00:04:05.724 element at address: 0x200000200000 with size: 0.717346 MiB 00:04:05.724 element at address: 0x20001ae00000 with size: 0.582886 MiB 00:04:05.724 element at address: 0x200000c00000 with size: 0.495422 MiB 00:04:05.724 element at address: 0x20000a600000 with size: 0.490723 MiB 00:04:05.724 element at address: 0x200019800000 with size: 0.485657 MiB 00:04:05.724 element at address: 0x200003e00000 with size: 0.481934 MiB 00:04:05.724 element at address: 0x200028200000 with size: 0.410034 MiB 00:04:05.724 element at address: 0x200000800000 with size: 0.355042 MiB 00:04:05.724 list of standard malloc elements. 
size: 199.218628 MiB 00:04:05.724 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:04:05.724 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:04:05.724 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:04:05.724 element at address: 0x2000194fff80 with size: 1.000122 MiB 00:04:05.724 element at address: 0x2000196fff80 with size: 1.000122 MiB 00:04:05.724 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:04:05.724 element at address: 0x2000196eff00 with size: 0.062622 MiB 00:04:05.724 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:04:05.724 element at address: 0x2000196efdc0 with size: 0.000305 MiB 00:04:05.724 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:04:05.724 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:04:05.724 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:04:05.724 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:04:05.724 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:04:05.724 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:04:05.724 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:04:05.724 element at address: 0x20000085ae40 with size: 0.000183 MiB 00:04:05.724 element at address: 0x20000085b040 with size: 0.000183 MiB 00:04:05.724 element at address: 0x20000085f300 with size: 0.000183 MiB 00:04:05.724 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:04:05.724 element at address: 0x20000087f680 with size: 0.000183 MiB 00:04:05.724 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:04:05.724 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:04:05.724 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:04:05.724 element at address: 0x200000cff000 with size: 0.000183 MiB 00:04:05.724 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:04:05.724 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:04:05.724 element at 
address: 0x200003e7b6c0 with size: 0.000183 MiB 00:04:05.724 element at address: 0x200003efb980 with size: 0.000183 MiB 00:04:05.724 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:04:05.724 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:04:05.724 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:04:05.724 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 00:04:05.724 element at address: 0x200012cf1bc0 with size: 0.000183 MiB 00:04:05.724 element at address: 0x2000196efc40 with size: 0.000183 MiB 00:04:05.724 element at address: 0x2000196efd00 with size: 0.000183 MiB 00:04:05.724 element at address: 0x2000198bc740 with size: 0.000183 MiB 00:04:05.724 element at address: 0x20001ae95380 with size: 0.000183 MiB 00:04:05.724 element at address: 0x20001ae95440 with size: 0.000183 MiB 00:04:05.724 element at address: 0x200028268f80 with size: 0.000183 MiB 00:04:05.724 element at address: 0x200028269040 with size: 0.000183 MiB 00:04:05.724 element at address: 0x20002826fc40 with size: 0.000183 MiB 00:04:05.724 element at address: 0x20002826fe40 with size: 0.000183 MiB 00:04:05.724 element at address: 0x20002826ff00 with size: 0.000183 MiB 00:04:05.724 list of memzone associated elements. 
size: 607.928894 MiB 00:04:05.724 element at address: 0x20001ae95500 with size: 211.416748 MiB 00:04:05.724 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:05.724 element at address: 0x20002826ffc0 with size: 157.562561 MiB 00:04:05.724 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:05.724 element at address: 0x200012df1e80 with size: 100.055054 MiB 00:04:05.724 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_1000630_0 00:04:05.724 element at address: 0x200000dff380 with size: 48.003052 MiB 00:04:05.724 associated memzone info: size: 48.002930 MiB name: MP_msgpool_1000630_0 00:04:05.724 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:04:05.724 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_1000630_0 00:04:05.724 element at address: 0x2000199be940 with size: 20.255554 MiB 00:04:05.724 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:05.724 element at address: 0x2000321feb40 with size: 18.005066 MiB 00:04:05.724 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:05.724 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:04:05.724 associated memzone info: size: 3.000122 MiB name: MP_evtpool_1000630_0 00:04:05.724 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:04:05.724 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_1000630 00:04:05.724 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:04:05.724 associated memzone info: size: 1.007996 MiB name: MP_evtpool_1000630 00:04:05.724 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:04:05.724 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:05.724 element at address: 0x2000198bc800 with size: 1.008118 MiB 00:04:05.724 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:05.724 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:04:05.724 
associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:05.724 element at address: 0x200003efba40 with size: 1.008118 MiB 00:04:05.724 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:05.724 element at address: 0x200000cff180 with size: 1.000488 MiB 00:04:05.724 associated memzone info: size: 1.000366 MiB name: RG_ring_0_1000630 00:04:05.724 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:04:05.724 associated memzone info: size: 1.000366 MiB name: RG_ring_1_1000630 00:04:05.724 element at address: 0x200012cf1c80 with size: 1.000488 MiB 00:04:05.724 associated memzone info: size: 1.000366 MiB name: RG_ring_4_1000630 00:04:05.724 element at address: 0x2000320fe940 with size: 1.000488 MiB 00:04:05.724 associated memzone info: size: 1.000366 MiB name: RG_ring_5_1000630 00:04:05.725 element at address: 0x20000087f740 with size: 0.500488 MiB 00:04:05.725 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_1000630 00:04:05.725 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:04:05.725 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_1000630 00:04:05.725 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:04:05.725 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:05.725 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:04:05.725 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:05.725 element at address: 0x20001987c540 with size: 0.250488 MiB 00:04:05.725 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:05.725 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:04:05.725 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_1000630 00:04:05.725 element at address: 0x20000085f3c0 with size: 0.125488 MiB 00:04:05.725 associated memzone info: size: 0.125366 MiB name: RG_ring_2_1000630 00:04:05.725 element at address: 0x2000064f5b80 with size: 0.031738 
MiB 00:04:05.725 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:05.725 element at address: 0x200028269100 with size: 0.023743 MiB 00:04:05.725 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:05.725 element at address: 0x20000085b100 with size: 0.016113 MiB 00:04:05.725 associated memzone info: size: 0.015991 MiB name: RG_ring_3_1000630 00:04:05.725 element at address: 0x20002826f240 with size: 0.002441 MiB 00:04:05.725 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:05.725 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:04:05.725 associated memzone info: size: 0.000183 MiB name: MP_msgpool_1000630 00:04:05.725 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:04:05.725 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_1000630 00:04:05.725 element at address: 0x20000085af00 with size: 0.000305 MiB 00:04:05.725 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_1000630 00:04:05.725 element at address: 0x20002826fd00 with size: 0.000305 MiB 00:04:05.725 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:05.725 07:00:10 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:05.725 07:00:10 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 1000630 00:04:05.725 07:00:10 dpdk_mem_utility -- common/autotest_common.sh@952 -- # '[' -z 1000630 ']' 00:04:05.725 07:00:10 dpdk_mem_utility -- common/autotest_common.sh@956 -- # kill -0 1000630 00:04:05.725 07:00:10 dpdk_mem_utility -- common/autotest_common.sh@957 -- # uname 00:04:05.725 07:00:10 dpdk_mem_utility -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:05.725 07:00:10 dpdk_mem_utility -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1000630 00:04:05.725 07:00:10 dpdk_mem_utility -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:05.725 07:00:10 
dpdk_mem_utility -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:05.725 07:00:10 dpdk_mem_utility -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1000630' 00:04:05.725 killing process with pid 1000630 00:04:05.725 07:00:10 dpdk_mem_utility -- common/autotest_common.sh@971 -- # kill 1000630 00:04:05.725 07:00:10 dpdk_mem_utility -- common/autotest_common.sh@976 -- # wait 1000630 00:04:05.983 00:04:05.983 real 0m1.013s 00:04:05.983 user 0m0.945s 00:04:05.983 sys 0m0.406s 00:04:05.983 07:00:10 dpdk_mem_utility -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:05.983 07:00:10 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:05.983 ************************************ 00:04:05.983 END TEST dpdk_mem_utility 00:04:05.983 ************************************ 00:04:05.983 07:00:10 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:05.983 07:00:10 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:05.983 07:00:10 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:05.983 07:00:10 -- common/autotest_common.sh@10 -- # set +x 00:04:06.242 ************************************ 00:04:06.242 START TEST event 00:04:06.242 ************************************ 00:04:06.242 07:00:10 event -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:06.242 * Looking for test storage... 
00:04:06.242 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:04:06.242 07:00:10 event -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:06.242 07:00:10 event -- common/autotest_common.sh@1691 -- # lcov --version 00:04:06.242 07:00:10 event -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:06.242 07:00:10 event -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:06.242 07:00:10 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:06.242 07:00:10 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:06.242 07:00:10 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:06.242 07:00:10 event -- scripts/common.sh@336 -- # IFS=.-: 00:04:06.242 07:00:10 event -- scripts/common.sh@336 -- # read -ra ver1 00:04:06.242 07:00:10 event -- scripts/common.sh@337 -- # IFS=.-: 00:04:06.242 07:00:10 event -- scripts/common.sh@337 -- # read -ra ver2 00:04:06.242 07:00:10 event -- scripts/common.sh@338 -- # local 'op=<' 00:04:06.242 07:00:10 event -- scripts/common.sh@340 -- # ver1_l=2 00:04:06.242 07:00:10 event -- scripts/common.sh@341 -- # ver2_l=1 00:04:06.242 07:00:10 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:06.242 07:00:10 event -- scripts/common.sh@344 -- # case "$op" in 00:04:06.242 07:00:10 event -- scripts/common.sh@345 -- # : 1 00:04:06.242 07:00:10 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:06.242 07:00:10 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:06.242 07:00:10 event -- scripts/common.sh@365 -- # decimal 1 00:04:06.242 07:00:10 event -- scripts/common.sh@353 -- # local d=1 00:04:06.242 07:00:10 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:06.242 07:00:10 event -- scripts/common.sh@355 -- # echo 1 00:04:06.242 07:00:10 event -- scripts/common.sh@365 -- # ver1[v]=1 00:04:06.242 07:00:10 event -- scripts/common.sh@366 -- # decimal 2 00:04:06.242 07:00:10 event -- scripts/common.sh@353 -- # local d=2 00:04:06.242 07:00:10 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:06.242 07:00:10 event -- scripts/common.sh@355 -- # echo 2 00:04:06.242 07:00:10 event -- scripts/common.sh@366 -- # ver2[v]=2 00:04:06.242 07:00:10 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:06.242 07:00:10 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:06.242 07:00:10 event -- scripts/common.sh@368 -- # return 0 00:04:06.242 07:00:10 event -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:06.242 07:00:10 event -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:06.242 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:06.242 --rc genhtml_branch_coverage=1 00:04:06.242 --rc genhtml_function_coverage=1 00:04:06.242 --rc genhtml_legend=1 00:04:06.242 --rc geninfo_all_blocks=1 00:04:06.242 --rc geninfo_unexecuted_blocks=1 00:04:06.242 00:04:06.242 ' 00:04:06.242 07:00:10 event -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:06.242 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:06.242 --rc genhtml_branch_coverage=1 00:04:06.242 --rc genhtml_function_coverage=1 00:04:06.242 --rc genhtml_legend=1 00:04:06.242 --rc geninfo_all_blocks=1 00:04:06.242 --rc geninfo_unexecuted_blocks=1 00:04:06.242 00:04:06.242 ' 00:04:06.242 07:00:10 event -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:06.242 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:04:06.242 --rc genhtml_branch_coverage=1 00:04:06.242 --rc genhtml_function_coverage=1 00:04:06.242 --rc genhtml_legend=1 00:04:06.242 --rc geninfo_all_blocks=1 00:04:06.242 --rc geninfo_unexecuted_blocks=1 00:04:06.242 00:04:06.242 ' 00:04:06.242 07:00:10 event -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:06.242 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:06.242 --rc genhtml_branch_coverage=1 00:04:06.242 --rc genhtml_function_coverage=1 00:04:06.242 --rc genhtml_legend=1 00:04:06.242 --rc geninfo_all_blocks=1 00:04:06.242 --rc geninfo_unexecuted_blocks=1 00:04:06.242 00:04:06.242 ' 00:04:06.242 07:00:10 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:04:06.242 07:00:10 event -- bdev/nbd_common.sh@6 -- # set -e 00:04:06.242 07:00:10 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:06.242 07:00:10 event -- common/autotest_common.sh@1103 -- # '[' 6 -le 1 ']' 00:04:06.242 07:00:10 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:06.242 07:00:10 event -- common/autotest_common.sh@10 -- # set +x 00:04:06.242 ************************************ 00:04:06.242 START TEST event_perf 00:04:06.242 ************************************ 00:04:06.242 07:00:10 event.event_perf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:06.242 Running I/O for 1 seconds...[2024-11-20 07:00:10.786050] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 
00:04:06.242 [2024-11-20 07:00:10.786123] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1000893 ] 00:04:06.502 [2024-11-20 07:00:10.866129] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:06.502 [2024-11-20 07:00:10.911878] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:06.502 [2024-11-20 07:00:10.911990] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:06.502 [2024-11-20 07:00:10.912039] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:06.502 [2024-11-20 07:00:10.912040] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:07.437 Running I/O for 1 seconds... 00:04:07.437 lcore 0: 204369 00:04:07.437 lcore 1: 204368 00:04:07.437 lcore 2: 204370 00:04:07.437 lcore 3: 204369 00:04:07.437 done. 
00:04:07.437 00:04:07.437 real 0m1.187s 00:04:07.438 user 0m4.105s 00:04:07.438 sys 0m0.078s 00:04:07.438 07:00:11 event.event_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:07.438 07:00:11 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:04:07.438 ************************************ 00:04:07.438 END TEST event_perf 00:04:07.438 ************************************ 00:04:07.438 07:00:11 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:07.438 07:00:11 event -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:04:07.438 07:00:11 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:07.438 07:00:11 event -- common/autotest_common.sh@10 -- # set +x 00:04:07.697 ************************************ 00:04:07.697 START TEST event_reactor 00:04:07.697 ************************************ 00:04:07.697 07:00:12 event.event_reactor -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:07.697 [2024-11-20 07:00:12.047208] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 
00:04:07.697 [2024-11-20 07:00:12.047273] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1001145 ] 00:04:07.697 [2024-11-20 07:00:12.125817] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:07.697 [2024-11-20 07:00:12.167484] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:09.075 test_start 00:04:09.075 oneshot 00:04:09.075 tick 100 00:04:09.075 tick 100 00:04:09.075 tick 250 00:04:09.075 tick 100 00:04:09.075 tick 100 00:04:09.075 tick 100 00:04:09.075 tick 250 00:04:09.075 tick 500 00:04:09.075 tick 100 00:04:09.075 tick 100 00:04:09.075 tick 250 00:04:09.075 tick 100 00:04:09.075 tick 100 00:04:09.075 test_end 00:04:09.075 00:04:09.075 real 0m1.182s 00:04:09.075 user 0m1.103s 00:04:09.075 sys 0m0.075s 00:04:09.075 07:00:13 event.event_reactor -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:09.075 07:00:13 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:04:09.075 ************************************ 00:04:09.075 END TEST event_reactor 00:04:09.075 ************************************ 00:04:09.075 07:00:13 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:09.075 07:00:13 event -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:04:09.075 07:00:13 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:09.075 07:00:13 event -- common/autotest_common.sh@10 -- # set +x 00:04:09.075 ************************************ 00:04:09.075 START TEST event_reactor_perf 00:04:09.075 ************************************ 00:04:09.075 07:00:13 event.event_reactor_perf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf 
-t 1 00:04:09.075 [2024-11-20 07:00:13.300755] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 00:04:09.075 [2024-11-20 07:00:13.300828] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1001397 ] 00:04:09.075 [2024-11-20 07:00:13.378931] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:09.075 [2024-11-20 07:00:13.418854] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:10.011 test_start 00:04:10.011 test_end 00:04:10.011 Performance: 504724 events per second 00:04:10.011 00:04:10.011 real 0m1.176s 00:04:10.011 user 0m1.093s 00:04:10.011 sys 0m0.078s 00:04:10.011 07:00:14 event.event_reactor_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:10.011 07:00:14 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:04:10.011 ************************************ 00:04:10.011 END TEST event_reactor_perf 00:04:10.011 ************************************ 00:04:10.011 07:00:14 event -- event/event.sh@49 -- # uname -s 00:04:10.011 07:00:14 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:10.011 07:00:14 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:10.011 07:00:14 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:10.011 07:00:14 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:10.011 07:00:14 event -- common/autotest_common.sh@10 -- # set +x 00:04:10.011 ************************************ 00:04:10.011 START TEST event_scheduler 00:04:10.011 ************************************ 00:04:10.011 07:00:14 event.event_scheduler -- common/autotest_common.sh@1127 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:10.270 * Looking for test storage... 00:04:10.270 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:04:10.270 07:00:14 event.event_scheduler -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:10.270 07:00:14 event.event_scheduler -- common/autotest_common.sh@1691 -- # lcov --version 00:04:10.270 07:00:14 event.event_scheduler -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:10.270 07:00:14 event.event_scheduler -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:10.270 07:00:14 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:10.270 07:00:14 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:10.270 07:00:14 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:10.270 07:00:14 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:04:10.270 07:00:14 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:04:10.270 07:00:14 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:04:10.270 07:00:14 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:04:10.270 07:00:14 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:04:10.270 07:00:14 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:04:10.270 07:00:14 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:04:10.270 07:00:14 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:10.270 07:00:14 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:04:10.270 07:00:14 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:04:10.270 07:00:14 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:10.270 07:00:14 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:10.270 07:00:14 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:04:10.270 07:00:14 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:04:10.270 07:00:14 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:10.270 07:00:14 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:04:10.270 07:00:14 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:04:10.270 07:00:14 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:04:10.270 07:00:14 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:04:10.270 07:00:14 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:10.270 07:00:14 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:04:10.270 07:00:14 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:04:10.270 07:00:14 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:10.270 07:00:14 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:10.270 07:00:14 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:04:10.270 07:00:14 event.event_scheduler -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:10.270 07:00:14 event.event_scheduler -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:10.270 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:10.270 --rc genhtml_branch_coverage=1 00:04:10.270 --rc genhtml_function_coverage=1 00:04:10.270 --rc genhtml_legend=1 00:04:10.270 --rc geninfo_all_blocks=1 00:04:10.270 --rc geninfo_unexecuted_blocks=1 00:04:10.270 00:04:10.270 ' 00:04:10.270 07:00:14 event.event_scheduler -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:10.270 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:10.270 --rc genhtml_branch_coverage=1 00:04:10.270 --rc genhtml_function_coverage=1 00:04:10.270 --rc 
genhtml_legend=1 00:04:10.270 --rc geninfo_all_blocks=1 00:04:10.270 --rc geninfo_unexecuted_blocks=1 00:04:10.270 00:04:10.270 ' 00:04:10.270 07:00:14 event.event_scheduler -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:10.270 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:10.270 --rc genhtml_branch_coverage=1 00:04:10.270 --rc genhtml_function_coverage=1 00:04:10.270 --rc genhtml_legend=1 00:04:10.270 --rc geninfo_all_blocks=1 00:04:10.270 --rc geninfo_unexecuted_blocks=1 00:04:10.270 00:04:10.270 ' 00:04:10.270 07:00:14 event.event_scheduler -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:10.270 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:10.270 --rc genhtml_branch_coverage=1 00:04:10.270 --rc genhtml_function_coverage=1 00:04:10.270 --rc genhtml_legend=1 00:04:10.270 --rc geninfo_all_blocks=1 00:04:10.270 --rc geninfo_unexecuted_blocks=1 00:04:10.270 00:04:10.270 ' 00:04:10.270 07:00:14 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:10.270 07:00:14 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=1001682 00:04:10.270 07:00:14 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:10.270 07:00:14 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:04:10.270 07:00:14 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 1001682 00:04:10.270 07:00:14 event.event_scheduler -- common/autotest_common.sh@833 -- # '[' -z 1001682 ']' 00:04:10.270 07:00:14 event.event_scheduler -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:10.270 07:00:14 event.event_scheduler -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:10.270 07:00:14 event.event_scheduler -- common/autotest_common.sh@840 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:10.270 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:10.270 07:00:14 event.event_scheduler -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:10.270 07:00:14 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:10.270 [2024-11-20 07:00:14.753894] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 00:04:10.270 [2024-11-20 07:00:14.753937] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1001682 ] 00:04:10.529 [2024-11-20 07:00:14.826151] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:10.529 [2024-11-20 07:00:14.872011] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:10.529 [2024-11-20 07:00:14.872122] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:10.529 [2024-11-20 07:00:14.872228] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:10.529 [2024-11-20 07:00:14.872228] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:10.529 07:00:14 event.event_scheduler -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:10.529 07:00:14 event.event_scheduler -- common/autotest_common.sh@866 -- # return 0 00:04:10.529 07:00:14 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:10.529 07:00:14 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:10.529 07:00:14 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:10.529 [2024-11-20 07:00:14.916722] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:04:10.529 [2024-11-20 07:00:14.916737] 
scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:04:10.529 [2024-11-20 07:00:14.916747] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:04:10.529 [2024-11-20 07:00:14.916753] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:04:10.529 [2024-11-20 07:00:14.916758] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:04:10.529 07:00:14 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:10.529 07:00:14 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:10.529 07:00:14 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:10.529 07:00:14 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:10.530 [2024-11-20 07:00:14.992563] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:04:10.530 07:00:14 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:10.530 07:00:14 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:10.530 07:00:14 event.event_scheduler -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:10.530 07:00:14 event.event_scheduler -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:10.530 07:00:14 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:10.530 ************************************ 00:04:10.530 START TEST scheduler_create_thread 00:04:10.530 ************************************ 00:04:10.530 07:00:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1127 -- # scheduler_create_thread 00:04:10.530 07:00:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:10.530 07:00:15 event.event_scheduler.scheduler_create_thread 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:04:10.530 07:00:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:10.530 2 00:04:10.530 07:00:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:10.530 07:00:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:10.530 07:00:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:10.530 07:00:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:10.530 3 00:04:10.530 07:00:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:10.530 07:00:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:10.530 07:00:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:10.530 07:00:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:10.530 4 00:04:10.530 07:00:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:10.530 07:00:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:10.530 07:00:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:10.530 07:00:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:10.530 5 00:04:10.530 07:00:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:10.530 07:00:15 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:10.530 07:00:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:10.530 07:00:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:10.530 6 00:04:10.530 07:00:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:10.530 07:00:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:10.530 07:00:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:10.530 07:00:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:10.788 7 00:04:10.788 07:00:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:10.788 07:00:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:10.788 07:00:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:10.788 07:00:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:10.788 8 00:04:10.788 07:00:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:10.788 07:00:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:10.788 07:00:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:10.788 07:00:15 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:10.788 9 00:04:10.788 07:00:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:10.788 07:00:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:10.788 07:00:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:10.788 07:00:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:10.788 10 00:04:10.788 07:00:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:10.788 07:00:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:04:10.788 07:00:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:10.788 07:00:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:10.788 07:00:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:10.788 07:00:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:10.788 07:00:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:10.788 07:00:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:10.788 07:00:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:10.788 07:00:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:10.788 07:00:15 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:10.788 07:00:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:10.788 07:00:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:12.164 07:00:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:12.164 07:00:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:04:12.164 07:00:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:04:12.164 07:00:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:12.165 07:00:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:13.101 07:00:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:13.101 00:04:13.101 real 0m2.620s 00:04:13.101 user 0m0.025s 00:04:13.101 sys 0m0.005s 00:04:13.101 07:00:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:13.101 07:00:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:13.101 ************************************ 00:04:13.101 END TEST scheduler_create_thread 00:04:13.101 ************************************ 00:04:13.359 07:00:17 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:04:13.359 07:00:17 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 1001682 00:04:13.359 07:00:17 event.event_scheduler -- common/autotest_common.sh@952 -- # '[' -z 1001682 ']' 00:04:13.359 07:00:17 event.event_scheduler -- common/autotest_common.sh@956 -- # 
kill -0 1001682 00:04:13.359 07:00:17 event.event_scheduler -- common/autotest_common.sh@957 -- # uname 00:04:13.360 07:00:17 event.event_scheduler -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:13.360 07:00:17 event.event_scheduler -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1001682 00:04:13.360 07:00:17 event.event_scheduler -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:04:13.360 07:00:17 event.event_scheduler -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:04:13.360 07:00:17 event.event_scheduler -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1001682' 00:04:13.360 killing process with pid 1001682 00:04:13.360 07:00:17 event.event_scheduler -- common/autotest_common.sh@971 -- # kill 1001682 00:04:13.360 07:00:17 event.event_scheduler -- common/autotest_common.sh@976 -- # wait 1001682 00:04:13.618 [2024-11-20 07:00:18.126871] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
00:04:13.877 00:04:13.877 real 0m3.761s 00:04:13.877 user 0m5.643s 00:04:13.877 sys 0m0.355s 00:04:13.877 07:00:18 event.event_scheduler -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:13.877 07:00:18 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:13.877 ************************************ 00:04:13.877 END TEST event_scheduler 00:04:13.877 ************************************ 00:04:13.877 07:00:18 event -- event/event.sh@51 -- # modprobe -n nbd 00:04:13.877 07:00:18 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:04:13.877 07:00:18 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:13.877 07:00:18 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:13.877 07:00:18 event -- common/autotest_common.sh@10 -- # set +x 00:04:13.877 ************************************ 00:04:13.877 START TEST app_repeat 00:04:13.877 ************************************ 00:04:13.877 07:00:18 event.app_repeat -- common/autotest_common.sh@1127 -- # app_repeat_test 00:04:13.877 07:00:18 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:13.877 07:00:18 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:13.877 07:00:18 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:04:13.877 07:00:18 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:13.877 07:00:18 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:04:13.877 07:00:18 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:04:13.877 07:00:18 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:04:13.877 07:00:18 event.app_repeat -- event/event.sh@19 -- # repeat_pid=1002411 00:04:13.878 07:00:18 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:04:13.878 07:00:18 event.app_repeat -- event/event.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:04:13.878 07:00:18 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 1002411' 00:04:13.878 Process app_repeat pid: 1002411 00:04:13.878 07:00:18 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:13.878 07:00:18 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:04:13.878 spdk_app_start Round 0 00:04:13.878 07:00:18 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1002411 /var/tmp/spdk-nbd.sock 00:04:13.878 07:00:18 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 1002411 ']' 00:04:13.878 07:00:18 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:13.878 07:00:18 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:13.878 07:00:18 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:13.878 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:13.878 07:00:18 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:13.878 07:00:18 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:13.878 [2024-11-20 07:00:18.409839] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 
00:04:13.878 [2024-11-20 07:00:18.409893] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1002411 ] 00:04:14.137 [2024-11-20 07:00:18.486704] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:14.137 [2024-11-20 07:00:18.528232] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:14.137 [2024-11-20 07:00:18.528233] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:14.137 07:00:18 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:14.137 07:00:18 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:04:14.137 07:00:18 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:14.395 Malloc0 00:04:14.395 07:00:18 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:14.654 Malloc1 00:04:14.654 07:00:19 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:14.654 07:00:19 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:14.654 07:00:19 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:14.654 07:00:19 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:14.654 07:00:19 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:14.654 07:00:19 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:14.654 07:00:19 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:14.654 
07:00:19 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:14.654 07:00:19 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:14.654 07:00:19 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:14.654 07:00:19 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:14.654 07:00:19 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:14.654 07:00:19 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:14.654 07:00:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:14.654 07:00:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:14.654 07:00:19 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:14.913 /dev/nbd0 00:04:14.913 07:00:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:14.913 07:00:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:14.913 07:00:19 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:04:14.913 07:00:19 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:04:14.913 07:00:19 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:04:14.913 07:00:19 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:04:14.913 07:00:19 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:04:14.913 07:00:19 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:04:14.913 07:00:19 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:04:14.913 07:00:19 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:04:14.913 07:00:19 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:04:14.913 1+0 records in 00:04:14.913 1+0 records out 00:04:14.913 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000210563 s, 19.5 MB/s 00:04:14.913 07:00:19 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:14.913 07:00:19 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:04:14.913 07:00:19 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:14.913 07:00:19 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:04:14.913 07:00:19 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:04:14.913 07:00:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:14.913 07:00:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:14.913 07:00:19 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:15.172 /dev/nbd1 00:04:15.172 07:00:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:15.172 07:00:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:15.172 07:00:19 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:04:15.172 07:00:19 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:04:15.172 07:00:19 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:04:15.172 07:00:19 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:04:15.172 07:00:19 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:04:15.172 07:00:19 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:04:15.172 07:00:19 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:04:15.172 07:00:19 event.app_repeat -- 
common/autotest_common.sh@886 -- # (( i <= 20 )) 00:04:15.172 07:00:19 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:15.172 1+0 records in 00:04:15.172 1+0 records out 00:04:15.172 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00021117 s, 19.4 MB/s 00:04:15.172 07:00:19 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:15.172 07:00:19 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:04:15.172 07:00:19 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:15.172 07:00:19 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:04:15.172 07:00:19 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:04:15.172 07:00:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:15.172 07:00:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:15.172 07:00:19 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:15.172 07:00:19 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:15.172 07:00:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:15.430 07:00:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:15.430 { 00:04:15.430 "nbd_device": "/dev/nbd0", 00:04:15.430 "bdev_name": "Malloc0" 00:04:15.430 }, 00:04:15.430 { 00:04:15.430 "nbd_device": "/dev/nbd1", 00:04:15.430 "bdev_name": "Malloc1" 00:04:15.430 } 00:04:15.430 ]' 00:04:15.430 07:00:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:15.430 { 00:04:15.430 "nbd_device": "/dev/nbd0", 00:04:15.430 "bdev_name": "Malloc0" 00:04:15.430 
}, 00:04:15.430 { 00:04:15.430 "nbd_device": "/dev/nbd1", 00:04:15.430 "bdev_name": "Malloc1" 00:04:15.430 } 00:04:15.430 ]' 00:04:15.430 07:00:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:15.430 07:00:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:15.430 /dev/nbd1' 00:04:15.430 07:00:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:15.430 /dev/nbd1' 00:04:15.430 07:00:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:15.430 07:00:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:15.430 07:00:19 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:15.430 07:00:19 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:15.430 07:00:19 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:15.430 07:00:19 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:15.430 07:00:19 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:15.430 07:00:19 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:15.430 07:00:19 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:15.430 07:00:19 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:15.430 07:00:19 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:15.430 07:00:19 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:15.430 256+0 records in 00:04:15.430 256+0 records out 00:04:15.430 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0107919 s, 97.2 MB/s 00:04:15.430 07:00:19 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:15.430 07:00:19 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:15.430 256+0 records in 00:04:15.430 256+0 records out 00:04:15.430 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0140831 s, 74.5 MB/s 00:04:15.431 07:00:19 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:15.431 07:00:19 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:15.431 256+0 records in 00:04:15.431 256+0 records out 00:04:15.431 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0151903 s, 69.0 MB/s 00:04:15.431 07:00:19 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:15.431 07:00:19 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:15.431 07:00:19 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:15.431 07:00:19 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:15.431 07:00:19 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:15.431 07:00:19 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:15.431 07:00:19 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:15.431 07:00:19 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:15.431 07:00:19 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:15.431 07:00:19 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:15.431 07:00:19 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:15.431 07:00:19 event.app_repeat -- 
bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:15.431 07:00:19 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:15.431 07:00:19 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:15.431 07:00:19 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:15.431 07:00:19 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:15.431 07:00:19 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:15.431 07:00:19 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:15.431 07:00:19 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:15.689 07:00:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:15.689 07:00:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:15.689 07:00:20 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:15.689 07:00:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:15.689 07:00:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:15.689 07:00:20 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:15.689 07:00:20 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:15.689 07:00:20 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:15.689 07:00:20 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:15.689 07:00:20 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:15.948 07:00:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:15.948 07:00:20 event.app_repeat -- 
bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:15.948 07:00:20 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:15.948 07:00:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:15.948 07:00:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:15.948 07:00:20 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:15.948 07:00:20 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:15.948 07:00:20 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:15.948 07:00:20 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:15.948 07:00:20 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:15.948 07:00:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:15.948 07:00:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:15.948 07:00:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:15.948 07:00:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:16.206 07:00:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:16.206 07:00:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:16.206 07:00:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:16.206 07:00:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:16.206 07:00:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:16.206 07:00:20 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:16.206 07:00:20 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:16.206 07:00:20 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:16.206 07:00:20 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:16.206 07:00:20 event.app_repeat -- event/event.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:16.464 07:00:20 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:16.464 [2024-11-20 07:00:20.902656] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:16.464 [2024-11-20 07:00:20.940647] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:16.464 [2024-11-20 07:00:20.940649] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:16.464 [2024-11-20 07:00:20.981664] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:16.464 [2024-11-20 07:00:20.981704] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:19.748 07:00:23 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:19.748 07:00:23 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:04:19.748 spdk_app_start Round 1 00:04:19.748 07:00:23 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1002411 /var/tmp/spdk-nbd.sock 00:04:19.748 07:00:23 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 1002411 ']' 00:04:19.748 07:00:23 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:19.748 07:00:23 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:19.748 07:00:23 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:19.748 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:04:19.748 07:00:23 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:19.748 07:00:23 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:19.748 07:00:23 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:19.748 07:00:23 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:04:19.748 07:00:23 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:19.748 Malloc0 00:04:19.748 07:00:24 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:20.007 Malloc1 00:04:20.007 07:00:24 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:20.007 07:00:24 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:20.007 07:00:24 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:20.007 07:00:24 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:20.007 07:00:24 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:20.007 07:00:24 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:20.007 07:00:24 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:20.007 07:00:24 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:20.007 07:00:24 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:20.007 07:00:24 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:20.007 07:00:24 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:20.007 07:00:24 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:04:20.007 07:00:24 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:20.007 07:00:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:20.007 07:00:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:20.007 07:00:24 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:20.265 /dev/nbd0 00:04:20.265 07:00:24 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:20.265 07:00:24 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:20.265 07:00:24 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:04:20.265 07:00:24 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:04:20.265 07:00:24 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:04:20.265 07:00:24 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:04:20.265 07:00:24 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:04:20.265 07:00:24 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:04:20.266 07:00:24 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:04:20.266 07:00:24 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:04:20.266 07:00:24 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:20.266 1+0 records in 00:04:20.266 1+0 records out 00:04:20.266 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000142569 s, 28.7 MB/s 00:04:20.266 07:00:24 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:20.266 07:00:24 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:04:20.266 07:00:24 event.app_repeat -- 
common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:20.266 07:00:24 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:04:20.266 07:00:24 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:04:20.266 07:00:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:20.266 07:00:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:20.266 07:00:24 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:20.266 /dev/nbd1 00:04:20.525 07:00:24 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:20.525 07:00:24 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:20.525 07:00:24 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:04:20.525 07:00:24 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:04:20.525 07:00:24 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:04:20.525 07:00:24 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:04:20.525 07:00:24 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:04:20.525 07:00:24 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:04:20.525 07:00:24 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:04:20.525 07:00:24 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:04:20.525 07:00:24 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:20.525 1+0 records in 00:04:20.525 1+0 records out 00:04:20.525 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000144522 s, 28.3 MB/s 00:04:20.525 07:00:24 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:20.525 07:00:24 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:04:20.525 07:00:24 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:20.525 07:00:24 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:04:20.525 07:00:24 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:04:20.525 07:00:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:20.525 07:00:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:20.525 07:00:24 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:20.525 07:00:24 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:20.525 07:00:24 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:20.525 07:00:25 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:20.525 { 00:04:20.525 "nbd_device": "/dev/nbd0", 00:04:20.525 "bdev_name": "Malloc0" 00:04:20.525 }, 00:04:20.525 { 00:04:20.525 "nbd_device": "/dev/nbd1", 00:04:20.525 "bdev_name": "Malloc1" 00:04:20.525 } 00:04:20.525 ]' 00:04:20.525 07:00:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:20.525 { 00:04:20.525 "nbd_device": "/dev/nbd0", 00:04:20.525 "bdev_name": "Malloc0" 00:04:20.525 }, 00:04:20.525 { 00:04:20.525 "nbd_device": "/dev/nbd1", 00:04:20.525 "bdev_name": "Malloc1" 00:04:20.525 } 00:04:20.525 ]' 00:04:20.525 07:00:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:20.784 07:00:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:20.784 /dev/nbd1' 00:04:20.784 07:00:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:20.784 07:00:25 
event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:20.784 /dev/nbd1' 00:04:20.784 07:00:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:20.784 07:00:25 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:20.784 07:00:25 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:20.784 07:00:25 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:20.784 07:00:25 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:20.784 07:00:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:20.784 07:00:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:20.784 07:00:25 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:20.784 07:00:25 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:20.784 07:00:25 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:20.784 07:00:25 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:20.784 256+0 records in 00:04:20.784 256+0 records out 00:04:20.784 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00337512 s, 311 MB/s 00:04:20.784 07:00:25 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:20.784 07:00:25 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:20.784 256+0 records in 00:04:20.784 256+0 records out 00:04:20.784 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.014234 s, 73.7 MB/s 00:04:20.784 07:00:25 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:20.784 07:00:25 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:20.784 256+0 records in 00:04:20.784 256+0 records out 00:04:20.784 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0149186 s, 70.3 MB/s 00:04:20.784 07:00:25 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:20.784 07:00:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:20.784 07:00:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:20.784 07:00:25 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:20.784 07:00:25 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:20.784 07:00:25 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:20.784 07:00:25 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:20.784 07:00:25 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:20.784 07:00:25 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:20.784 07:00:25 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:20.784 07:00:25 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:20.784 07:00:25 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:20.784 07:00:25 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:20.784 07:00:25 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:20.784 07:00:25 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:04:20.784 07:00:25 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:20.784 07:00:25 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:20.784 07:00:25 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:20.784 07:00:25 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:21.043 07:00:25 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:21.043 07:00:25 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:21.043 07:00:25 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:21.043 07:00:25 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:21.043 07:00:25 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:21.043 07:00:25 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:21.043 07:00:25 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:21.043 07:00:25 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:21.043 07:00:25 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:21.043 07:00:25 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:21.043 07:00:25 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:21.043 07:00:25 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:21.043 07:00:25 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:21.043 07:00:25 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:21.043 07:00:25 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:21.043 07:00:25 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:21.043 07:00:25 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:04:21.043 07:00:25 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:21.043 07:00:25 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:21.043 07:00:25 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:21.043 07:00:25 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:21.301 07:00:25 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:21.301 07:00:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:21.301 07:00:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:21.301 07:00:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:21.301 07:00:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:21.301 07:00:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:21.301 07:00:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:21.301 07:00:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:21.301 07:00:25 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:21.301 07:00:25 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:21.301 07:00:25 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:21.301 07:00:25 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:21.301 07:00:25 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:21.561 07:00:26 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:21.819 [2024-11-20 07:00:26.185387] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:21.819 [2024-11-20 07:00:26.223219] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:21.819 [2024-11-20 07:00:26.223220] 
reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:21.819 [2024-11-20 07:00:26.265156] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:21.819 [2024-11-20 07:00:26.265199] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:25.108 07:00:29 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:25.108 07:00:29 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:04:25.108 spdk_app_start Round 2 00:04:25.108 07:00:29 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1002411 /var/tmp/spdk-nbd.sock 00:04:25.108 07:00:29 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 1002411 ']' 00:04:25.108 07:00:29 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:25.108 07:00:29 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:25.108 07:00:29 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:25.108 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:04:25.108 07:00:29 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:25.108 07:00:29 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:25.108 07:00:29 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:25.108 07:00:29 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:04:25.108 07:00:29 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:25.108 Malloc0 00:04:25.108 07:00:29 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:25.108 Malloc1 00:04:25.108 07:00:29 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:25.108 07:00:29 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:25.108 07:00:29 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:25.108 07:00:29 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:25.108 07:00:29 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:25.108 07:00:29 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:25.108 07:00:29 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:25.108 07:00:29 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:25.108 07:00:29 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:25.108 07:00:29 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:25.108 07:00:29 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:25.108 07:00:29 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:04:25.108 07:00:29 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:25.108 07:00:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:25.108 07:00:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:25.108 07:00:29 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:25.366 /dev/nbd0 00:04:25.366 07:00:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:25.366 07:00:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:25.366 07:00:29 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:04:25.366 07:00:29 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:04:25.366 07:00:29 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:04:25.366 07:00:29 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:04:25.366 07:00:29 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:04:25.366 07:00:29 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:04:25.366 07:00:29 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:04:25.366 07:00:29 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:04:25.366 07:00:29 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:25.366 1+0 records in 00:04:25.366 1+0 records out 00:04:25.366 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000247796 s, 16.5 MB/s 00:04:25.366 07:00:29 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:25.366 07:00:29 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:04:25.366 07:00:29 event.app_repeat -- 
common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:25.367 07:00:29 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:04:25.367 07:00:29 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:04:25.367 07:00:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:25.367 07:00:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:25.367 07:00:29 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:25.624 /dev/nbd1 00:04:25.624 07:00:30 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:25.624 07:00:30 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:25.624 07:00:30 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:04:25.624 07:00:30 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:04:25.624 07:00:30 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:04:25.624 07:00:30 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:04:25.624 07:00:30 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:04:25.624 07:00:30 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:04:25.624 07:00:30 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:04:25.624 07:00:30 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:04:25.624 07:00:30 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:25.624 1+0 records in 00:04:25.624 1+0 records out 00:04:25.624 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000215166 s, 19.0 MB/s 00:04:25.624 07:00:30 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:25.624 07:00:30 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:04:25.624 07:00:30 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:25.624 07:00:30 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:04:25.624 07:00:30 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:04:25.624 07:00:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:25.624 07:00:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:25.624 07:00:30 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:25.624 07:00:30 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:25.624 07:00:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:25.882 07:00:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:25.882 { 00:04:25.882 "nbd_device": "/dev/nbd0", 00:04:25.882 "bdev_name": "Malloc0" 00:04:25.882 }, 00:04:25.882 { 00:04:25.882 "nbd_device": "/dev/nbd1", 00:04:25.882 "bdev_name": "Malloc1" 00:04:25.882 } 00:04:25.882 ]' 00:04:25.882 07:00:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:25.882 { 00:04:25.882 "nbd_device": "/dev/nbd0", 00:04:25.882 "bdev_name": "Malloc0" 00:04:25.882 }, 00:04:25.882 { 00:04:25.882 "nbd_device": "/dev/nbd1", 00:04:25.882 "bdev_name": "Malloc1" 00:04:25.882 } 00:04:25.882 ]' 00:04:25.882 07:00:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:25.882 07:00:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:25.882 /dev/nbd1' 00:04:25.882 07:00:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:25.882 /dev/nbd1' 00:04:25.882 
07:00:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:25.882 07:00:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:25.882 07:00:30 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:25.882 07:00:30 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:25.882 07:00:30 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:25.882 07:00:30 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:25.882 07:00:30 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:25.882 07:00:30 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:25.882 07:00:30 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:25.882 07:00:30 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:25.882 07:00:30 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:25.882 07:00:30 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:25.882 256+0 records in 00:04:25.882 256+0 records out 00:04:25.882 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0103948 s, 101 MB/s 00:04:25.882 07:00:30 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:25.882 07:00:30 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:25.882 256+0 records in 00:04:25.882 256+0 records out 00:04:25.882 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0142786 s, 73.4 MB/s 00:04:25.882 07:00:30 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:25.882 07:00:30 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:26.140 256+0 records in 00:04:26.140 256+0 records out 00:04:26.140 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0155595 s, 67.4 MB/s 00:04:26.140 07:00:30 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:26.140 07:00:30 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:26.140 07:00:30 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:26.140 07:00:30 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:26.140 07:00:30 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:26.140 07:00:30 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:26.140 07:00:30 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:26.140 07:00:30 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:26.140 07:00:30 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:26.140 07:00:30 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:26.140 07:00:30 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:26.140 07:00:30 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:26.140 07:00:30 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:26.140 07:00:30 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:26.140 07:00:30 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:04:26.140 07:00:30 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:26.140 07:00:30 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:26.140 07:00:30 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:26.140 07:00:30 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:26.140 07:00:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:26.140 07:00:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:26.140 07:00:30 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:26.140 07:00:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:26.140 07:00:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:26.140 07:00:30 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:26.140 07:00:30 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:26.140 07:00:30 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:26.140 07:00:30 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:26.140 07:00:30 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:26.399 07:00:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:26.399 07:00:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:26.399 07:00:30 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:26.399 07:00:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:26.399 07:00:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:26.399 07:00:30 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:26.399 07:00:30 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:04:26.399 07:00:30 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:26.399 07:00:30 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:26.399 07:00:30 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:26.399 07:00:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:26.658 07:00:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:26.658 07:00:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:26.658 07:00:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:26.658 07:00:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:26.658 07:00:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:26.658 07:00:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:26.658 07:00:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:26.658 07:00:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:26.658 07:00:31 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:26.658 07:00:31 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:26.658 07:00:31 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:26.658 07:00:31 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:26.658 07:00:31 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:26.916 07:00:31 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:27.175 [2024-11-20 07:00:31.485556] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:27.175 [2024-11-20 07:00:31.523811] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:27.175 [2024-11-20 07:00:31.523812] 
reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:27.175 [2024-11-20 07:00:31.565143] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:27.175 [2024-11-20 07:00:31.565185] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:30.457 07:00:34 event.app_repeat -- event/event.sh@38 -- # waitforlisten 1002411 /var/tmp/spdk-nbd.sock 00:04:30.457 07:00:34 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 1002411 ']' 00:04:30.457 07:00:34 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:30.457 07:00:34 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:30.457 07:00:34 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:30.457 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
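The nbd data-verify trace earlier in this run (dd a reference file of random data onto each device, then cmp each device against that file) can be sketched stand-alone. Plain temp files stand in for /dev/nbd0 and /dev/nbd1 here, and oflag=direct is dropped since it only applies to real block devices — both are assumptions for illustration, not the test's actual targets.

```shell
# Stand-ins for the nbd devices and the nbdrandtest reference file.
tmp_file=$(mktemp)
target0=$(mktemp)
target1=$(mktemp)

# 1 MiB of reference data: 256 blocks of 4096 bytes, as in the trace.
dd if=/dev/urandom of="$tmp_file" bs=4096 count=256 2>/dev/null

# Write phase: copy the reference data onto each target.
for dev in "$target0" "$target1"; do
    dd if="$tmp_file" of="$dev" bs=4096 count=256 2>/dev/null
done

# Verify phase: every target must match the reference byte-for-byte.
for dev in "$target0" "$target1"; do
    cmp -b -n 1048576 "$tmp_file" "$dev" && echo "verified $dev"
done
```

cmp exits non-zero on the first mismatching byte, which is what fails the test when a device returns corrupted data.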
00:04:30.457 07:00:34 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:30.457 07:00:34 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:30.457 07:00:34 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:30.457 07:00:34 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:04:30.457 07:00:34 event.app_repeat -- event/event.sh@39 -- # killprocess 1002411 00:04:30.457 07:00:34 event.app_repeat -- common/autotest_common.sh@952 -- # '[' -z 1002411 ']' 00:04:30.457 07:00:34 event.app_repeat -- common/autotest_common.sh@956 -- # kill -0 1002411 00:04:30.457 07:00:34 event.app_repeat -- common/autotest_common.sh@957 -- # uname 00:04:30.457 07:00:34 event.app_repeat -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:30.457 07:00:34 event.app_repeat -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1002411 00:04:30.457 07:00:34 event.app_repeat -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:30.457 07:00:34 event.app_repeat -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:30.457 07:00:34 event.app_repeat -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1002411' 00:04:30.457 killing process with pid 1002411 00:04:30.457 07:00:34 event.app_repeat -- common/autotest_common.sh@971 -- # kill 1002411 00:04:30.457 07:00:34 event.app_repeat -- common/autotest_common.sh@976 -- # wait 1002411 00:04:30.457 spdk_app_start is called in Round 0. 00:04:30.457 Shutdown signal received, stop current app iteration 00:04:30.457 Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 reinitialization... 00:04:30.457 spdk_app_start is called in Round 1. 00:04:30.457 Shutdown signal received, stop current app iteration 00:04:30.457 Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 reinitialization... 00:04:30.457 spdk_app_start is called in Round 2. 
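The waitfornbd_exit loop traced above polls /proc/partitions up to 20 times for a device name to disappear after nbd_stop_disk. A condensed sketch of that retry pattern — with an ordinary temp file standing in for /proc/partitions (an assumption made so the sketch needs no nbd devices):

```shell
# Fake partitions table listing two attached nbd devices.
partitions_file=$(mktemp)
printf '%s\n' nbd0 nbd1 > "$partitions_file"

wait_for_exit() {
    local name=$1 i
    for (( i = 1; i <= 20; i++ )); do
        if ! grep -q -w "$name" "$partitions_file"; then
            return 0    # device entry is gone
        fi
        sleep 0.1
    done
    return 1            # still present after 20 tries
}

sed -i '/^nbd0$/d' "$partitions_file"   # simulate the device detaching
wait_for_exit nbd0 && echo "nbd0 gone"
```

The `grep -q -w` matches the device name as a whole word, so nbd1 never satisfies a wait for nbd0.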
00:04:30.457 Shutdown signal received, stop current app iteration 00:04:30.457 Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 reinitialization... 00:04:30.457 spdk_app_start is called in Round 3. 00:04:30.457 Shutdown signal received, stop current app iteration 00:04:30.457 07:00:34 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:04:30.457 07:00:34 event.app_repeat -- event/event.sh@42 -- # return 0 00:04:30.457 00:04:30.457 real 0m16.370s 00:04:30.457 user 0m35.963s 00:04:30.457 sys 0m2.527s 00:04:30.457 07:00:34 event.app_repeat -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:30.457 07:00:34 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:30.457 ************************************ 00:04:30.457 END TEST app_repeat 00:04:30.457 ************************************ 00:04:30.457 07:00:34 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:04:30.457 07:00:34 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:04:30.457 07:00:34 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:30.457 07:00:34 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:30.457 07:00:34 event -- common/autotest_common.sh@10 -- # set +x 00:04:30.457 ************************************ 00:04:30.457 START TEST cpu_locks 00:04:30.457 ************************************ 00:04:30.457 07:00:34 event.cpu_locks -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:04:30.457 * Looking for test storage... 
00:04:30.457 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:04:30.457 07:00:34 event.cpu_locks -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:30.457 07:00:34 event.cpu_locks -- common/autotest_common.sh@1691 -- # lcov --version 00:04:30.457 07:00:34 event.cpu_locks -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:30.457 07:00:34 event.cpu_locks -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:30.457 07:00:34 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:30.457 07:00:34 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:30.457 07:00:34 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:30.457 07:00:34 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:04:30.457 07:00:34 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:04:30.457 07:00:34 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:04:30.457 07:00:34 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:04:30.457 07:00:34 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:04:30.457 07:00:34 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:04:30.457 07:00:34 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:04:30.457 07:00:34 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:30.457 07:00:34 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:04:30.457 07:00:34 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:04:30.457 07:00:34 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:30.457 07:00:34 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:30.457 07:00:34 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:04:30.457 07:00:34 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:04:30.457 07:00:34 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:30.457 07:00:34 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:04:30.457 07:00:34 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:04:30.457 07:00:34 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:04:30.457 07:00:34 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:04:30.457 07:00:34 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:30.457 07:00:34 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:04:30.457 07:00:34 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:04:30.457 07:00:34 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:30.457 07:00:34 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:30.457 07:00:34 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:04:30.457 07:00:34 event.cpu_locks -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:30.457 07:00:34 event.cpu_locks -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:30.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.457 --rc genhtml_branch_coverage=1 00:04:30.457 --rc genhtml_function_coverage=1 00:04:30.457 --rc genhtml_legend=1 00:04:30.457 --rc geninfo_all_blocks=1 00:04:30.457 --rc geninfo_unexecuted_blocks=1 00:04:30.457 00:04:30.457 ' 00:04:30.457 07:00:34 event.cpu_locks -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:30.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.457 --rc genhtml_branch_coverage=1 00:04:30.457 --rc genhtml_function_coverage=1 00:04:30.457 --rc genhtml_legend=1 00:04:30.457 --rc geninfo_all_blocks=1 00:04:30.457 --rc geninfo_unexecuted_blocks=1 
00:04:30.457 00:04:30.457 ' 00:04:30.457 07:00:34 event.cpu_locks -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:30.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.457 --rc genhtml_branch_coverage=1 00:04:30.457 --rc genhtml_function_coverage=1 00:04:30.457 --rc genhtml_legend=1 00:04:30.457 --rc geninfo_all_blocks=1 00:04:30.457 --rc geninfo_unexecuted_blocks=1 00:04:30.457 00:04:30.457 ' 00:04:30.457 07:00:34 event.cpu_locks -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:30.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.457 --rc genhtml_branch_coverage=1 00:04:30.457 --rc genhtml_function_coverage=1 00:04:30.457 --rc genhtml_legend=1 00:04:30.457 --rc geninfo_all_blocks=1 00:04:30.457 --rc geninfo_unexecuted_blocks=1 00:04:30.457 00:04:30.457 ' 00:04:30.457 07:00:34 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:04:30.457 07:00:34 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:04:30.457 07:00:34 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:04:30.457 07:00:34 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:04:30.457 07:00:34 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:30.457 07:00:34 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:30.457 07:00:34 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:30.715 ************************************ 00:04:30.715 START TEST default_locks 00:04:30.715 ************************************ 00:04:30.715 07:00:35 event.cpu_locks.default_locks -- common/autotest_common.sh@1127 -- # default_locks 00:04:30.715 07:00:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=1005413 00:04:30.715 07:00:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 1005413 00:04:30.715 07:00:35 
event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:30.715 07:00:35 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # '[' -z 1005413 ']' 00:04:30.715 07:00:35 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:30.715 07:00:35 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:30.715 07:00:35 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:30.715 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:30.715 07:00:35 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:30.715 07:00:35 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:30.715 [2024-11-20 07:00:35.072893] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 
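The lt/cmp_versions sequence traced above (scripts/common.sh) splits each version string on '.', '-' and ':' and compares it field by field. A condensed sketch of the same idea, assuming purely numeric fields and treating missing fields as 0:

```shell
# version_lt A B: succeed (exit 0) when version A sorts before version B.
version_lt() {
    local IFS=.-:
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    local v max=${#ver1[@]}
    (( ${#ver2[@]} > max )) && max=${#ver2[@]}
    for (( v = 0; v < max; v++ )); do
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
    done
    return 1    # equal versions are not "less than"
}

version_lt 1.15 2 && echo "1.15 < 2"
```

Comparing field-wise rather than lexically is what makes 1.2.3 sort before 1.10.0, which a plain string comparison would get wrong.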
00:04:30.715 [2024-11-20 07:00:35.072942] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1005413 ] 00:04:30.715 [2024-11-20 07:00:35.145613] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:30.715 [2024-11-20 07:00:35.185599] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:30.972 07:00:35 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:30.972 07:00:35 event.cpu_locks.default_locks -- common/autotest_common.sh@866 -- # return 0 00:04:30.972 07:00:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 1005413 00:04:30.973 07:00:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 1005413 00:04:30.973 07:00:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:31.230 lslocks: write error 00:04:31.230 07:00:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 1005413 00:04:31.230 07:00:35 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # '[' -z 1005413 ']' 00:04:31.230 07:00:35 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # kill -0 1005413 00:04:31.230 07:00:35 event.cpu_locks.default_locks -- common/autotest_common.sh@957 -- # uname 00:04:31.230 07:00:35 event.cpu_locks.default_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:31.230 07:00:35 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1005413 00:04:31.489 07:00:35 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:31.489 07:00:35 event.cpu_locks.default_locks -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:31.489 07:00:35 event.cpu_locks.default_locks -- 
common/autotest_common.sh@970 -- # echo 'killing process with pid 1005413' 00:04:31.489 killing process with pid 1005413 00:04:31.489 07:00:35 event.cpu_locks.default_locks -- common/autotest_common.sh@971 -- # kill 1005413 00:04:31.489 07:00:35 event.cpu_locks.default_locks -- common/autotest_common.sh@976 -- # wait 1005413 00:04:31.748 07:00:36 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 1005413 00:04:31.748 07:00:36 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:04:31.748 07:00:36 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 1005413 00:04:31.748 07:00:36 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:04:31.748 07:00:36 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:31.748 07:00:36 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:04:31.748 07:00:36 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:31.748 07:00:36 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 1005413 00:04:31.748 07:00:36 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # '[' -z 1005413 ']' 00:04:31.748 07:00:36 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:31.748 07:00:36 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:31.748 07:00:36 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:31.748 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
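The locks_exist helper traced above asks lslocks whether the target pid still holds a file lock named spdk_cpu_lock*. The underlying mechanism is an ordinary flock; a minimal sketch of the same mutual exclusion using a throwaway lock file (the file name and timings here are illustrative assumptions):

```shell
lock_file=$(mktemp)

# Hold the lock from a background process for a few seconds.
flock "$lock_file" sleep 3 &
holder=$!
sleep 0.5   # give the background flock time to acquire the lock

# A non-blocking attempt from a second process must fail while it is held.
if flock -n "$lock_file" true; then
    lock_state=free
else
    lock_state=held
fi
echo "lock is $lock_state"

kill "$holder" 2>/dev/null || true
```

The test scripts use the same property in reverse: if lslocks no longer reports the lock for the pid, the core is free to be claimed by another instance.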
00:04:31.748 07:00:36 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:31.748 07:00:36 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:31.748 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 848: kill: (1005413) - No such process 00:04:31.748 ERROR: process (pid: 1005413) is no longer running 00:04:31.748 07:00:36 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:31.748 07:00:36 event.cpu_locks.default_locks -- common/autotest_common.sh@866 -- # return 1 00:04:31.748 07:00:36 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:04:31.748 07:00:36 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:31.748 07:00:36 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:31.748 07:00:36 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:31.748 07:00:36 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:04:31.748 07:00:36 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:04:31.748 07:00:36 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:04:31.748 07:00:36 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:04:31.748 00:04:31.748 real 0m1.093s 00:04:31.748 user 0m1.049s 00:04:31.748 sys 0m0.507s 00:04:31.748 07:00:36 event.cpu_locks.default_locks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:31.748 07:00:36 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:31.748 ************************************ 00:04:31.748 END TEST default_locks 00:04:31.748 ************************************ 00:04:31.748 07:00:36 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:04:31.748 07:00:36 event.cpu_locks -- 
common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:31.748 07:00:36 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:31.748 07:00:36 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:31.748 ************************************ 00:04:31.748 START TEST default_locks_via_rpc 00:04:31.748 ************************************ 00:04:31.748 07:00:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1127 -- # default_locks_via_rpc 00:04:31.748 07:00:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=1005673 00:04:31.748 07:00:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 1005673 00:04:31.748 07:00:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:31.748 07:00:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 1005673 ']' 00:04:31.748 07:00:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:31.748 07:00:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:31.748 07:00:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:31.748 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:31.748 07:00:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:31.748 07:00:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:31.748 [2024-11-20 07:00:36.230597] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 
00:04:31.748 [2024-11-20 07:00:36.230637] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1005673 ] 00:04:32.007 [2024-11-20 07:00:36.306783] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:32.007 [2024-11-20 07:00:36.349151] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:32.266 07:00:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:32.266 07:00:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:04:32.266 07:00:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:04:32.266 07:00:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:32.266 07:00:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:32.266 07:00:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:32.266 07:00:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:04:32.266 07:00:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:04:32.266 07:00:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:04:32.266 07:00:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:04:32.266 07:00:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:04:32.266 07:00:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:32.266 07:00:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:32.266 07:00:36 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:32.266 07:00:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 1005673 00:04:32.266 07:00:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 1005673 00:04:32.266 07:00:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:32.266 07:00:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 1005673 00:04:32.266 07:00:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # '[' -z 1005673 ']' 00:04:32.266 07:00:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # kill -0 1005673 00:04:32.266 07:00:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@957 -- # uname 00:04:32.266 07:00:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:32.266 07:00:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1005673 00:04:32.524 07:00:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:32.524 07:00:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:32.524 07:00:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1005673' 00:04:32.524 killing process with pid 1005673 00:04:32.524 07:00:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@971 -- # kill 1005673 00:04:32.524 07:00:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@976 -- # wait 1005673 00:04:32.782 00:04:32.782 real 0m0.984s 00:04:32.782 user 0m0.936s 00:04:32.782 sys 0m0.450s 00:04:32.782 07:00:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:32.782 07:00:37 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:32.783 ************************************ 00:04:32.783 END TEST default_locks_via_rpc 00:04:32.783 ************************************ 00:04:32.783 07:00:37 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:04:32.783 07:00:37 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:32.783 07:00:37 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:32.783 07:00:37 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:32.783 ************************************ 00:04:32.783 START TEST non_locking_app_on_locked_coremask 00:04:32.783 ************************************ 00:04:32.783 07:00:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1127 -- # non_locking_app_on_locked_coremask 00:04:32.783 07:00:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=1005880 00:04:32.783 07:00:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 1005880 /var/tmp/spdk.sock 00:04:32.783 07:00:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:32.783 07:00:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 1005880 ']' 00:04:32.783 07:00:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:32.783 07:00:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:32.783 07:00:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:04:32.783 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:32.783 07:00:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:32.783 07:00:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:32.783 [2024-11-20 07:00:37.280671] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 00:04:32.783 [2024-11-20 07:00:37.280711] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1005880 ] 00:04:33.041 [2024-11-20 07:00:37.355554] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:33.041 [2024-11-20 07:00:37.398250] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:33.298 07:00:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:33.298 07:00:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:04:33.298 07:00:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=1005931 00:04:33.298 07:00:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 1005931 /var/tmp/spdk2.sock 00:04:33.298 07:00:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:04:33.298 07:00:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 1005931 ']' 00:04:33.298 07:00:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local 
rpc_addr=/var/tmp/spdk2.sock 00:04:33.298 07:00:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:33.298 07:00:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:33.298 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:33.298 07:00:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:33.298 07:00:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:33.298 [2024-11-20 07:00:37.665361] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 00:04:33.298 [2024-11-20 07:00:37.665408] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1005931 ] 00:04:33.298 [2024-11-20 07:00:37.753043] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
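The killprocess pattern repeated throughout this trace first confirms the pid is alive, then reads its command name with ps, and refuses to signal it if the name turns out to be a privileged wrapper like sudo. A condensed sketch, using a background sleep as a stand-in for the spdk_tgt process:

```shell
sleep 30 &
pid=$!

if kill -0 "$pid" 2>/dev/null; then              # pid still exists?
    process_name=$(ps -o comm= -p "$pid")        # command name only
    if [ "$process_name" != "sudo" ]; then
        echo "killing process with pid $pid"
        kill "$pid"
    fi
fi
wait "$pid" 2>/dev/null || true                  # reap it
```

The `kill -0` probe sends no signal at all; it only checks that the pid exists and is signalable, which avoids a race between looking the process up and killing it.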
00:04:33.298 [2024-11-20 07:00:37.753067] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:33.298 [2024-11-20 07:00:37.840295] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:34.234 07:00:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:34.234 07:00:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:04:34.234 07:00:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 1005880 00:04:34.234 07:00:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1005880 00:04:34.234 07:00:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:34.491 lslocks: write error 00:04:34.491 07:00:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 1005880 00:04:34.491 07:00:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 1005880 ']' 00:04:34.491 07:00:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 1005880 00:04:34.491 07:00:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:04:34.491 07:00:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:34.491 07:00:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1005880 00:04:34.750 07:00:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:34.750 07:00:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:34.750 07:00:39 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@970 -- # echo 'killing process with pid 1005880' 00:04:34.750 killing process with pid 1005880 00:04:34.750 07:00:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 1005880 00:04:34.750 07:00:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 1005880 00:04:35.316 07:00:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 1005931 00:04:35.317 07:00:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 1005931 ']' 00:04:35.317 07:00:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 1005931 00:04:35.317 07:00:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:04:35.317 07:00:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:35.317 07:00:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1005931 00:04:35.317 07:00:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:35.317 07:00:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:35.317 07:00:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1005931' 00:04:35.317 killing process with pid 1005931 00:04:35.317 07:00:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 1005931 00:04:35.317 07:00:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 1005931 00:04:35.575 00:04:35.575 real 0m2.796s 00:04:35.575 user 0m2.938s 00:04:35.575 sys 0m0.930s 00:04:35.575 07:00:40 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:35.575 07:00:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:35.575 ************************************ 00:04:35.575 END TEST non_locking_app_on_locked_coremask 00:04:35.575 ************************************ 00:04:35.575 07:00:40 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:04:35.575 07:00:40 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:35.575 07:00:40 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:35.575 07:00:40 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:35.575 ************************************ 00:04:35.575 START TEST locking_app_on_unlocked_coremask 00:04:35.575 ************************************ 00:04:35.575 07:00:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1127 -- # locking_app_on_unlocked_coremask 00:04:35.575 07:00:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=1006425 00:04:35.575 07:00:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 1006425 /var/tmp/spdk.sock 00:04:35.575 07:00:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:04:35.575 07:00:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # '[' -z 1006425 ']' 00:04:35.575 07:00:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:35.575 07:00:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:35.575 07:00:40 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:35.575 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:35.575 07:00:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:35.575 07:00:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:35.833 [2024-11-20 07:00:40.141883] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 00:04:35.833 [2024-11-20 07:00:40.141923] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1006425 ] 00:04:35.833 [2024-11-20 07:00:40.218860] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:04:35.833 [2024-11-20 07:00:40.218882] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:35.833 [2024-11-20 07:00:40.261088] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:36.770 07:00:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:36.770 07:00:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@866 -- # return 0 00:04:36.770 07:00:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:04:36.770 07:00:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=1006444 00:04:36.770 07:00:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 1006444 /var/tmp/spdk2.sock 00:04:36.770 07:00:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # '[' -z 1006444 ']' 00:04:36.770 07:00:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:36.770 07:00:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:36.770 07:00:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:36.770 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:36.770 07:00:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:36.770 07:00:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:36.770 [2024-11-20 07:00:41.002985] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 
00:04:36.770 [2024-11-20 07:00:41.003032] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1006444 ] 00:04:36.770 [2024-11-20 07:00:41.096218] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:36.770 [2024-11-20 07:00:41.185335] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:37.338 07:00:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:37.338 07:00:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@866 -- # return 0 00:04:37.338 07:00:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 1006444 00:04:37.338 07:00:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1006444 00:04:37.338 07:00:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:38.273 lslocks: write error 00:04:38.273 07:00:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 1006425 00:04:38.273 07:00:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' -z 1006425 ']' 00:04:38.273 07:00:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # kill -0 1006425 00:04:38.273 07:00:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # uname 00:04:38.273 07:00:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:38.273 07:00:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1006425 00:04:38.273 07:00:42 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:38.273 07:00:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:38.273 07:00:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1006425' 00:04:38.273 killing process with pid 1006425 00:04:38.273 07:00:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@971 -- # kill 1006425 00:04:38.273 07:00:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@976 -- # wait 1006425 00:04:38.843 07:00:43 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 1006444 00:04:38.843 07:00:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' -z 1006444 ']' 00:04:38.843 07:00:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # kill -0 1006444 00:04:38.843 07:00:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # uname 00:04:38.843 07:00:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:38.843 07:00:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1006444 00:04:38.843 07:00:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:38.843 07:00:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:38.843 07:00:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1006444' 00:04:38.843 killing process with pid 1006444 00:04:38.843 07:00:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@971 -- # kill 1006444 00:04:38.843 07:00:43 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@976 -- # wait 1006444 00:04:39.103 00:04:39.103 real 0m3.451s 00:04:39.103 user 0m3.767s 00:04:39.103 sys 0m0.985s 00:04:39.103 07:00:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:39.103 07:00:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:39.103 ************************************ 00:04:39.103 END TEST locking_app_on_unlocked_coremask 00:04:39.103 ************************************ 00:04:39.103 07:00:43 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:04:39.103 07:00:43 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:39.103 07:00:43 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:39.103 07:00:43 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:39.103 ************************************ 00:04:39.103 START TEST locking_app_on_locked_coremask 00:04:39.103 ************************************ 00:04:39.103 07:00:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1127 -- # locking_app_on_locked_coremask 00:04:39.103 07:00:43 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=1006932 00:04:39.103 07:00:43 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 1006932 /var/tmp/spdk.sock 00:04:39.103 07:00:43 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:39.103 07:00:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 1006932 ']' 00:04:39.103 07:00:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 
00:04:39.103 07:00:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:39.103 07:00:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:39.103 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:39.103 07:00:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:39.103 07:00:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:39.362 [2024-11-20 07:00:43.666485] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 00:04:39.363 [2024-11-20 07:00:43.666532] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1006932 ] 00:04:39.363 [2024-11-20 07:00:43.740798] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:39.363 [2024-11-20 07:00:43.781189] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:39.622 07:00:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:39.622 07:00:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:04:39.622 07:00:44 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=1007042 00:04:39.622 07:00:44 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 1007042 /var/tmp/spdk2.sock 00:04:39.622 07:00:44 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 
00:04:39.622 07:00:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:04:39.622 07:00:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 1007042 /var/tmp/spdk2.sock 00:04:39.622 07:00:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:04:39.622 07:00:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:39.622 07:00:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:04:39.622 07:00:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:39.622 07:00:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 1007042 /var/tmp/spdk2.sock 00:04:39.622 07:00:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 1007042 ']' 00:04:39.622 07:00:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:39.622 07:00:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:39.622 07:00:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:39.622 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:04:39.622 07:00:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:39.622 07:00:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:39.622 [2024-11-20 07:00:44.061329] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 00:04:39.622 [2024-11-20 07:00:44.061378] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1007042 ] 00:04:39.622 [2024-11-20 07:00:44.151731] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 1006932 has claimed it. 00:04:39.622 [2024-11-20 07:00:44.151773] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:04:40.189 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 848: kill: (1007042) - No such process 00:04:40.189 ERROR: process (pid: 1007042) is no longer running 00:04:40.189 07:00:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:40.189 07:00:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 1 00:04:40.189 07:00:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:04:40.189 07:00:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:40.189 07:00:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:40.189 07:00:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:40.189 07:00:44 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 1006932 00:04:40.189 07:00:44 
event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1006932 00:04:40.189 07:00:44 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:40.757 lslocks: write error 00:04:40.757 07:00:45 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 1006932 00:04:40.757 07:00:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 1006932 ']' 00:04:40.757 07:00:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 1006932 00:04:40.757 07:00:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:04:40.757 07:00:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:40.757 07:00:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1006932 00:04:40.757 07:00:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:40.757 07:00:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:40.757 07:00:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1006932' 00:04:40.757 killing process with pid 1006932 00:04:40.757 07:00:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 1006932 00:04:40.757 07:00:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 1006932 00:04:41.326 00:04:41.326 real 0m1.980s 00:04:41.326 user 0m2.139s 00:04:41.326 sys 0m0.641s 00:04:41.327 07:00:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:41.327 07:00:45 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@10 -- # set +x 00:04:41.327 ************************************ 00:04:41.327 END TEST locking_app_on_locked_coremask 00:04:41.327 ************************************ 00:04:41.327 07:00:45 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:04:41.327 07:00:45 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:41.327 07:00:45 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:41.327 07:00:45 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:41.327 ************************************ 00:04:41.327 START TEST locking_overlapped_coremask 00:04:41.327 ************************************ 00:04:41.327 07:00:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1127 -- # locking_overlapped_coremask 00:04:41.327 07:00:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=1007418 00:04:41.327 07:00:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 1007418 /var/tmp/spdk.sock 00:04:41.327 07:00:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:04:41.327 07:00:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # '[' -z 1007418 ']' 00:04:41.327 07:00:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:41.327 07:00:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:41.327 07:00:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:41.327 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:41.327 07:00:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:41.327 07:00:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:41.327 [2024-11-20 07:00:45.713332] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 00:04:41.327 [2024-11-20 07:00:45.713373] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1007418 ] 00:04:41.327 [2024-11-20 07:00:45.789361] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:41.327 [2024-11-20 07:00:45.834445] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:41.327 [2024-11-20 07:00:45.834479] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:41.327 [2024-11-20 07:00:45.834479] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:41.594 07:00:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:41.594 07:00:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@866 -- # return 0 00:04:41.594 07:00:46 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=1007426 00:04:41.594 07:00:46 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 1007426 /var/tmp/spdk2.sock 00:04:41.594 07:00:46 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:04:41.594 07:00:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:04:41.594 07:00:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg 
waitforlisten 1007426 /var/tmp/spdk2.sock 00:04:41.594 07:00:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:04:41.594 07:00:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:41.594 07:00:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:04:41.594 07:00:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:41.594 07:00:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 1007426 /var/tmp/spdk2.sock 00:04:41.594 07:00:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # '[' -z 1007426 ']' 00:04:41.594 07:00:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:41.594 07:00:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:41.594 07:00:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:41.594 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:41.594 07:00:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:41.594 07:00:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:41.594 [2024-11-20 07:00:46.095287] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 
00:04:41.594 [2024-11-20 07:00:46.095337] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1007426 ] 00:04:41.853 [2024-11-20 07:00:46.186573] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1007418 has claimed it. 00:04:41.853 [2024-11-20 07:00:46.186609] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:04:42.421 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 848: kill: (1007426) - No such process 00:04:42.421 ERROR: process (pid: 1007426) is no longer running 00:04:42.421 07:00:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:42.421 07:00:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@866 -- # return 1 00:04:42.421 07:00:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:04:42.421 07:00:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:42.421 07:00:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:42.421 07:00:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:42.421 07:00:46 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:04:42.421 07:00:46 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:04:42.421 07:00:46 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:04:42.421 07:00:46 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ 
/var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:04:42.421 07:00:46 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 1007418 00:04:42.421 07:00:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # '[' -z 1007418 ']' 00:04:42.421 07:00:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # kill -0 1007418 00:04:42.421 07:00:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@957 -- # uname 00:04:42.421 07:00:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:42.421 07:00:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1007418 00:04:42.421 07:00:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:42.421 07:00:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:42.421 07:00:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1007418' 00:04:42.421 killing process with pid 1007418 00:04:42.421 07:00:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@971 -- # kill 1007418 00:04:42.421 07:00:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@976 -- # wait 1007418 00:04:42.690 00:04:42.690 real 0m1.437s 00:04:42.690 user 0m3.958s 00:04:42.690 sys 0m0.388s 00:04:42.690 07:00:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:42.690 07:00:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:42.690 
************************************ 00:04:42.690 END TEST locking_overlapped_coremask 00:04:42.690 ************************************ 00:04:42.690 07:00:47 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:04:42.690 07:00:47 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:42.690 07:00:47 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:42.690 07:00:47 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:42.690 ************************************ 00:04:42.690 START TEST locking_overlapped_coremask_via_rpc 00:04:42.690 ************************************ 00:04:42.690 07:00:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1127 -- # locking_overlapped_coremask_via_rpc 00:04:42.690 07:00:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=1007688 00:04:42.690 07:00:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 1007688 /var/tmp/spdk.sock 00:04:42.690 07:00:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:04:42.690 07:00:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 1007688 ']' 00:04:42.690 07:00:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:42.690 07:00:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:42.690 07:00:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:04:42.690 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:42.690 07:00:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:42.690 07:00:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:42.690 [2024-11-20 07:00:47.221464] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 00:04:42.690 [2024-11-20 07:00:47.221512] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1007688 ] 00:04:42.949 [2024-11-20 07:00:47.295535] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:04:42.949 [2024-11-20 07:00:47.295560] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:42.949 [2024-11-20 07:00:47.336490] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:42.949 [2024-11-20 07:00:47.336598] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:42.949 [2024-11-20 07:00:47.336599] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:43.207 07:00:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:43.207 07:00:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:04:43.207 07:00:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=1007698 00:04:43.207 07:00:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 1007698 /var/tmp/spdk2.sock 00:04:43.207 07:00:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r 
/var/tmp/spdk2.sock --disable-cpumask-locks 00:04:43.207 07:00:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 1007698 ']' 00:04:43.207 07:00:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:43.207 07:00:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:43.207 07:00:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:43.207 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:43.207 07:00:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:43.207 07:00:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:43.207 [2024-11-20 07:00:47.611895] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 00:04:43.207 [2024-11-20 07:00:47.611944] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1007698 ] 00:04:43.207 [2024-11-20 07:00:47.704809] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:04:43.207 [2024-11-20 07:00:47.704839] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:43.465 [2024-11-20 07:00:47.793810] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:43.465 [2024-11-20 07:00:47.796995] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:43.465 [2024-11-20 07:00:47.796995] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:04:44.034 07:00:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:44.034 07:00:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:04:44.034 07:00:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:04:44.034 07:00:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:44.034 07:00:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:44.034 07:00:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:44.035 07:00:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:04:44.035 07:00:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:04:44.035 07:00:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:04:44.035 07:00:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:04:44.035 07:00:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:44.035 07:00:48 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:04:44.035 07:00:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:44.035 07:00:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:04:44.035 07:00:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:44.035 07:00:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:44.035 [2024-11-20 07:00:48.478026] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1007688 has claimed it. 00:04:44.035 request: 00:04:44.035 { 00:04:44.035 "method": "framework_enable_cpumask_locks", 00:04:44.035 "req_id": 1 00:04:44.035 } 00:04:44.035 Got JSON-RPC error response 00:04:44.035 response: 00:04:44.035 { 00:04:44.035 "code": -32603, 00:04:44.035 "message": "Failed to claim CPU core: 2" 00:04:44.035 } 00:04:44.035 07:00:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:04:44.035 07:00:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:04:44.035 07:00:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:44.035 07:00:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:44.035 07:00:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:44.035 07:00:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 1007688 /var/tmp/spdk.sock 00:04:44.035 07:00:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 
-- # '[' -z 1007688 ']' 00:04:44.035 07:00:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:44.035 07:00:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:44.035 07:00:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:44.035 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:44.035 07:00:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:44.035 07:00:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:44.331 07:00:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:44.331 07:00:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:04:44.331 07:00:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 1007698 /var/tmp/spdk2.sock 00:04:44.331 07:00:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 1007698 ']' 00:04:44.331 07:00:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:44.331 07:00:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:44.331 07:00:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:44.331 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:04:44.331 07:00:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:44.331 07:00:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:44.619 07:00:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:44.619 07:00:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:04:44.619 07:00:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:04:44.619 07:00:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:04:44.619 07:00:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:04:44.619 07:00:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:04:44.619 00:04:44.619 real 0m1.718s 00:04:44.619 user 0m0.835s 00:04:44.619 sys 0m0.130s 00:04:44.619 07:00:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:44.619 07:00:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:44.619 ************************************ 00:04:44.619 END TEST locking_overlapped_coremask_via_rpc 00:04:44.619 ************************************ 00:04:44.619 07:00:48 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:04:44.619 07:00:48 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1007688 ]] 00:04:44.619 07:00:48 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 1007688 00:04:44.619 07:00:48 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 1007688 ']' 00:04:44.619 07:00:48 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 1007688 00:04:44.619 07:00:48 event.cpu_locks -- common/autotest_common.sh@957 -- # uname 00:04:44.619 07:00:48 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:44.619 07:00:48 event.cpu_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1007688 00:04:44.619 07:00:48 event.cpu_locks -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:44.619 07:00:48 event.cpu_locks -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:44.619 07:00:48 event.cpu_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1007688' 00:04:44.619 killing process with pid 1007688 00:04:44.619 07:00:48 event.cpu_locks -- common/autotest_common.sh@971 -- # kill 1007688 00:04:44.619 07:00:48 event.cpu_locks -- common/autotest_common.sh@976 -- # wait 1007688 00:04:44.925 07:00:49 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1007698 ]] 00:04:44.925 07:00:49 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1007698 00:04:44.925 07:00:49 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 1007698 ']' 00:04:44.925 07:00:49 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 1007698 00:04:44.925 07:00:49 event.cpu_locks -- common/autotest_common.sh@957 -- # uname 00:04:44.926 07:00:49 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:44.926 07:00:49 event.cpu_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1007698 00:04:44.926 07:00:49 event.cpu_locks -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:04:44.926 07:00:49 event.cpu_locks -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:04:44.926 07:00:49 event.cpu_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 
1007698' 00:04:44.926 killing process with pid 1007698 00:04:44.926 07:00:49 event.cpu_locks -- common/autotest_common.sh@971 -- # kill 1007698 00:04:44.926 07:00:49 event.cpu_locks -- common/autotest_common.sh@976 -- # wait 1007698 00:04:45.185 07:00:49 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:04:45.185 07:00:49 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:04:45.185 07:00:49 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1007688 ]] 00:04:45.185 07:00:49 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1007688 00:04:45.185 07:00:49 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 1007688 ']' 00:04:45.185 07:00:49 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 1007688 00:04:45.185 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (1007688) - No such process 00:04:45.185 07:00:49 event.cpu_locks -- common/autotest_common.sh@979 -- # echo 'Process with pid 1007688 is not found' 00:04:45.185 Process with pid 1007688 is not found 00:04:45.185 07:00:49 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1007698 ]] 00:04:45.185 07:00:49 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1007698 00:04:45.185 07:00:49 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 1007698 ']' 00:04:45.185 07:00:49 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 1007698 00:04:45.185 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (1007698) - No such process 00:04:45.185 07:00:49 event.cpu_locks -- common/autotest_common.sh@979 -- # echo 'Process with pid 1007698 is not found' 00:04:45.185 Process with pid 1007698 is not found 00:04:45.185 07:00:49 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:04:45.185 00:04:45.185 real 0m14.845s 00:04:45.185 user 0m25.401s 00:04:45.185 sys 0m5.017s 00:04:45.185 07:00:49 event.cpu_locks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:45.185 
07:00:49 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:45.185 ************************************ 00:04:45.185 END TEST cpu_locks 00:04:45.185 ************************************ 00:04:45.185 00:04:45.185 real 0m39.135s 00:04:45.185 user 1m13.576s 00:04:45.185 sys 0m8.515s 00:04:45.185 07:00:49 event -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:45.185 07:00:49 event -- common/autotest_common.sh@10 -- # set +x 00:04:45.185 ************************************ 00:04:45.185 END TEST event 00:04:45.185 ************************************ 00:04:45.185 07:00:49 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:04:45.185 07:00:49 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:45.185 07:00:49 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:45.185 07:00:49 -- common/autotest_common.sh@10 -- # set +x 00:04:45.443 ************************************ 00:04:45.443 START TEST thread 00:04:45.443 ************************************ 00:04:45.443 07:00:49 thread -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:04:45.443 * Looking for test storage... 
00:04:45.443 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:04:45.443 07:00:49 thread -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:45.443 07:00:49 thread -- common/autotest_common.sh@1691 -- # lcov --version 00:04:45.443 07:00:49 thread -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:45.443 07:00:49 thread -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:45.443 07:00:49 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:45.443 07:00:49 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:45.443 07:00:49 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:45.443 07:00:49 thread -- scripts/common.sh@336 -- # IFS=.-: 00:04:45.443 07:00:49 thread -- scripts/common.sh@336 -- # read -ra ver1 00:04:45.443 07:00:49 thread -- scripts/common.sh@337 -- # IFS=.-: 00:04:45.443 07:00:49 thread -- scripts/common.sh@337 -- # read -ra ver2 00:04:45.443 07:00:49 thread -- scripts/common.sh@338 -- # local 'op=<' 00:04:45.443 07:00:49 thread -- scripts/common.sh@340 -- # ver1_l=2 00:04:45.443 07:00:49 thread -- scripts/common.sh@341 -- # ver2_l=1 00:04:45.443 07:00:49 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:45.443 07:00:49 thread -- scripts/common.sh@344 -- # case "$op" in 00:04:45.443 07:00:49 thread -- scripts/common.sh@345 -- # : 1 00:04:45.443 07:00:49 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:45.443 07:00:49 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:45.443 07:00:49 thread -- scripts/common.sh@365 -- # decimal 1 00:04:45.443 07:00:49 thread -- scripts/common.sh@353 -- # local d=1 00:04:45.443 07:00:49 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:45.443 07:00:49 thread -- scripts/common.sh@355 -- # echo 1 00:04:45.443 07:00:49 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:04:45.443 07:00:49 thread -- scripts/common.sh@366 -- # decimal 2 00:04:45.443 07:00:49 thread -- scripts/common.sh@353 -- # local d=2 00:04:45.443 07:00:49 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:45.443 07:00:49 thread -- scripts/common.sh@355 -- # echo 2 00:04:45.443 07:00:49 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:04:45.443 07:00:49 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:45.443 07:00:49 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:45.443 07:00:49 thread -- scripts/common.sh@368 -- # return 0 00:04:45.444 07:00:49 thread -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:45.444 07:00:49 thread -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:45.444 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:45.444 --rc genhtml_branch_coverage=1 00:04:45.444 --rc genhtml_function_coverage=1 00:04:45.444 --rc genhtml_legend=1 00:04:45.444 --rc geninfo_all_blocks=1 00:04:45.444 --rc geninfo_unexecuted_blocks=1 00:04:45.444 00:04:45.444 ' 00:04:45.444 07:00:49 thread -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:45.444 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:45.444 --rc genhtml_branch_coverage=1 00:04:45.444 --rc genhtml_function_coverage=1 00:04:45.444 --rc genhtml_legend=1 00:04:45.444 --rc geninfo_all_blocks=1 00:04:45.444 --rc geninfo_unexecuted_blocks=1 00:04:45.444 00:04:45.444 ' 00:04:45.444 07:00:49 thread -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:45.444 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:45.444 --rc genhtml_branch_coverage=1 00:04:45.444 --rc genhtml_function_coverage=1 00:04:45.444 --rc genhtml_legend=1 00:04:45.444 --rc geninfo_all_blocks=1 00:04:45.444 --rc geninfo_unexecuted_blocks=1 00:04:45.444 00:04:45.444 ' 00:04:45.444 07:00:49 thread -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:45.444 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:45.444 --rc genhtml_branch_coverage=1 00:04:45.444 --rc genhtml_function_coverage=1 00:04:45.444 --rc genhtml_legend=1 00:04:45.444 --rc geninfo_all_blocks=1 00:04:45.444 --rc geninfo_unexecuted_blocks=1 00:04:45.444 00:04:45.444 ' 00:04:45.444 07:00:49 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:04:45.444 07:00:49 thread -- common/autotest_common.sh@1103 -- # '[' 8 -le 1 ']' 00:04:45.444 07:00:49 thread -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:45.444 07:00:49 thread -- common/autotest_common.sh@10 -- # set +x 00:04:45.444 ************************************ 00:04:45.444 START TEST thread_poller_perf 00:04:45.444 ************************************ 00:04:45.444 07:00:49 thread.thread_poller_perf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:04:45.444 [2024-11-20 07:00:49.984808] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 
00:04:45.444 [2024-11-20 07:00:49.984877] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1008263 ] 00:04:45.702 [2024-11-20 07:00:50.066842] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:45.702 [2024-11-20 07:00:50.112943] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:45.702 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:04:46.638 [2024-11-20T06:00:51.194Z] ====================================== 00:04:46.638 [2024-11-20T06:00:51.194Z] busy:2308772344 (cyc) 00:04:46.638 [2024-11-20T06:00:51.194Z] total_run_count: 392000 00:04:46.638 [2024-11-20T06:00:51.194Z] tsc_hz: 2300000000 (cyc) 00:04:46.638 [2024-11-20T06:00:51.194Z] ====================================== 00:04:46.638 [2024-11-20T06:00:51.194Z] poller_cost: 5889 (cyc), 2560 (nsec) 00:04:46.638 00:04:46.638 real 0m1.195s 00:04:46.638 user 0m1.116s 00:04:46.638 sys 0m0.075s 00:04:46.638 07:00:51 thread.thread_poller_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:46.638 07:00:51 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:04:46.638 ************************************ 00:04:46.638 END TEST thread_poller_perf 00:04:46.638 ************************************ 00:04:46.897 07:00:51 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:04:46.897 07:00:51 thread -- common/autotest_common.sh@1103 -- # '[' 8 -le 1 ']' 00:04:46.897 07:00:51 thread -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:46.897 07:00:51 thread -- common/autotest_common.sh@10 -- # set +x 00:04:46.897 ************************************ 00:04:46.897 START TEST thread_poller_perf 00:04:46.897 
************************************ 00:04:46.897 07:00:51 thread.thread_poller_perf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:04:46.897 [2024-11-20 07:00:51.252441] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 00:04:46.897 [2024-11-20 07:00:51.252512] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1008519 ] 00:04:46.897 [2024-11-20 07:00:51.331460] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:46.897 [2024-11-20 07:00:51.373165] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:46.897 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:04:48.274 [2024-11-20T06:00:52.830Z] ====================================== 00:04:48.274 [2024-11-20T06:00:52.830Z] busy:2301642264 (cyc) 00:04:48.274 [2024-11-20T06:00:52.830Z] total_run_count: 5384000 00:04:48.274 [2024-11-20T06:00:52.830Z] tsc_hz: 2300000000 (cyc) 00:04:48.274 [2024-11-20T06:00:52.830Z] ====================================== 00:04:48.274 [2024-11-20T06:00:52.830Z] poller_cost: 427 (cyc), 185 (nsec) 00:04:48.274 00:04:48.274 real 0m1.184s 00:04:48.274 user 0m1.106s 00:04:48.274 sys 0m0.074s 00:04:48.274 07:00:52 thread.thread_poller_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:48.274 07:00:52 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:04:48.274 ************************************ 00:04:48.274 END TEST thread_poller_perf 00:04:48.274 ************************************ 00:04:48.274 07:00:52 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:04:48.274 00:04:48.274 real 0m2.693s 00:04:48.274 user 0m2.383s 00:04:48.274 sys 0m0.323s 00:04:48.274 07:00:52 thread -- 
common/autotest_common.sh@1128 -- # xtrace_disable 00:04:48.274 07:00:52 thread -- common/autotest_common.sh@10 -- # set +x 00:04:48.274 ************************************ 00:04:48.274 END TEST thread 00:04:48.274 ************************************ 00:04:48.274 07:00:52 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:04:48.274 07:00:52 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:04:48.274 07:00:52 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:48.274 07:00:52 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:48.274 07:00:52 -- common/autotest_common.sh@10 -- # set +x 00:04:48.274 ************************************ 00:04:48.274 START TEST app_cmdline 00:04:48.274 ************************************ 00:04:48.274 07:00:52 app_cmdline -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:04:48.274 * Looking for test storage... 00:04:48.274 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:04:48.274 07:00:52 app_cmdline -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:48.274 07:00:52 app_cmdline -- common/autotest_common.sh@1691 -- # lcov --version 00:04:48.274 07:00:52 app_cmdline -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:48.274 07:00:52 app_cmdline -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:48.274 07:00:52 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:48.274 07:00:52 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:48.274 07:00:52 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:48.274 07:00:52 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:04:48.274 07:00:52 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:04:48.274 07:00:52 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:04:48.274 07:00:52 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 
00:04:48.274 07:00:52 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:04:48.274 07:00:52 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:04:48.274 07:00:52 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:04:48.274 07:00:52 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:48.274 07:00:52 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:04:48.274 07:00:52 app_cmdline -- scripts/common.sh@345 -- # : 1 00:04:48.274 07:00:52 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:48.274 07:00:52 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:48.274 07:00:52 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:04:48.274 07:00:52 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:04:48.274 07:00:52 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:48.274 07:00:52 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:04:48.274 07:00:52 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:04:48.274 07:00:52 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:04:48.274 07:00:52 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:04:48.274 07:00:52 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:48.274 07:00:52 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:04:48.274 07:00:52 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:04:48.274 07:00:52 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:48.274 07:00:52 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:48.274 07:00:52 app_cmdline -- scripts/common.sh@368 -- # return 0 00:04:48.274 07:00:52 app_cmdline -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:48.274 07:00:52 app_cmdline -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:48.274 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:48.274 --rc genhtml_branch_coverage=1 
00:04:48.274 --rc genhtml_function_coverage=1 00:04:48.274 --rc genhtml_legend=1 00:04:48.274 --rc geninfo_all_blocks=1 00:04:48.274 --rc geninfo_unexecuted_blocks=1 00:04:48.274 00:04:48.274 ' 00:04:48.274 07:00:52 app_cmdline -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:48.274 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:48.274 --rc genhtml_branch_coverage=1 00:04:48.274 --rc genhtml_function_coverage=1 00:04:48.274 --rc genhtml_legend=1 00:04:48.274 --rc geninfo_all_blocks=1 00:04:48.274 --rc geninfo_unexecuted_blocks=1 00:04:48.274 00:04:48.274 ' 00:04:48.274 07:00:52 app_cmdline -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:48.274 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:48.274 --rc genhtml_branch_coverage=1 00:04:48.274 --rc genhtml_function_coverage=1 00:04:48.274 --rc genhtml_legend=1 00:04:48.274 --rc geninfo_all_blocks=1 00:04:48.274 --rc geninfo_unexecuted_blocks=1 00:04:48.274 00:04:48.274 ' 00:04:48.274 07:00:52 app_cmdline -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:48.274 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:48.274 --rc genhtml_branch_coverage=1 00:04:48.274 --rc genhtml_function_coverage=1 00:04:48.275 --rc genhtml_legend=1 00:04:48.275 --rc geninfo_all_blocks=1 00:04:48.275 --rc geninfo_unexecuted_blocks=1 00:04:48.275 00:04:48.275 ' 00:04:48.275 07:00:52 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:04:48.275 07:00:52 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=1008815 00:04:48.275 07:00:52 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:04:48.275 07:00:52 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 1008815 00:04:48.275 07:00:52 app_cmdline -- common/autotest_common.sh@833 -- # '[' -z 1008815 ']' 00:04:48.275 07:00:52 app_cmdline -- common/autotest_common.sh@837 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:04:48.275 07:00:52 app_cmdline -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:48.275 07:00:52 app_cmdline -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:48.275 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:48.275 07:00:52 app_cmdline -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:48.275 07:00:52 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:04:48.275 [2024-11-20 07:00:52.750568] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 00:04:48.275 [2024-11-20 07:00:52.750615] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1008815 ] 00:04:48.533 [2024-11-20 07:00:52.826623] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:48.533 [2024-11-20 07:00:52.870632] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:48.533 07:00:53 app_cmdline -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:48.533 07:00:53 app_cmdline -- common/autotest_common.sh@866 -- # return 0 00:04:48.533 07:00:53 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:04:48.791 { 00:04:48.791 "version": "SPDK v25.01-pre git sha1 6745f139b", 00:04:48.791 "fields": { 00:04:48.791 "major": 25, 00:04:48.791 "minor": 1, 00:04:48.791 "patch": 0, 00:04:48.791 "suffix": "-pre", 00:04:48.791 "commit": "6745f139b" 00:04:48.791 } 00:04:48.792 } 00:04:48.792 07:00:53 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:04:48.792 07:00:53 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:04:48.792 07:00:53 app_cmdline -- app/cmdline.sh@24 -- 
# expected_methods+=("spdk_get_version") 00:04:48.792 07:00:53 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:04:48.792 07:00:53 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:04:48.792 07:00:53 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:04:48.792 07:00:53 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:48.792 07:00:53 app_cmdline -- app/cmdline.sh@26 -- # sort 00:04:48.792 07:00:53 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:04:48.792 07:00:53 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:48.792 07:00:53 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:04:48.792 07:00:53 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:04:48.792 07:00:53 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:04:48.792 07:00:53 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:04:48.792 07:00:53 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:04:48.792 07:00:53 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:04:48.792 07:00:53 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:48.792 07:00:53 app_cmdline -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:04:48.792 07:00:53 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:48.792 07:00:53 app_cmdline -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:04:48.792 07:00:53 app_cmdline -- common/autotest_common.sh@642 -- # case 
"$(type -t "$arg")" in 00:04:48.792 07:00:53 app_cmdline -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:04:48.792 07:00:53 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:04:48.792 07:00:53 app_cmdline -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:04:49.050 request: 00:04:49.050 { 00:04:49.050 "method": "env_dpdk_get_mem_stats", 00:04:49.050 "req_id": 1 00:04:49.050 } 00:04:49.050 Got JSON-RPC error response 00:04:49.050 response: 00:04:49.050 { 00:04:49.050 "code": -32601, 00:04:49.050 "message": "Method not found" 00:04:49.050 } 00:04:49.050 07:00:53 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:04:49.050 07:00:53 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:49.050 07:00:53 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:49.050 07:00:53 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:49.050 07:00:53 app_cmdline -- app/cmdline.sh@1 -- # killprocess 1008815 00:04:49.050 07:00:53 app_cmdline -- common/autotest_common.sh@952 -- # '[' -z 1008815 ']' 00:04:49.050 07:00:53 app_cmdline -- common/autotest_common.sh@956 -- # kill -0 1008815 00:04:49.050 07:00:53 app_cmdline -- common/autotest_common.sh@957 -- # uname 00:04:49.050 07:00:53 app_cmdline -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:49.050 07:00:53 app_cmdline -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1008815 00:04:49.050 07:00:53 app_cmdline -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:49.050 07:00:53 app_cmdline -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:49.050 07:00:53 app_cmdline -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1008815' 00:04:49.050 killing process with pid 1008815 00:04:49.050 
07:00:53 app_cmdline -- common/autotest_common.sh@971 -- # kill 1008815 00:04:49.050 07:00:53 app_cmdline -- common/autotest_common.sh@976 -- # wait 1008815 00:04:49.619 00:04:49.619 real 0m1.351s 00:04:49.619 user 0m1.561s 00:04:49.619 sys 0m0.463s 00:04:49.619 07:00:53 app_cmdline -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:49.619 07:00:53 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:04:49.619 ************************************ 00:04:49.619 END TEST app_cmdline 00:04:49.619 ************************************ 00:04:49.619 07:00:53 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:04:49.619 07:00:53 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:49.619 07:00:53 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:49.619 07:00:53 -- common/autotest_common.sh@10 -- # set +x 00:04:49.619 ************************************ 00:04:49.619 START TEST version 00:04:49.619 ************************************ 00:04:49.619 07:00:53 version -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:04:49.619 * Looking for test storage... 
00:04:49.619 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:04:49.619 07:00:54 version -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:49.619 07:00:54 version -- common/autotest_common.sh@1691 -- # lcov --version 00:04:49.619 07:00:54 version -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:49.619 07:00:54 version -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:49.619 07:00:54 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:49.619 07:00:54 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:49.619 07:00:54 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:49.619 07:00:54 version -- scripts/common.sh@336 -- # IFS=.-: 00:04:49.619 07:00:54 version -- scripts/common.sh@336 -- # read -ra ver1 00:04:49.619 07:00:54 version -- scripts/common.sh@337 -- # IFS=.-: 00:04:49.619 07:00:54 version -- scripts/common.sh@337 -- # read -ra ver2 00:04:49.619 07:00:54 version -- scripts/common.sh@338 -- # local 'op=<' 00:04:49.619 07:00:54 version -- scripts/common.sh@340 -- # ver1_l=2 00:04:49.619 07:00:54 version -- scripts/common.sh@341 -- # ver2_l=1 00:04:49.619 07:00:54 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:49.619 07:00:54 version -- scripts/common.sh@344 -- # case "$op" in 00:04:49.619 07:00:54 version -- scripts/common.sh@345 -- # : 1 00:04:49.619 07:00:54 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:49.619 07:00:54 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:49.619 07:00:54 version -- scripts/common.sh@365 -- # decimal 1 00:04:49.619 07:00:54 version -- scripts/common.sh@353 -- # local d=1 00:04:49.619 07:00:54 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:49.619 07:00:54 version -- scripts/common.sh@355 -- # echo 1 00:04:49.619 07:00:54 version -- scripts/common.sh@365 -- # ver1[v]=1 00:04:49.619 07:00:54 version -- scripts/common.sh@366 -- # decimal 2 00:04:49.619 07:00:54 version -- scripts/common.sh@353 -- # local d=2 00:04:49.619 07:00:54 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:49.619 07:00:54 version -- scripts/common.sh@355 -- # echo 2 00:04:49.619 07:00:54 version -- scripts/common.sh@366 -- # ver2[v]=2 00:04:49.619 07:00:54 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:49.619 07:00:54 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:49.619 07:00:54 version -- scripts/common.sh@368 -- # return 0 00:04:49.619 07:00:54 version -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:49.619 07:00:54 version -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:49.619 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.619 --rc genhtml_branch_coverage=1 00:04:49.619 --rc genhtml_function_coverage=1 00:04:49.619 --rc genhtml_legend=1 00:04:49.619 --rc geninfo_all_blocks=1 00:04:49.619 --rc geninfo_unexecuted_blocks=1 00:04:49.619 00:04:49.619 ' 00:04:49.619 07:00:54 version -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:49.619 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.619 --rc genhtml_branch_coverage=1 00:04:49.619 --rc genhtml_function_coverage=1 00:04:49.619 --rc genhtml_legend=1 00:04:49.619 --rc geninfo_all_blocks=1 00:04:49.619 --rc geninfo_unexecuted_blocks=1 00:04:49.619 00:04:49.619 ' 00:04:49.619 07:00:54 version -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:49.619 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.619 --rc genhtml_branch_coverage=1 00:04:49.619 --rc genhtml_function_coverage=1 00:04:49.619 --rc genhtml_legend=1 00:04:49.619 --rc geninfo_all_blocks=1 00:04:49.619 --rc geninfo_unexecuted_blocks=1 00:04:49.619 00:04:49.619 ' 00:04:49.619 07:00:54 version -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:49.619 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.619 --rc genhtml_branch_coverage=1 00:04:49.619 --rc genhtml_function_coverage=1 00:04:49.619 --rc genhtml_legend=1 00:04:49.619 --rc geninfo_all_blocks=1 00:04:49.619 --rc geninfo_unexecuted_blocks=1 00:04:49.619 00:04:49.619 ' 00:04:49.619 07:00:54 version -- app/version.sh@17 -- # get_header_version major 00:04:49.619 07:00:54 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:04:49.619 07:00:54 version -- app/version.sh@14 -- # tr -d '"' 00:04:49.619 07:00:54 version -- app/version.sh@14 -- # cut -f2 00:04:49.619 07:00:54 version -- app/version.sh@17 -- # major=25 00:04:49.619 07:00:54 version -- app/version.sh@18 -- # get_header_version minor 00:04:49.619 07:00:54 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:04:49.619 07:00:54 version -- app/version.sh@14 -- # cut -f2 00:04:49.619 07:00:54 version -- app/version.sh@14 -- # tr -d '"' 00:04:49.619 07:00:54 version -- app/version.sh@18 -- # minor=1 00:04:49.619 07:00:54 version -- app/version.sh@19 -- # get_header_version patch 00:04:49.619 07:00:54 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:04:49.619 07:00:54 version -- app/version.sh@14 -- # cut -f2 00:04:49.619 07:00:54 version -- app/version.sh@14 -- # tr -d '"' 00:04:49.619 
07:00:54 version -- app/version.sh@19 -- # patch=0 00:04:49.619 07:00:54 version -- app/version.sh@20 -- # get_header_version suffix 00:04:49.619 07:00:54 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:04:49.619 07:00:54 version -- app/version.sh@14 -- # cut -f2 00:04:49.619 07:00:54 version -- app/version.sh@14 -- # tr -d '"' 00:04:49.619 07:00:54 version -- app/version.sh@20 -- # suffix=-pre 00:04:49.619 07:00:54 version -- app/version.sh@22 -- # version=25.1 00:04:49.619 07:00:54 version -- app/version.sh@25 -- # (( patch != 0 )) 00:04:49.619 07:00:54 version -- app/version.sh@28 -- # version=25.1rc0 00:04:49.619 07:00:54 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:04:49.619 07:00:54 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:04:49.879 07:00:54 version -- app/version.sh@30 -- # py_version=25.1rc0 00:04:49.879 07:00:54 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:04:49.879 00:04:49.879 real 0m0.243s 00:04:49.879 user 0m0.150s 00:04:49.879 sys 0m0.137s 00:04:49.879 07:00:54 version -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:49.879 07:00:54 version -- common/autotest_common.sh@10 -- # set +x 00:04:49.879 ************************************ 00:04:49.879 END TEST version 00:04:49.879 ************************************ 00:04:49.879 07:00:54 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:04:49.879 07:00:54 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:04:49.879 07:00:54 -- spdk/autotest.sh@194 -- # uname -s 00:04:49.879 07:00:54 -- spdk/autotest.sh@194 -- # [[ Linux 
== Linux ]] 00:04:49.879 07:00:54 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:04:49.879 07:00:54 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:04:49.879 07:00:54 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:04:49.879 07:00:54 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:04:49.879 07:00:54 -- spdk/autotest.sh@256 -- # timing_exit lib 00:04:49.879 07:00:54 -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:49.879 07:00:54 -- common/autotest_common.sh@10 -- # set +x 00:04:49.879 07:00:54 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:04:49.879 07:00:54 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:04:49.879 07:00:54 -- spdk/autotest.sh@272 -- # '[' 1 -eq 1 ']' 00:04:49.879 07:00:54 -- spdk/autotest.sh@273 -- # export NET_TYPE 00:04:49.879 07:00:54 -- spdk/autotest.sh@276 -- # '[' tcp = rdma ']' 00:04:49.879 07:00:54 -- spdk/autotest.sh@279 -- # '[' tcp = tcp ']' 00:04:49.879 07:00:54 -- spdk/autotest.sh@280 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:04:49.879 07:00:54 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:04:49.879 07:00:54 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:49.879 07:00:54 -- common/autotest_common.sh@10 -- # set +x 00:04:49.879 ************************************ 00:04:49.879 START TEST nvmf_tcp 00:04:49.879 ************************************ 00:04:49.879 07:00:54 nvmf_tcp -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:04:49.879 * Looking for test storage... 
00:04:49.879 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:04:49.879 07:00:54 nvmf_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:49.879 07:00:54 nvmf_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:04:49.879 07:00:54 nvmf_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:50.138 07:00:54 nvmf_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:50.138 07:00:54 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:50.138 07:00:54 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:50.138 07:00:54 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:50.138 07:00:54 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:04:50.138 07:00:54 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:04:50.138 07:00:54 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:04:50.138 07:00:54 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:04:50.138 07:00:54 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:04:50.138 07:00:54 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:04:50.138 07:00:54 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:04:50.138 07:00:54 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:50.138 07:00:54 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:04:50.138 07:00:54 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:04:50.138 07:00:54 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:50.138 07:00:54 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:50.138 07:00:54 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:04:50.138 07:00:54 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:04:50.138 07:00:54 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:50.138 07:00:54 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:04:50.138 07:00:54 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:04:50.138 07:00:54 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:04:50.138 07:00:54 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:04:50.138 07:00:54 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:50.138 07:00:54 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:04:50.138 07:00:54 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:04:50.138 07:00:54 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:50.138 07:00:54 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:50.138 07:00:54 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:04:50.138 07:00:54 nvmf_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:50.138 07:00:54 nvmf_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:50.138 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.138 --rc genhtml_branch_coverage=1 00:04:50.138 --rc genhtml_function_coverage=1 00:04:50.138 --rc genhtml_legend=1 00:04:50.138 --rc geninfo_all_blocks=1 00:04:50.138 --rc geninfo_unexecuted_blocks=1 00:04:50.138 00:04:50.138 ' 00:04:50.138 07:00:54 nvmf_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:50.138 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.138 --rc genhtml_branch_coverage=1 00:04:50.138 --rc genhtml_function_coverage=1 00:04:50.138 --rc genhtml_legend=1 00:04:50.138 --rc geninfo_all_blocks=1 00:04:50.138 --rc geninfo_unexecuted_blocks=1 00:04:50.138 00:04:50.138 ' 00:04:50.138 07:00:54 nvmf_tcp -- common/autotest_common.sh@1705 -- # export 
'LCOV=lcov 00:04:50.138 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.138 --rc genhtml_branch_coverage=1 00:04:50.138 --rc genhtml_function_coverage=1 00:04:50.138 --rc genhtml_legend=1 00:04:50.138 --rc geninfo_all_blocks=1 00:04:50.138 --rc geninfo_unexecuted_blocks=1 00:04:50.138 00:04:50.138 ' 00:04:50.138 07:00:54 nvmf_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:50.138 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.138 --rc genhtml_branch_coverage=1 00:04:50.138 --rc genhtml_function_coverage=1 00:04:50.138 --rc genhtml_legend=1 00:04:50.138 --rc geninfo_all_blocks=1 00:04:50.138 --rc geninfo_unexecuted_blocks=1 00:04:50.138 00:04:50.138 ' 00:04:50.138 07:00:54 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:04:50.138 07:00:54 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:04:50.138 07:00:54 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:04:50.139 07:00:54 nvmf_tcp -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:04:50.139 07:00:54 nvmf_tcp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:50.139 07:00:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:50.139 ************************************ 00:04:50.139 START TEST nvmf_target_core 00:04:50.139 ************************************ 00:04:50.139 07:00:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:04:50.139 * Looking for test storage... 
00:04:50.139 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:04:50.139 07:00:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:50.139 07:00:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1691 -- # lcov --version 00:04:50.139 07:00:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:50.139 07:00:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:50.139 07:00:54 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:50.139 07:00:54 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:50.139 07:00:54 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:50.139 07:00:54 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:04:50.139 07:00:54 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:04:50.139 07:00:54 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:04:50.139 07:00:54 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:04:50.139 07:00:54 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:04:50.139 07:00:54 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:04:50.139 07:00:54 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:04:50.139 07:00:54 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:50.139 07:00:54 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:04:50.139 07:00:54 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:04:50.139 07:00:54 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:50.139 07:00:54 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:50.139 07:00:54 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:04:50.139 07:00:54 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:04:50.139 07:00:54 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:50.139 07:00:54 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:04:50.139 07:00:54 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:04:50.139 07:00:54 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:04:50.139 07:00:54 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:04:50.139 07:00:54 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:50.139 07:00:54 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:04:50.139 07:00:54 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:04:50.139 07:00:54 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:50.139 07:00:54 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:50.139 07:00:54 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:04:50.139 07:00:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:50.139 07:00:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:50.139 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.139 --rc genhtml_branch_coverage=1 00:04:50.139 --rc genhtml_function_coverage=1 00:04:50.139 --rc genhtml_legend=1 00:04:50.139 --rc geninfo_all_blocks=1 00:04:50.139 --rc geninfo_unexecuted_blocks=1 00:04:50.139 00:04:50.139 ' 00:04:50.139 07:00:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:50.139 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.139 --rc genhtml_branch_coverage=1 
00:04:50.139 --rc genhtml_function_coverage=1 00:04:50.139 --rc genhtml_legend=1 00:04:50.139 --rc geninfo_all_blocks=1 00:04:50.139 --rc geninfo_unexecuted_blocks=1 00:04:50.139 00:04:50.139 ' 00:04:50.139 07:00:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:50.139 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.139 --rc genhtml_branch_coverage=1 00:04:50.139 --rc genhtml_function_coverage=1 00:04:50.139 --rc genhtml_legend=1 00:04:50.139 --rc geninfo_all_blocks=1 00:04:50.139 --rc geninfo_unexecuted_blocks=1 00:04:50.139 00:04:50.139 ' 00:04:50.139 07:00:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:50.139 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.139 --rc genhtml_branch_coverage=1 00:04:50.139 --rc genhtml_function_coverage=1 00:04:50.139 --rc genhtml_legend=1 00:04:50.139 --rc geninfo_all_blocks=1 00:04:50.139 --rc geninfo_unexecuted_blocks=1 00:04:50.139 00:04:50.139 ' 00:04:50.139 07:00:54 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:04:50.398 07:00:54 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:04:50.398 07:00:54 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:50.398 07:00:54 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:04:50.398 07:00:54 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:50.398 07:00:54 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:50.398 07:00:54 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:50.398 07:00:54 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:50.398 07:00:54 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:50.398 07:00:54 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:50.398 07:00:54 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:50.398 07:00:54 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:50.398 07:00:54 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:50.398 07:00:54 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:50.398 07:00:54 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:04:50.398 07:00:54 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:04:50.398 07:00:54 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:50.398 07:00:54 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:50.399 07:00:54 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:04:50.399 07:00:54 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:50.399 07:00:54 nvmf_tcp.nvmf_target_core -- 
nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:50.399 07:00:54 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:04:50.399 07:00:54 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:50.399 07:00:54 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:50.399 07:00:54 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:50.399 07:00:54 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:50.399 07:00:54 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:50.399 07:00:54 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:50.399 07:00:54 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:04:50.399 07:00:54 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:50.399 07:00:54 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:04:50.399 07:00:54 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:50.399 07:00:54 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:50.399 07:00:54 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:50.399 07:00:54 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:50.399 07:00:54 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:50.399 07:00:54 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:50.399 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:50.399 07:00:54 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 
00:04:50.399 07:00:54 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:50.399 07:00:54 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:50.399 07:00:54 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:04:50.399 07:00:54 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:04:50.399 07:00:54 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:04:50.399 07:00:54 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:04:50.399 07:00:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:04:50.399 07:00:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:50.399 07:00:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:04:50.399 ************************************ 00:04:50.399 START TEST nvmf_abort 00:04:50.399 ************************************ 00:04:50.399 07:00:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:04:50.399 * Looking for test storage... 
00:04:50.399 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:04:50.399 07:00:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:50.399 07:00:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1691 -- # lcov --version 00:04:50.399 07:00:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:50.399 07:00:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:50.399 07:00:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:50.399 07:00:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:50.399 07:00:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:50.399 07:00:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:04:50.399 07:00:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:04:50.399 07:00:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:04:50.399 07:00:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:04:50.399 07:00:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:04:50.399 07:00:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:04:50.399 07:00:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:04:50.399 07:00:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:50.399 07:00:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:04:50.399 07:00:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:04:50.399 07:00:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:50.399 
07:00:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:50.399 07:00:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:04:50.399 07:00:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:04:50.399 07:00:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:50.399 07:00:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:04:50.399 07:00:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:04:50.399 07:00:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:04:50.399 07:00:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:04:50.399 07:00:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:50.399 07:00:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:04:50.399 07:00:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:04:50.399 07:00:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:50.399 07:00:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:50.399 07:00:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:04:50.399 07:00:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:50.399 07:00:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:50.399 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.399 --rc genhtml_branch_coverage=1 00:04:50.399 --rc genhtml_function_coverage=1 00:04:50.399 --rc genhtml_legend=1 00:04:50.399 --rc geninfo_all_blocks=1 00:04:50.399 --rc 
geninfo_unexecuted_blocks=1 00:04:50.399 00:04:50.399 ' 00:04:50.399 07:00:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:50.399 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.399 --rc genhtml_branch_coverage=1 00:04:50.399 --rc genhtml_function_coverage=1 00:04:50.399 --rc genhtml_legend=1 00:04:50.399 --rc geninfo_all_blocks=1 00:04:50.399 --rc geninfo_unexecuted_blocks=1 00:04:50.399 00:04:50.399 ' 00:04:50.399 07:00:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:50.399 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.399 --rc genhtml_branch_coverage=1 00:04:50.399 --rc genhtml_function_coverage=1 00:04:50.399 --rc genhtml_legend=1 00:04:50.399 --rc geninfo_all_blocks=1 00:04:50.399 --rc geninfo_unexecuted_blocks=1 00:04:50.399 00:04:50.399 ' 00:04:50.399 07:00:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:50.399 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.399 --rc genhtml_branch_coverage=1 00:04:50.399 --rc genhtml_function_coverage=1 00:04:50.399 --rc genhtml_legend=1 00:04:50.399 --rc geninfo_all_blocks=1 00:04:50.399 --rc geninfo_unexecuted_blocks=1 00:04:50.399 00:04:50.399 ' 00:04:50.399 07:00:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:50.399 07:00:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:04:50.399 07:00:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:50.399 07:00:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:50.399 07:00:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:50.399 07:00:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:04:50.399 07:00:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:50.399 07:00:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:50.399 07:00:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:50.399 07:00:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:50.400 07:00:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:50.400 07:00:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:50.658 07:00:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:04:50.658 07:00:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:04:50.658 07:00:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:50.658 07:00:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:50.658 07:00:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:04:50.658 07:00:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:50.658 07:00:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:50.658 07:00:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:04:50.658 07:00:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:50.658 07:00:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:50.658 07:00:54 
nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:50.658 07:00:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:50.658 07:00:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:50.658 07:00:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:50.658 07:00:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:04:50.658 07:00:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:50.658 07:00:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:04:50.658 07:00:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:50.658 07:00:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:50.658 07:00:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:50.658 07:00:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:50.658 07:00:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:50.658 07:00:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:50.659 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:50.659 07:00:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:50.659 07:00:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:50.659 07:00:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:50.659 07:00:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:04:50.659 07:00:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:04:50.659 07:00:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:04:50.659 07:00:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:04:50.659 07:00:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:04:50.659 07:00:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:04:50.659 07:00:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:04:50.659 07:00:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:04:50.659 07:00:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:04:50.659 07:00:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:04:50.659 07:00:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:04:50.659 07:00:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:04:50.659 07:00:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:04:50.659 07:00:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:04:50.659 07:00:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:57.224 07:01:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:04:57.224 07:01:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:04:57.224 07:01:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:04:57.224 07:01:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:04:57.224 07:01:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:04:57.224 07:01:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:04:57.224 07:01:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:04:57.224 07:01:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:04:57.224 07:01:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:04:57.224 07:01:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:04:57.224 07:01:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:04:57.224 07:01:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:04:57.224 07:01:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:04:57.224 07:01:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:04:57.224 07:01:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:04:57.224 07:01:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:04:57.224 07:01:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:04:57.224 07:01:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:04:57.224 07:01:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:04:57.225 07:01:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:04:57.225 07:01:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:04:57.225 07:01:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:04:57.225 07:01:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:04:57.225 07:01:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:04:57.225 07:01:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:04:57.225 07:01:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:04:57.225 07:01:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:04:57.225 07:01:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:04:57.225 07:01:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:04:57.225 07:01:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:04:57.225 07:01:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:04:57.225 07:01:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:04:57.225 07:01:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:04:57.225 07:01:00 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:04:57.225 07:01:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:04:57.225 Found 0000:86:00.0 (0x8086 - 0x159b) 00:04:57.225 07:01:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:04:57.225 07:01:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:04:57.225 07:01:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:04:57.225 07:01:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:04:57.225 07:01:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:04:57.225 07:01:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:04:57.225 07:01:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:04:57.225 Found 0000:86:00.1 (0x8086 - 0x159b) 00:04:57.225 07:01:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:04:57.225 07:01:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:04:57.225 07:01:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:04:57.225 07:01:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:04:57.225 07:01:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:04:57.225 07:01:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:04:57.225 07:01:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:04:57.225 07:01:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:04:57.225 07:01:00 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:04:57.225 07:01:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:04:57.225 07:01:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:04:57.225 07:01:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:04:57.225 07:01:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:04:57.225 07:01:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:04:57.225 07:01:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:04:57.225 07:01:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:04:57.225 Found net devices under 0000:86:00.0: cvl_0_0 00:04:57.225 07:01:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:04:57.225 07:01:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:04:57.225 07:01:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:04:57.225 07:01:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:04:57.225 07:01:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:04:57.225 07:01:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:04:57.225 07:01:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:04:57.225 07:01:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:04:57.225 07:01:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net 
devices under 0000:86:00.1: cvl_0_1' 00:04:57.225 Found net devices under 0000:86:00.1: cvl_0_1 00:04:57.225 07:01:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:04:57.225 07:01:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:04:57.225 07:01:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:04:57.225 07:01:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:04:57.225 07:01:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:04:57.225 07:01:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:04:57.225 07:01:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:04:57.225 07:01:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:04:57.225 07:01:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:04:57.225 07:01:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:04:57.225 07:01:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:04:57.225 07:01:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:04:57.225 07:01:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:04:57.225 07:01:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:04:57.225 07:01:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:04:57.225 07:01:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:04:57.225 07:01:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:04:57.225 07:01:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:04:57.225 07:01:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:04:57.225 07:01:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:04:57.225 07:01:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:04:57.225 07:01:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:04:57.225 07:01:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:04:57.225 07:01:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:04:57.225 07:01:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:04:57.225 07:01:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:04:57.225 07:01:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:04:57.225 07:01:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:04:57.225 07:01:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:04:57.225 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:04:57.225 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.428 ms 00:04:57.225 00:04:57.225 --- 10.0.0.2 ping statistics --- 00:04:57.225 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:04:57.225 rtt min/avg/max/mdev = 0.428/0.428/0.428/0.000 ms 00:04:57.225 07:01:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:04:57.225 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:04:57.225 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.208 ms 00:04:57.225 00:04:57.225 --- 10.0.0.1 ping statistics --- 00:04:57.225 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:04:57.225 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:04:57.225 07:01:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:04:57.225 07:01:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:04:57.225 07:01:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:04:57.225 07:01:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:04:57.225 07:01:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:04:57.225 07:01:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:04:57.225 07:01:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:04:57.225 07:01:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:04:57.225 07:01:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:04:57.225 07:01:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:04:57.225 07:01:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:04:57.225 07:01:00 nvmf_tcp.nvmf_target_core.nvmf_abort 
-- common/autotest_common.sh@724 -- # xtrace_disable 00:04:57.225 07:01:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:57.225 07:01:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=1012490 00:04:57.225 07:01:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 1012490 00:04:57.225 07:01:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:04:57.225 07:01:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@833 -- # '[' -z 1012490 ']' 00:04:57.225 07:01:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:57.225 07:01:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:57.225 07:01:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:57.226 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:57.226 07:01:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:57.226 07:01:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:57.226 [2024-11-20 07:01:00.998347] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 
00:04:57.226 [2024-11-20 07:01:00.998397] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:04:57.226 [2024-11-20 07:01:01.079905] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:57.226 [2024-11-20 07:01:01.124578] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:04:57.226 [2024-11-20 07:01:01.124617] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:04:57.226 [2024-11-20 07:01:01.124624] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:57.226 [2024-11-20 07:01:01.124630] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:57.226 [2024-11-20 07:01:01.124635] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:04:57.226 [2024-11-20 07:01:01.126006] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:57.226 [2024-11-20 07:01:01.126113] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:57.226 [2024-11-20 07:01:01.126114] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:57.484 07:01:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:57.484 07:01:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@866 -- # return 0 00:04:57.484 07:01:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:04:57.484 07:01:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:57.484 07:01:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:57.484 07:01:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:04:57.484 07:01:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:04:57.484 07:01:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:57.484 07:01:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:57.484 [2024-11-20 07:01:01.877815] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:57.484 07:01:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:57.484 07:01:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:04:57.484 07:01:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:57.484 07:01:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:57.484 Malloc0 00:04:57.484 07:01:01 
nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:57.484 07:01:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:04:57.484 07:01:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:57.485 07:01:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:57.485 Delay0 00:04:57.485 07:01:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:57.485 07:01:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:04:57.485 07:01:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:57.485 07:01:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:57.485 07:01:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:57.485 07:01:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:04:57.485 07:01:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:57.485 07:01:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:57.485 07:01:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:57.485 07:01:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:04:57.485 07:01:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:57.485 07:01:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:57.485 [2024-11-20 07:01:01.952998] 
tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:04:57.485 07:01:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:57.485 07:01:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:04:57.485 07:01:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:57.485 07:01:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:57.485 07:01:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:57.485 07:01:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:04:57.743 [2024-11-20 07:01:02.090223] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:04:59.645 Initializing NVMe Controllers 00:04:59.645 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:04:59.645 controller IO queue size 128 less than required 00:04:59.645 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:04:59.645 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:04:59.645 Initialization complete. Launching workers. 
00:04:59.645 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 127, failed: 36697 00:04:59.645 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 36762, failed to submit 62 00:04:59.645 success 36701, unsuccessful 61, failed 0 00:04:59.645 07:01:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:04:59.645 07:01:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:59.645 07:01:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:59.645 07:01:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:59.645 07:01:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:04:59.645 07:01:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:04:59.645 07:01:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:04:59.645 07:01:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:04:59.645 07:01:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:04:59.645 07:01:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:04:59.645 07:01:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:04:59.645 07:01:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:04:59.645 rmmod nvme_tcp 00:04:59.645 rmmod nvme_fabrics 00:04:59.645 rmmod nvme_keyring 00:04:59.645 07:01:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:04:59.645 07:01:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:04:59.645 07:01:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:04:59.645 07:01:04 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 1012490 ']' 00:04:59.645 07:01:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 1012490 00:04:59.645 07:01:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@952 -- # '[' -z 1012490 ']' 00:04:59.645 07:01:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # kill -0 1012490 00:04:59.645 07:01:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@957 -- # uname 00:04:59.904 07:01:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:59.904 07:01:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1012490 00:04:59.904 07:01:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:04:59.904 07:01:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:04:59.904 07:01:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1012490' 00:04:59.904 killing process with pid 1012490 00:04:59.904 07:01:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@971 -- # kill 1012490 00:04:59.904 07:01:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@976 -- # wait 1012490 00:04:59.904 07:01:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:04:59.904 07:01:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:04:59.904 07:01:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:04:59.904 07:01:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:04:59.904 07:01:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:04:59.904 07:01:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@791 -- # iptables-save 00:04:59.904 07:01:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:04:59.904 07:01:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:04:59.904 07:01:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:04:59.904 07:01:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:04:59.904 07:01:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:04:59.904 07:01:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:02.438 07:01:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:05:02.438 00:05:02.438 real 0m11.746s 00:05:02.438 user 0m13.461s 00:05:02.438 sys 0m5.399s 00:05:02.438 07:01:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:02.438 07:01:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:02.438 ************************************ 00:05:02.438 END TEST nvmf_abort 00:05:02.438 ************************************ 00:05:02.438 07:01:06 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:05:02.438 07:01:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:05:02.438 07:01:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:02.438 07:01:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:05:02.438 ************************************ 00:05:02.438 START TEST nvmf_ns_hotplug_stress 00:05:02.438 ************************************ 00:05:02.438 07:01:06 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:05:02.438 * Looking for test storage... 00:05:02.438 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:02.438 07:01:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:02.439 07:01:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lcov --version 00:05:02.439 07:01:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:02.439 07:01:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:02.439 07:01:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:02.439 07:01:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:02.439 07:01:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:02.439 07:01:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:05:02.439 07:01:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:05:02.439 07:01:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:05:02.439 07:01:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:05:02.439 07:01:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:05:02.439 07:01:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:05:02.439 07:01:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:05:02.439 
07:01:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:02.439 07:01:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:05:02.439 07:01:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:05:02.439 07:01:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:02.439 07:01:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:02.439 07:01:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:05:02.439 07:01:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:05:02.439 07:01:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:02.439 07:01:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:05:02.439 07:01:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:05:02.439 07:01:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:05:02.439 07:01:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:05:02.439 07:01:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:02.439 07:01:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:05:02.439 07:01:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:05:02.439 07:01:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:02.439 07:01:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:02.439 07:01:06 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:05:02.439 07:01:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:02.439 07:01:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:02.439 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.439 --rc genhtml_branch_coverage=1 00:05:02.439 --rc genhtml_function_coverage=1 00:05:02.439 --rc genhtml_legend=1 00:05:02.439 --rc geninfo_all_blocks=1 00:05:02.439 --rc geninfo_unexecuted_blocks=1 00:05:02.439 00:05:02.439 ' 00:05:02.439 07:01:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:02.439 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.439 --rc genhtml_branch_coverage=1 00:05:02.439 --rc genhtml_function_coverage=1 00:05:02.439 --rc genhtml_legend=1 00:05:02.439 --rc geninfo_all_blocks=1 00:05:02.439 --rc geninfo_unexecuted_blocks=1 00:05:02.439 00:05:02.439 ' 00:05:02.439 07:01:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:02.439 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.439 --rc genhtml_branch_coverage=1 00:05:02.439 --rc genhtml_function_coverage=1 00:05:02.439 --rc genhtml_legend=1 00:05:02.439 --rc geninfo_all_blocks=1 00:05:02.439 --rc geninfo_unexecuted_blocks=1 00:05:02.439 00:05:02.439 ' 00:05:02.439 07:01:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:02.439 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.439 --rc genhtml_branch_coverage=1 00:05:02.439 --rc genhtml_function_coverage=1 00:05:02.439 --rc genhtml_legend=1 00:05:02.439 --rc geninfo_all_blocks=1 00:05:02.439 --rc geninfo_unexecuted_blocks=1 00:05:02.439 
00:05:02.439 ' 00:05:02.439 07:01:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:02.439 07:01:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:05:02.439 07:01:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:02.439 07:01:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:02.439 07:01:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:02.439 07:01:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:02.439 07:01:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:02.439 07:01:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:02.439 07:01:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:02.439 07:01:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:02.439 07:01:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:02.439 07:01:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:02.439 07:01:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:05:02.439 07:01:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:05:02.439 07:01:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 
00:05:02.439 07:01:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:02.439 07:01:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:02.439 07:01:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:02.439 07:01:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:02.439 07:01:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:05:02.439 07:01:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:02.439 07:01:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:02.439 07:01:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:02.439 07:01:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:02.439 07:01:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:02.439 07:01:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:02.439 07:01:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:05:02.439 07:01:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:02.439 07:01:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:05:02.439 07:01:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:02.440 07:01:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:02.440 07:01:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:02.440 07:01:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:02.440 07:01:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:02.440 07:01:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:02.440 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:02.440 07:01:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:02.440 07:01:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:02.440 07:01:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:02.440 07:01:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:02.440 07:01:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:05:02.440 07:01:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:05:02.440 07:01:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:02.440 07:01:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:05:02.440 07:01:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:05:02.440 07:01:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:05:02.440 07:01:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:02.440 07:01:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:02.440 07:01:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:02.440 07:01:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:05:02.440 07:01:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:05:02.440 07:01:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:05:02.440 07:01:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:09.015 07:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:09.015 07:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:05:09.015 07:01:12 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:05:09.015 07:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:05:09.016 07:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:05:09.016 07:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:05:09.016 07:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:05:09.016 07:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:05:09.016 07:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:05:09.016 07:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:05:09.016 07:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:05:09.016 07:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:05:09.016 07:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:05:09.016 07:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:05:09.016 07:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:05:09.016 07:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:09.016 07:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:09.016 07:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:09.016 07:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:09.016 07:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:09.016 07:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:09.016 07:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:09.016 07:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:05:09.016 07:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:09.016 07:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:09.016 07:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:09.016 07:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:09.016 07:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:05:09.016 07:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:05:09.016 07:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:05:09.016 07:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:05:09.016 07:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:05:09.016 07:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:05:09.016 07:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:05:09.016 07:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:05:09.016 Found 0000:86:00.0 (0x8086 - 0x159b) 00:05:09.016 07:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:09.016 07:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:09.016 07:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:09.016 07:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:09.016 07:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:09.016 07:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:09.016 07:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:05:09.016 Found 0000:86:00.1 (0x8086 - 0x159b) 00:05:09.016 07:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:09.016 07:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:09.016 07:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:09.016 07:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:09.016 07:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:09.016 07:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:05:09.016 07:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:05:09.016 07:01:12 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:05:09.016 07:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:09.016 07:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:09.016 07:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:09.016 07:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:09.016 07:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:09.016 07:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:09.016 07:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:09.016 07:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:05:09.016 Found net devices under 0000:86:00.0: cvl_0_0 00:05:09.016 07:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:09.016 07:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:09.016 07:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:09.016 07:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:09.016 07:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:09.016 07:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:09.016 07:01:12 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:09.016 07:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:09.016 07:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:05:09.016 Found net devices under 0000:86:00.1: cvl_0_1 00:05:09.016 07:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:09.016 07:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:05:09.016 07:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:05:09.016 07:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:05:09.016 07:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:05:09.016 07:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:05:09.016 07:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:05:09.016 07:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:09.016 07:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:09.016 07:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:09.016 07:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:05:09.016 07:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:09.016 07:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:09.016 07:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:05:09.016 07:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:05:09.016 07:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:05:09.016 07:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:05:09.016 07:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:05:09.016 07:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:05:09.016 07:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:05:09.016 07:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:05:09.016 07:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:05:09.016 07:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:05:09.016 07:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:05:09.016 07:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:05:09.016 07:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:05:09.016 07:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:05:09.016 07:01:12 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:05:09.016 07:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:05:09.016 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:05:09.016 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.360 ms 00:05:09.016 00:05:09.016 --- 10.0.0.2 ping statistics --- 00:05:09.016 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:09.016 rtt min/avg/max/mdev = 0.360/0.360/0.360/0.000 ms 00:05:09.016 07:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:05:09.016 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:05:09.016 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.226 ms 00:05:09.016 00:05:09.016 --- 10.0.0.1 ping statistics --- 00:05:09.016 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:09.016 rtt min/avg/max/mdev = 0.226/0.226/0.226/0.000 ms 00:05:09.017 07:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:05:09.017 07:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:05:09.017 07:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:05:09.017 07:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:05:09.017 07:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:05:09.017 07:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:05:09.017 07:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t 
tcp -o' 00:05:09.017 07:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:05:09.017 07:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:05:09.017 07:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:05:09.017 07:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:05:09.017 07:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:09.017 07:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:09.017 07:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=1016534 00:05:09.017 07:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:05:09.017 07:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 1016534 00:05:09.017 07:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # '[' -z 1016534 ']' 00:05:09.017 07:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:09.017 07:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:09.017 07:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:09.017 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
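The nvmfappstart step above backgrounds nvmf_tgt inside the target namespace and then blocks in waitforlisten until the process answers on /var/tmp/spdk.sock. The retry skeleton behind that wait can be sketched as follows (max_retries=100 comes from the log; the probe command here is an arbitrary stand-in so the sketch runs without SPDK installed):

```shell
# Hedged sketch of the waitforlisten retry loop from autotest_common.sh:
# poll a probe command until it succeeds or max_retries is exhausted.
# The real probe checks the /var/tmp/spdk.sock RPC socket; any command
# can stand in for it here.
waitforlisten() {
    local max_retries=$1; shift
    local i=0
    while [ "$i" -lt "$max_retries" ]; do
        if "$@"; then
            return 0            # probe succeeded: target is listening
        fi
        i=$((i + 1))
        # the real script sleeps briefly between attempts
    done
    return 1                    # process never came up
}

waitforlisten 100 true && echo "listening"   # prints "listening"
```

If the retries run out, the caller aborts the test instead of issuing RPCs against a dead target.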
00:05:09.017 07:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:09.017 07:01:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:09.017 [2024-11-20 07:01:12.855474] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 00:05:09.017 [2024-11-20 07:01:12.855523] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:09.017 [2024-11-20 07:01:12.937090] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:09.017 [2024-11-20 07:01:12.981841] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:05:09.017 [2024-11-20 07:01:12.981878] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:09.017 [2024-11-20 07:01:12.981886] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:09.017 [2024-11-20 07:01:12.981892] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:09.017 [2024-11-20 07:01:12.981897] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
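The namespace plumbing traced earlier in the log (common.sh @265 through @291) isolates the target NIC cvl_0_0 in its own network namespace while the initiator side cvl_0_1 stays in the root namespace, then verifies connectivity in both directions before the target starts. A dry-run sketch of that sequence, with `run` echoing instead of executing so no root privileges or real NICs are needed:

```shell
# Dry-run sketch of the netns setup performed by nvmf/common.sh.
# Interface names and addresses are taken from the log above;
# `run` prints each command instead of executing it.
run() { echo "+ $*"; }

NS=cvl_0_0_ns_spdk
run ip netns add "$NS"
run ip link set cvl_0_0 netns "$NS"        # target NIC moves into the namespace
run ip addr add 10.0.0.1/24 dev cvl_0_1    # initiator side stays in the root ns
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
run ip link set cvl_0_1 up
run ip netns exec "$NS" ip link set cvl_0_0 up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                     # root ns -> namespaced target
run ip netns exec "$NS" ping -c 1 10.0.0.1 # namespaced target -> root ns
```

Dropping the `run` prefix (and running as root on a host with those interfaces) reproduces the setup the log records.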
00:05:09.017 [2024-11-20 07:01:12.983302] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:09.017 [2024-11-20 07:01:12.983409] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:09.017 [2024-11-20 07:01:12.983411] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:09.017 07:01:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:09.017 07:01:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@866 -- # return 0 00:05:09.017 07:01:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:05:09.017 07:01:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:09.017 07:01:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:09.017 07:01:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:09.017 07:01:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:05:09.017 07:01:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:05:09.017 [2024-11-20 07:01:13.306149] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:09.017 07:01:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:05:09.017 07:01:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:05:09.275 [2024-11-20 07:01:13.711644] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:09.275 07:01:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:05:09.533 07:01:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:05:09.791 Malloc0 00:05:09.791 07:01:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:05:09.791 Delay0 00:05:09.791 07:01:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:10.049 07:01:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:05:10.308 NULL1 00:05:10.308 07:01:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:05:10.566 07:01:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:05:10.566 07:01:14 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=1017017 00:05:10.566 07:01:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1017017 00:05:10.566 07:01:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:10.824 07:01:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:10.824 07:01:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:05:10.824 07:01:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:05:11.083 true 00:05:11.083 07:01:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1017017 00:05:11.083 07:01:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:11.341 07:01:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:11.600 07:01:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:05:11.600 07:01:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:05:11.600 true 00:05:11.600 07:01:16 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1017017 00:05:11.600 07:01:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:12.977 Read completed with error (sct=0, sc=11) 00:05:12.977 07:01:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:12.977 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:12.977 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:12.977 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:12.977 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:12.977 07:01:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:05:12.977 07:01:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:05:13.236 true 00:05:13.236 07:01:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1017017 00:05:13.236 07:01:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:13.495 07:01:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:13.754 07:01:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # 
null_size=1004 00:05:13.754 07:01:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:05:13.754 true 00:05:13.754 07:01:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1017017 00:05:13.754 07:01:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:14.013 07:01:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:14.271 07:01:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:05:14.271 07:01:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:05:14.530 true 00:05:14.530 07:01:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1017017 00:05:14.530 07:01:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:14.530 07:01:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:14.792 07:01:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:05:14.792 07:01:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:05:15.050 true 00:05:15.050 07:01:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1017017 00:05:15.050 07:01:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:15.985 07:01:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:16.244 07:01:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:05:16.244 07:01:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:05:16.502 true 00:05:16.502 07:01:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1017017 00:05:16.502 07:01:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:16.760 07:01:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:16.760 07:01:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:05:16.760 07:01:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:05:17.019 true 00:05:17.019 07:01:21 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1017017 00:05:17.019 07:01:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:17.955 07:01:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:18.213 07:01:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:05:18.213 07:01:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:05:18.472 true 00:05:18.472 07:01:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1017017 00:05:18.472 07:01:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:18.731 07:01:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:18.989 07:01:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:05:18.989 07:01:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:05:18.989 true 00:05:18.989 07:01:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1017017 00:05:18.989 07:01:23 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:20.367 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:20.367 07:01:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:20.367 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:20.367 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:20.367 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:20.367 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:20.367 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:20.367 07:01:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:05:20.367 07:01:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:05:20.625 true 00:05:20.625 07:01:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1017017 00:05:20.625 07:01:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:21.559 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:21.559 07:01:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:21.559 07:01:26 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:05:21.559 07:01:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:05:21.817 true 00:05:21.817 07:01:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1017017 00:05:21.817 07:01:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:22.076 07:01:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:22.076 07:01:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:05:22.076 07:01:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:05:22.334 true 00:05:22.334 07:01:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1017017 00:05:22.334 07:01:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:23.708 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:23.708 07:01:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:23.708 Message suppressed 999 times: Read completed with error 
(sct=0, sc=11) 00:05:23.708 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:23.708 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:23.708 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:23.708 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:23.708 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:23.708 07:01:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:05:23.708 07:01:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:05:23.967 true 00:05:23.967 07:01:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1017017 00:05:23.967 07:01:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:24.902 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:24.902 07:01:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:24.902 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:24.902 07:01:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:05:24.902 07:01:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:05:25.161 true 00:05:25.161 07:01:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 
1017017 00:05:25.161 07:01:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:25.419 07:01:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:25.678 07:01:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:05:25.678 07:01:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:05:25.939 true 00:05:25.939 07:01:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1017017 00:05:25.939 07:01:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:27.316 07:01:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:27.316 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:27.316 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:27.316 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:27.316 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:27.316 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:27.316 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:27.316 07:01:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:05:27.316 07:01:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:05:27.575 true 00:05:27.575 07:01:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1017017 00:05:27.575 07:01:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:28.509 07:01:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:28.509 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:28.509 07:01:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:05:28.509 07:01:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:05:28.767 true 00:05:28.767 07:01:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1017017 00:05:28.767 07:01:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:28.767 07:01:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:29.025 07:01:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 
00:05:29.025 07:01:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:05:29.283 true 00:05:29.283 07:01:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1017017 00:05:29.283 07:01:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:29.542 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:29.542 07:01:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:29.542 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:29.542 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:29.542 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:29.542 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:29.542 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:29.801 07:01:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:05:29.801 07:01:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:05:29.801 true 00:05:29.801 07:01:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1017017 00:05:29.801 07:01:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:05:30.737 07:01:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:30.995 07:01:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:05:30.995 07:01:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:05:30.995 true 00:05:30.995 07:01:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1017017 00:05:30.995 07:01:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:31.312 07:01:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:31.634 07:01:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:05:31.634 07:01:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:05:31.634 true 00:05:31.634 07:01:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1017017 00:05:31.634 07:01:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:33.011 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:33.011 07:01:37 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:33.011 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:33.011 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:33.011 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:33.011 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:33.011 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:33.011 07:01:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:05:33.011 07:01:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:05:33.269 true 00:05:33.269 07:01:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1017017 00:05:33.269 07:01:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:34.205 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:34.205 07:01:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:34.205 07:01:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:05:34.205 07:01:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:05:34.463 true 
00:05:34.463 07:01:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1017017 00:05:34.463 07:01:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:34.722 07:01:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:34.980 07:01:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:05:34.980 07:01:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:05:34.980 true 00:05:34.980 07:01:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1017017 00:05:34.980 07:01:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:36.357 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:36.357 07:01:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:36.357 07:01:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:05:36.357 07:01:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:05:36.614 true 00:05:36.614 07:01:41 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1017017 00:05:36.614 07:01:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:36.872 07:01:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:37.130 07:01:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:05:37.130 07:01:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:05:37.130 true 00:05:37.130 07:01:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1017017 00:05:37.130 07:01:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:38.504 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:38.504 07:01:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:38.504 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:38.504 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:38.504 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:38.504 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:38.504 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 
00:05:38.504 07:01:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:05:38.504 07:01:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:05:38.762 true 00:05:38.762 07:01:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1017017 00:05:38.762 07:01:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:39.698 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:39.698 07:01:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:39.698 07:01:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:05:39.698 07:01:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:05:39.956 true 00:05:39.956 07:01:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1017017 00:05:39.956 07:01:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:40.214 07:01:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:40.214 07:01:44 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:05:40.214 07:01:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:05:40.473 true 00:05:40.473 07:01:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1017017 00:05:40.473 07:01:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:41.849 Initializing NVMe Controllers 00:05:41.849 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:05:41.849 Controller IO queue size 128, less than required. 00:05:41.849 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:05:41.849 Controller IO queue size 128, less than required. 00:05:41.849 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:05:41.849 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:05:41.849 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:05:41.849 Initialization complete. Launching workers. 
00:05:41.849 ========================================================
00:05:41.849 Latency(us)
00:05:41.849 Device Information : IOPS MiB/s Average min max
00:05:41.849 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1700.80 0.83 46134.09 1579.17 1034506.54
00:05:41.849 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 14884.20 7.27 8577.79 2275.57 381382.84
00:05:41.849 ========================================================
00:05:41.849 Total : 16585.00 8.10 12429.21 1579.17 1034506.54
00:05:41.849
00:05:41.849 07:01:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:41.849 07:01:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031
00:05:41.849 07:01:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031
00:05:42.107 true
00:05:42.107 07:01:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1017017
00:05:42.107 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (1017017) - No such process
00:05:42.107 07:01:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 1017017
00:05:42.107 07:01:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:42.366 07:01:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:42.366
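The log entries above repeat one pattern from ns_hotplug_stress.sh (lines @44-@50): while the I/O process is still alive (`kill -0`), the test removes namespace 1, re-adds the Delay0 bdev as a namespace, and grows NULL1 by one block each pass; when `kill -0` reports "No such process", the loop ends and the test moves on. A minimal sketch of that loop follows. It is illustrative, not the actual script: `rpc` is a stub standing in for `spdk/scripts/rpc.py`, `$$` stands in for the real I/O process PID, and the size bounds are made up so the sketch terminates quickly.

```shell
# Stub for spdk/scripts/rpc.py so the sketch runs without an SPDK target.
rpc() { echo "rpc $*" >/dev/null; }

perf_pid=$$        # assumption: in the real test this is the nvmf perf process
null_size=1000     # assumption: real test starts from its configured bdev size

# Loop while the I/O process is alive (kill -0 only probes, sends no signal).
while kill -0 "$perf_pid" 2>/dev/null && [ "$null_size" -lt 1005 ]; do
    rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1   # hot-remove ns 1
    rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 # hot-add it back
    null_size=$((null_size + 1))
    rpc bdev_null_resize NULL1 "$null_size"   # grow the null bdev each pass
done
echo "$null_size"
```

The `kill -0` probe is what produces the `true` lines in the log while the process lives, and the `line 44: kill: (1017017) - No such process` line once it exits.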
07:01:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:05:42.366 07:01:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:05:42.366 07:01:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:05:42.366 07:01:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:42.366 07:01:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:05:42.624 null0 00:05:42.624 07:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:42.624 07:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:42.624 07:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:05:42.883 null1 00:05:42.883 07:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:42.883 07:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:42.883 07:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:05:43.141 null2 00:05:43.141 07:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:43.141 07:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:43.141 07:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:05:43.141 null3 00:05:43.399 07:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:43.399 07:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:43.399 07:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:05:43.399 null4 00:05:43.399 07:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:43.399 07:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:43.399 07:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:05:43.657 null5 00:05:43.657 07:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:43.657 07:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:43.657 07:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:05:43.915 null6 00:05:43.915 07:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:43.915 07:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:43.915 07:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:05:43.915 null7 00:05:44.173 07:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:44.173 07:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:44.173 07:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:05:44.173 07:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:44.173 07:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:05:44.173 07:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:44.173 07:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:05:44.173 07:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:44.173 07:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:05:44.173 07:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:44.173 07:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:44.173 07:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:44.173 07:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:05:44.174 07:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:44.174 07:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:05:44.174 07:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:44.174 07:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:05:44.174 07:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:44.174 07:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:44.174 07:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:44.174 07:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:05:44.174 07:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:05:44.174 07:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:44.174 07:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:44.174 07:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:05:44.174 07:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:44.174 07:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:44.174 07:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:44.174 07:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:05:44.174 07:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:05:44.174 07:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:44.174 07:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:44.174 07:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:05:44.174 07:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:44.174 07:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:44.174 07:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:05:44.174 07:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:44.174 07:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:44.174 07:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:44.174 07:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:05:44.174 07:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:05:44.174 07:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:05:44.174 07:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:44.174 07:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:44.174 07:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:44.174 07:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:44.174 07:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:05:44.174 07:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:44.174 07:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:05:44.174 07:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:44.174 
07:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:05:44.174 07:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:44.174 07:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:44.174 07:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:44.174 07:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:44.174 07:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:05:44.174 07:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:05:44.174 07:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:44.174 07:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:05:44.174 07:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:44.174 07:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:44.174 07:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:44.174 07:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:44.174 07:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:05:44.174 07:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 1022634 1022635 1022637 1022639 1022641 1022643 1022644 1022647 00:05:44.174 07:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:05:44.174 07:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:44.174 07:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:44.174 07:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:44.174 07:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:44.174 07:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:44.174 
07:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:44.174 07:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:44.174 07:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:44.433 07:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:44.433 07:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:44.433 07:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:44.433 07:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:44.433 07:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:44.433 07:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:44.433 07:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:44.433 07:01:48 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:44.433 07:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:44.433 07:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:44.433 07:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:44.433 07:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:44.433 07:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:44.433 07:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:44.433 07:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:44.433 07:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:44.433 07:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:44.433 07:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:44.433 07:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:44.433 07:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( i < 10 )) 00:05:44.433 07:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:44.433 07:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:44.433 07:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:44.433 07:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:44.433 07:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:44.433 07:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:44.433 07:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:44.692 07:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:44.692 07:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:44.692 07:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:44.692 07:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:44.692 07:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:44.692 07:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:44.692 07:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:44.692 07:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:44.951 07:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:44.951 07:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:44.951 07:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:44.951 07:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:44.951 07:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:44.951 07:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 
00:05:44.951 07:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:44.951 07:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:44.951 07:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:44.951 07:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:44.952 07:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:44.952 07:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:44.952 07:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:44.952 07:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:44.952 07:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:44.952 07:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:44.952 07:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:44.952 07:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:44.952 07:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:44.952 07:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:44.952 07:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:44.952 07:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:44.952 07:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:44.952 07:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:45.211 07:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:45.211 07:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:45.211 07:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:45.211 07:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:45.211 07:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 2 00:05:45.211 07:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:45.211 07:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:45.211 07:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:45.211 07:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:45.211 07:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:45.211 07:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:45.211 07:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:45.211 07:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:45.211 07:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:45.211 07:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:45.211 07:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:45.211 07:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:45.211 07:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:45.211 07:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:45.470 07:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:45.470 07:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:45.470 07:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:45.470 07:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:45.470 07:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:45.470 07:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:45.470 07:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:45.470 07:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:45.470 07:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:45.470 07:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:45.470 07:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:45.470 07:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:45.470 07:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:45.470 07:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:45.470 07:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:45.470 07:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:45.470 07:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:45.470 07:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:45.470 07:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:45.470 07:01:49 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:45.470 07:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:45.728 07:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:45.728 07:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:45.728 07:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:45.728 07:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:45.728 07:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:45.728 07:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:45.728 07:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:45.729 07:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:45.729 07:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:45.729 07:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
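Within each burst the nsid order varies (2, 8, 4, ... in one pass; 3, 7, 8, ... in the next), and `(( ++i ))` or `(( i < 10 ))` trace lines sometimes appear back to back. That pattern is consistent with the per-namespace RPCs being dispatched as background jobs and reaped with `wait`, so their xtrace output interleaves nondeterministically. The job-control structure below is an inference from the log, not lifted from the actual script, and `rpc` is again a stub:

```shell
#!/usr/bin/env bash
# Hypothetical sketch: issue the eight namespace RPCs concurrently, matching
# the nondeterministic nsid ordering visible in the log.
NQN="nqn.2016-06.io.spdk:cnode1"
outdir=$(mktemp -d)

# Stub for scripts/rpc.py: record one file per nsid so completions are countable.
rpc() { echo "$*" > "$outdir/$2"; }

for n in {1..8}; do
    rpc nvmf_subsystem_add_ns "$n" "$NQN" &   # one background job per namespace;
done                                          # completion order is unspecified
wait                                          # reap all eight before the next phase

count=$(ls "$outdir" | wc -l)
echo "$count"                                 # 8: every RPC completed exactly once
```

Running the RPCs concurrently is what makes this a hot-plug *stress* pattern: the target must handle add and remove requests arriving in any order rather than one tidy sequence.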
00:05:45.729 07:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:45.729 07:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:45.729 07:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:45.729 07:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:45.729 07:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:45.729 07:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:45.729 07:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:45.729 07:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:45.729 07:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:45.729 07:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:45.729 07:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:45.729 07:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:45.729 07:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:45.729 07:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:45.988 07:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:45.988 07:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:45.988 07:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:45.988 07:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:45.988 07:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:45.988 07:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:45.988 07:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:45.988 07:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:46.247 07:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:46.247 07:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:46.247 07:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:46.247 07:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:46.247 07:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:46.247 07:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:46.247 07:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:46.247 07:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:46.247 07:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:46.247 07:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:46.247 07:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:46.247 07:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:46.247 07:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:46.247 07:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:46.247 07:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:46.247 07:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:46.247 07:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:46.247 07:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:46.247 07:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:46.247 07:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:46.247 07:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:46.247 07:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:46.247 07:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:46.247 07:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:46.247 07:01:50 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:46.506 07:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:46.506 07:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:46.506 07:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:46.506 07:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:46.506 07:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:46.506 07:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:46.506 07:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:46.506 07:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:46.506 07:01:51 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:46.506 07:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:46.506 07:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:46.506 07:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:46.506 07:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:46.506 07:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:46.506 07:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:46.506 07:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:46.506 07:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:46.506 07:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:46.506 07:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:46.506 07:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:46.506 07:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( i < 10 )) 00:05:46.506 07:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:46.506 07:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:46.506 07:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:46.506 07:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:46.506 07:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:46.506 07:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:46.506 07:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:46.506 07:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:46.506 07:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:46.506 07:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:46.767 07:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:46.767 07:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:46.767 07:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:46.767 07:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:46.767 07:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:46.767 07:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:46.767 07:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:46.767 07:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:47.026 07:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:47.026 07:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:47.026 07:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 
nqn.2016-06.io.spdk:cnode1 null0 00:05:47.026 07:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:47.026 07:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:47.027 07:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:47.027 07:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:47.027 07:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:47.027 07:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:47.027 07:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:47.027 07:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:47.027 07:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:47.027 07:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:47.027 07:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:47.027 07:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:47.027 07:01:51 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:47.027 07:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:47.027 07:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:47.027 07:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:47.027 07:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:47.027 07:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:47.027 07:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:47.027 07:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:47.027 07:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:47.285 07:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:47.285 07:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:47.285 07:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:47.285 07:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:47.285 07:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:47.285 07:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:47.285 07:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:47.285 07:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:47.544 07:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:47.544 07:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:47.544 07:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:47.544 07:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:47.544 07:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:47.544 
07:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:47.544 07:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:47.544 07:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:47.544 07:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:47.544 07:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:47.544 07:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:47.544 07:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:47.544 07:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:47.544 07:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:47.544 07:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:47.544 07:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:47.544 07:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:47.544 07:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:47.544 07:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:47.544 07:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:47.544 07:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:47.544 07:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:47.544 07:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:47.544 07:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:47.544 07:01:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:47.544 07:01:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:47.544 07:01:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:47.544 07:01:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:47.544 07:01:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:47.544 07:01:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:47.544 07:01:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:47.803 07:01:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:47.803 07:01:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:47.803 07:01:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:47.803 07:01:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:47.803 07:01:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:47.803 07:01:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:47.803 07:01:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:47.803 07:01:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:47.803 07:01:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:47.803 07:01:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:47.803 07:01:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:47.803 07:01:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:47.803 07:01:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:47.803 07:01:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:47.803 07:01:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:47.803 07:01:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:47.803 07:01:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:47.803 07:01:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:47.803 07:01:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:47.803 07:01:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:47.803 07:01:52 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:47.803 07:01:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:47.803 07:01:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:47.803 07:01:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:47.803 07:01:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:48.062 07:01:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:48.062 07:01:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:48.062 07:01:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:48.062 07:01:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:48.062 07:01:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:48.062 07:01:52 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:48.062 07:01:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:48.062 07:01:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:48.321 07:01:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:48.321 07:01:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:48.321 07:01:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:48.321 07:01:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:48.321 07:01:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:48.321 07:01:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:48.321 07:01:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:48.321 07:01:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:48.321 07:01:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:48.321 07:01:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:48.321 07:01:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:05:48.321 07:01:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:48.321 07:01:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:48.321 07:01:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:48.321 07:01:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:48.321 07:01:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:48.321 07:01:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:05:48.321 07:01:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:05:48.321 07:01:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:05:48.321 07:01:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:05:48.321 07:01:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:05:48.321 07:01:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:05:48.321 07:01:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:05:48.321 07:01:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:05:48.321 rmmod nvme_tcp 00:05:48.322 rmmod nvme_fabrics 00:05:48.322 rmmod nvme_keyring 00:05:48.322 07:01:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:05:48.322 07:01:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:05:48.322 07:01:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:05:48.322 07:01:52 
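The add/remove churn traced above is the namespace hotplug loop in target/ns_hotplug_stress.sh (the sh@16-18 markers). A minimal self-contained sketch of one iteration, with scripts/rpc.py replaced by a stub so it runs anywhere (the `rpc` stub is an assumption standing in for the real JSON-RPC client; the NQN is the one from the log):

```shell
# Stub for scripts/rpc.py; the real client sends JSON-RPC requests to
# the SPDK target. Here we just echo the request that would be sent.
rpc() { echo "rpc $*"; }

NQN="nqn.2016-06.io.spdk:cnode1"

# One iteration of the hotplug stress loop (ns_hotplug_stress.sh@16-18):
# attach bdevs null0..null7 as NSIDs 1..8, then detach all eight.
hotplug_iteration() {
  local i=0 nsid
  while (( i < 8 )); do
    rpc nvmf_subsystem_add_ns -n "$(( i + 1 ))" "$NQN" "null$i"
    (( ++i ))
  done
  for nsid in 1 2 3 4 5 6 7 8; do
    rpc nvmf_subsystem_remove_ns "$NQN" "$nsid"
  done
}

hotplug_iteration
```

In the real test the add/remove calls run in background subshells, which is why the traced NSIDs interleave out of order above.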
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 1016534 ']' 00:05:48.322 07:01:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 1016534 00:05:48.322 07:01:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # '[' -z 1016534 ']' 00:05:48.322 07:01:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # kill -0 1016534 00:05:48.322 07:01:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@957 -- # uname 00:05:48.322 07:01:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:48.322 07:01:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1016534 00:05:48.322 07:01:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:05:48.322 07:01:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:05:48.322 07:01:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1016534' 00:05:48.322 killing process with pid 1016534 00:05:48.322 07:01:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@971 -- # kill 1016534 00:05:48.322 07:01:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@976 -- # wait 1016534 00:05:48.581 07:01:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:05:48.581 07:01:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:05:48.581 07:01:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:05:48.581 07:01:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 
-- # iptr 00:05:48.581 07:01:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:05:48.581 07:01:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:05:48.581 07:01:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:05:48.581 07:01:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:05:48.581 07:01:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:05:48.581 07:01:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:48.581 07:01:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:48.581 07:01:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:51.115 07:01:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:05:51.115 00:05:51.115 real 0m48.499s 00:05:51.115 user 3m18.983s 00:05:51.115 sys 0m16.329s 00:05:51.115 07:01:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:51.115 07:01:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:51.115 ************************************ 00:05:51.115 END TEST nvmf_ns_hotplug_stress 00:05:51.115 ************************************ 00:05:51.115 07:01:55 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:05:51.115 07:01:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:05:51.115 07:01:55 
nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:51.115 07:01:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:05:51.115 ************************************ 00:05:51.115 START TEST nvmf_delete_subsystem 00:05:51.115 ************************************ 00:05:51.115 07:01:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:05:51.115 * Looking for test storage... 00:05:51.115 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:51.115 07:01:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:51.115 07:01:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lcov --version 00:05:51.115 07:01:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:51.115 07:01:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:51.115 07:01:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:51.115 07:01:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:51.115 07:01:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:51.115 07:01:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:05:51.115 07:01:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:05:51.115 07:01:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:05:51.115 07:01:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:05:51.115 07:01:55 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:05:51.116 07:01:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:05:51.116 07:01:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:05:51.116 07:01:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:51.116 07:01:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:05:51.116 07:01:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:05:51.116 07:01:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:51.116 07:01:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:51.116 07:01:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:05:51.116 07:01:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:05:51.116 07:01:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:51.116 07:01:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:05:51.116 07:01:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:05:51.116 07:01:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:05:51.116 07:01:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:05:51.116 07:01:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:51.116 07:01:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:05:51.116 07:01:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
scripts/common.sh@366 -- # ver2[v]=2 00:05:51.116 07:01:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:51.116 07:01:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:51.116 07:01:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:05:51.116 07:01:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:51.116 07:01:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:51.116 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.116 --rc genhtml_branch_coverage=1 00:05:51.116 --rc genhtml_function_coverage=1 00:05:51.116 --rc genhtml_legend=1 00:05:51.116 --rc geninfo_all_blocks=1 00:05:51.116 --rc geninfo_unexecuted_blocks=1 00:05:51.116 00:05:51.116 ' 00:05:51.116 07:01:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:51.116 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.116 --rc genhtml_branch_coverage=1 00:05:51.116 --rc genhtml_function_coverage=1 00:05:51.116 --rc genhtml_legend=1 00:05:51.116 --rc geninfo_all_blocks=1 00:05:51.116 --rc geninfo_unexecuted_blocks=1 00:05:51.116 00:05:51.116 ' 00:05:51.116 07:01:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:51.116 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.116 --rc genhtml_branch_coverage=1 00:05:51.116 --rc genhtml_function_coverage=1 00:05:51.116 --rc genhtml_legend=1 00:05:51.116 --rc geninfo_all_blocks=1 00:05:51.116 --rc geninfo_unexecuted_blocks=1 00:05:51.116 00:05:51.116 ' 00:05:51.116 07:01:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # LCOV='lcov 
00:05:51.116 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.116 --rc genhtml_branch_coverage=1 00:05:51.116 --rc genhtml_function_coverage=1 00:05:51.116 --rc genhtml_legend=1 00:05:51.116 --rc geninfo_all_blocks=1 00:05:51.116 --rc geninfo_unexecuted_blocks=1 00:05:51.116 00:05:51.116 ' 00:05:51.116 07:01:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:51.116 07:01:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:05:51.116 07:01:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:51.116 07:01:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:51.116 07:01:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:51.116 07:01:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:51.116 07:01:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:51.116 07:01:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:51.116 07:01:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:51.116 07:01:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:51.116 07:01:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:51.116 07:01:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:51.116 07:01:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:05:51.116 07:01:55 
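The `lt 1.15 2` trace above walks scripts/common.sh's cmp_versions: both versions are split on `IFS=.-:` into arrays and compared component-wise. A simplified standalone reconstruction of just the less-than path (the padding-with-zeros detail and the `10#` base prefix are defensive assumptions, not verified against the full cmp_versions):

```shell
# Component-wise "less than" version compare, after scripts/common.sh's
# cmp_versions: split on '.', '-' or ':' and compare fields numerically,
# treating missing trailing fields as 0.
lt() {
  local IFS=.-:
  local -a ver1 ver2
  read -ra ver1 <<< "$1"
  read -ra ver2 <<< "$2"
  local len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
  local v a b
  for (( v = 0; v < len; v++ )); do
    a=${ver1[v]:-0} b=${ver2[v]:-0}
    # Force base 10 so fields like "08" are not parsed as octal.
    (( 10#$a > 10#$b )) && return 1
    (( 10#$a < 10#$b )) && return 0
  done
  return 1   # equal is not "less than"
}
```

Here the gate passes (`lcov` 1.15 is older than 2), which selects the `--rc lcov_*` option spelling seen in the exported LCOV_OPTS above.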
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:05:51.116 07:01:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:51.116 07:01:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:51.116 07:01:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:51.116 07:01:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:51.116 07:01:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:51.116 07:01:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:05:51.116 07:01:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:51.116 07:01:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:51.116 07:01:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:51.116 07:01:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:51.116 07:01:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:51.116 07:01:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:51.116 07:01:55 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:05:51.116 07:01:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:51.116 07:01:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:05:51.116 07:01:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:51.116 07:01:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:51.116 07:01:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:51.116 07:01:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:51.116 07:01:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:51.116 07:01:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:51.116 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:51.116 07:01:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:51.116 07:01:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:51.116 07:01:55 
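The `[: : integer expression expected` message above comes from nvmf/common.sh line 33 running an integer test against an empty variable (`'[' '' -eq 1 ']'`): the test errors out with a non-zero status, the branch is skipped, and the script carries on. A sketch of the failure mode and the usual defaulting guard (the variable name `flag` is illustrative, not the one in common.sh):

```shell
# An empty string in an integer test is a runtime error, not false:
# [ "" -eq 1 ] prints "integer expression expected" and returns
# non-zero, so an if/then simply takes the else branch.
flag=""

if [ "$flag" -eq 1 ] 2>/dev/null; then
  echo "enabled"
else
  echo "disabled or unset"
fi

# Defaulting the expansion avoids the error message entirely:
if [ "${flag:-0}" -eq 1 ]; then
  echo "enabled"
else
  echo "disabled"
fi
```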
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:51.116 07:01:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:05:51.116 07:01:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:05:51.116 07:01:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:51.116 07:01:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:05:51.116 07:01:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:05:51.116 07:01:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:05:51.117 07:01:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:51.117 07:01:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:51.117 07:01:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:51.117 07:01:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:05:51.117 07:01:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:05:51.117 07:01:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:05:51.117 07:01:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:57.685 07:02:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:57.685 07:02:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:05:57.685 07:02:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@315 -- # local -a pci_devs 00:05:57.685 07:02:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:05:57.685 07:02:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:05:57.685 07:02:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:05:57.685 07:02:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:05:57.685 07:02:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:05:57.685 07:02:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:05:57.685 07:02:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:05:57.685 07:02:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:05:57.685 07:02:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:05:57.685 07:02:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:05:57.685 07:02:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:05:57.685 07:02:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:05:57.685 07:02:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:57.685 07:02:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:57.685 07:02:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:57.685 07:02:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:57.685 07:02:01 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:57.685 07:02:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:57.685 07:02:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:57.685 07:02:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:05:57.685 07:02:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:57.685 07:02:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:57.685 07:02:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:57.685 07:02:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:57.685 07:02:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:05:57.685 07:02:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:05:57.685 07:02:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:05:57.685 07:02:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:05:57.685 07:02:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:05:57.685 07:02:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:05:57.685 07:02:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:57.685 07:02:01 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:05:57.686 Found 0000:86:00.0 (0x8086 - 0x159b) 00:05:57.686 07:02:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:57.686 07:02:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:57.686 07:02:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:57.686 07:02:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:57.686 07:02:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:57.686 07:02:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:57.686 07:02:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:05:57.686 Found 0000:86:00.1 (0x8086 - 0x159b) 00:05:57.686 07:02:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:57.686 07:02:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:57.686 07:02:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:57.686 07:02:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:57.686 07:02:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:57.686 07:02:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:05:57.686 07:02:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:05:57.686 07:02:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:05:57.686 07:02:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:57.686 07:02:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:57.686 07:02:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:57.686 07:02:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:57.686 07:02:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:57.686 07:02:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:57.686 07:02:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:57.686 07:02:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:05:57.686 Found net devices under 0000:86:00.0: cvl_0_0 00:05:57.686 07:02:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:57.686 07:02:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:57.686 07:02:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:57.686 07:02:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:57.686 07:02:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:57.686 07:02:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:57.686 07:02:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 
00:05:57.686 07:02:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:57.686 07:02:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:05:57.686 Found net devices under 0000:86:00.1: cvl_0_1 00:05:57.686 07:02:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:57.686 07:02:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:05:57.686 07:02:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:05:57.686 07:02:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:05:57.686 07:02:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:05:57.686 07:02:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:05:57.686 07:02:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:05:57.686 07:02:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:57.686 07:02:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:57.686 07:02:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:57.686 07:02:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:05:57.686 07:02:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:57.686 07:02:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:57.686 07:02:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:05:57.686 07:02:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:05:57.686 07:02:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:05:57.686 07:02:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:05:57.686 07:02:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:05:57.686 07:02:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:05:57.686 07:02:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:05:57.686 07:02:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:05:57.686 07:02:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:05:57.686 07:02:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:05:57.686 07:02:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:05:57.686 07:02:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:05:57.686 07:02:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:05:57.686 07:02:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:05:57.686 07:02:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m 
comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:05:57.686 07:02:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:05:57.686 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:05:57.686 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.378 ms 00:05:57.686 00:05:57.686 --- 10.0.0.2 ping statistics --- 00:05:57.686 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:57.686 rtt min/avg/max/mdev = 0.378/0.378/0.378/0.000 ms 00:05:57.686 07:02:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:05:57.686 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:05:57.686 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.199 ms 00:05:57.686 00:05:57.686 --- 10.0.0.1 ping statistics --- 00:05:57.686 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:57.686 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:05:57.686 07:02:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:05:57.686 07:02:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:05:57.686 07:02:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:05:57.686 07:02:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:05:57.686 07:02:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:05:57.686 07:02:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:05:57.686 07:02:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:05:57.686 07:02:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:05:57.686 07:02:01 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:05:57.686 07:02:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:05:57.686 07:02:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:05:57.686 07:02:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:57.686 07:02:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:57.686 07:02:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=1027052 00:05:57.686 07:02:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:05:57.686 07:02:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 1027052 00:05:57.686 07:02:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@833 -- # '[' -z 1027052 ']' 00:05:57.686 07:02:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:57.686 07:02:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:57.686 07:02:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:57.686 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:57.686 07:02:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:57.686 07:02:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:57.686 [2024-11-20 07:02:01.381187] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 00:05:57.687 [2024-11-20 07:02:01.381232] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:57.687 [2024-11-20 07:02:01.457973] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:57.687 [2024-11-20 07:02:01.501187] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:05:57.687 [2024-11-20 07:02:01.501223] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:57.687 [2024-11-20 07:02:01.501230] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:57.687 [2024-11-20 07:02:01.501236] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:57.687 [2024-11-20 07:02:01.501242] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:05:57.687 [2024-11-20 07:02:01.502401] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:57.687 [2024-11-20 07:02:01.502404] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.687 07:02:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:57.687 07:02:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@866 -- # return 0 00:05:57.687 07:02:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:05:57.687 07:02:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:57.687 07:02:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:57.687 07:02:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:57.687 07:02:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:05:57.687 07:02:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:57.687 07:02:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:57.687 [2024-11-20 07:02:01.640487] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:57.687 07:02:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:57.687 07:02:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:05:57.687 07:02:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:57.687 07:02:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem 
-- common/autotest_common.sh@10 -- # set +x 00:05:57.687 07:02:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:57.687 07:02:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:05:57.687 07:02:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:57.687 07:02:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:57.687 [2024-11-20 07:02:01.660695] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:57.687 07:02:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:57.687 07:02:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:05:57.687 07:02:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:57.687 07:02:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:57.687 NULL1 00:05:57.687 07:02:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:57.687 07:02:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:05:57.687 07:02:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:57.687 07:02:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:57.687 Delay0 00:05:57.687 07:02:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:57.687 07:02:01 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:57.687 07:02:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:57.687 07:02:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:57.687 07:02:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:57.687 07:02:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=1027282 00:05:57.687 07:02:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:05:57.687 07:02:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:05:57.687 [2024-11-20 07:02:01.771644] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:05:59.614 07:02:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:05:59.614 07:02:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:59.614 07:02:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:59.614 Read completed with error (sct=0, sc=8) 00:05:59.614 Read completed with error (sct=0, sc=8) 00:05:59.614 starting I/O failed: -6 00:05:59.614 Read completed with error (sct=0, sc=8) 00:05:59.614 Read completed with error (sct=0, sc=8) 00:05:59.614 Read completed with error (sct=0, sc=8) 00:05:59.614 Read completed with error (sct=0, sc=8) 00:05:59.614 starting I/O failed: -6 00:05:59.614 Write completed with error (sct=0, sc=8) 00:05:59.614 Write completed with error (sct=0, sc=8) 00:05:59.614 Write completed with error (sct=0, sc=8) 00:05:59.614 Write completed with error (sct=0, sc=8) 00:05:59.614 starting I/O failed: -6 00:05:59.614 Write completed with error (sct=0, sc=8) 00:05:59.614 Read completed with error (sct=0, sc=8) 00:05:59.614 Read completed with error (sct=0, sc=8) 00:05:59.614 Write completed with error (sct=0, sc=8) 00:05:59.614 starting I/O failed: -6 00:05:59.614 Read completed with error (sct=0, sc=8) 00:05:59.614 Read completed with error (sct=0, sc=8) 00:05:59.614 Read completed with error (sct=0, sc=8) 00:05:59.614 Read completed with error (sct=0, sc=8) 00:05:59.614 starting I/O failed: -6 00:05:59.614 Write completed with error (sct=0, sc=8) 00:05:59.614 Write completed with error (sct=0, sc=8) 00:05:59.614 Read completed with error (sct=0, sc=8) 00:05:59.614 Read completed with error (sct=0, sc=8) 00:05:59.614 starting I/O failed: -6 00:05:59.614 Write completed with error (sct=0, sc=8) 00:05:59.614 Write completed with error (sct=0, sc=8) 00:05:59.614 Read completed with error (sct=0, sc=8) 00:05:59.614 Write completed with error 
(sct=0, sc=8) 00:05:59.614 starting I/O failed: -6 00:05:59.614 Read completed with error (sct=0, sc=8) 00:05:59.614 Write completed with error (sct=0, sc=8) 00:05:59.614 Write completed with error (sct=0, sc=8) 00:05:59.614 Read completed with error (sct=0, sc=8) 00:05:59.614 starting I/O failed: -6 00:05:59.614 Read completed with error (sct=0, sc=8) 00:05:59.614 Read completed with error (sct=0, sc=8) 00:05:59.614 Write completed with error (sct=0, sc=8) 00:05:59.614 Write completed with error (sct=0, sc=8) 00:05:59.614 starting I/O failed: -6 00:05:59.614 Read completed with error (sct=0, sc=8) 00:05:59.614 Read completed with error (sct=0, sc=8) 00:05:59.614 Read completed with error (sct=0, sc=8) 00:05:59.614 Read completed with error (sct=0, sc=8) 00:05:59.614 starting I/O failed: -6 00:05:59.614 Read completed with error (sct=0, sc=8) 00:05:59.614 Read completed with error (sct=0, sc=8) 00:05:59.614 Write completed with error (sct=0, sc=8) 00:05:59.614 Write completed with error (sct=0, sc=8) 00:05:59.614 starting I/O failed: -6 00:05:59.614 Read completed with error (sct=0, sc=8) 00:05:59.614 [2024-11-20 07:02:03.980725] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7efe1c000c40 is same with the state(6) to be set 00:05:59.614 Write completed with error (sct=0, sc=8) 00:05:59.614 Read completed with error (sct=0, sc=8) 00:05:59.614 Read completed with error (sct=0, sc=8) 00:05:59.614 Write completed with error (sct=0, sc=8) 00:05:59.614 starting I/O failed: -6 00:05:59.614 Write completed with error (sct=0, sc=8) 00:05:59.614 Read completed with error (sct=0, sc=8) 00:05:59.614 Read completed with error (sct=0, sc=8) 00:05:59.614 Read completed with error (sct=0, sc=8) 00:05:59.614 starting I/O failed: -6 00:05:59.614 Read completed with error (sct=0, sc=8) 00:05:59.614 Read completed with error (sct=0, sc=8) 00:05:59.614 Read completed with error (sct=0, sc=8) 00:05:59.614 Read completed with error (sct=0, sc=8) 
00:05:59.614 starting I/O failed: -6 00:05:59.614 Read completed with error (sct=0, sc=8) 00:05:59.614 Write completed with error (sct=0, sc=8) 00:05:59.614 Read completed with error (sct=0, sc=8) 00:05:59.614 Read completed with error (sct=0, sc=8) 00:05:59.614 starting I/O failed: -6 00:05:59.614 Write completed with error (sct=0, sc=8) 00:05:59.614 Write completed with error (sct=0, sc=8) 00:05:59.614 Read completed with error (sct=0, sc=8) 00:05:59.614 Write completed with error (sct=0, sc=8) 00:05:59.614 starting I/O failed: -6 00:05:59.614 Read completed with error (sct=0, sc=8) 00:05:59.614 Read completed with error (sct=0, sc=8) 00:05:59.614 Write completed with error (sct=0, sc=8) 00:05:59.614 Read completed with error (sct=0, sc=8) 00:05:59.614 starting I/O failed: -6 00:05:59.614 Read completed with error (sct=0, sc=8) 00:05:59.614 Read completed with error (sct=0, sc=8) 00:05:59.614 Read completed with error (sct=0, sc=8) 00:05:59.614 Read completed with error (sct=0, sc=8) 00:05:59.614 starting I/O failed: -6 00:05:59.614 Write completed with error (sct=0, sc=8) 00:05:59.614 Read completed with error (sct=0, sc=8) 00:05:59.614 Read completed with error (sct=0, sc=8) 00:05:59.614 Read completed with error (sct=0, sc=8) 00:05:59.614 starting I/O failed: -6 00:05:59.614 Read completed with error (sct=0, sc=8) 00:05:59.614 Read completed with error (sct=0, sc=8) 00:05:59.614 Read completed with error (sct=0, sc=8) 00:05:59.614 Read completed with error (sct=0, sc=8) 00:05:59.614 starting I/O failed: -6 00:05:59.614 Read completed with error (sct=0, sc=8) 00:05:59.614 Read completed with error (sct=0, sc=8) 00:05:59.614 Read completed with error (sct=0, sc=8) 00:05:59.614 [2024-11-20 07:02:03.981096] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d860 is same with the state(6) to be set 00:05:59.614 Read completed with error (sct=0, sc=8) 00:05:59.614 Write completed with error (sct=0, sc=8) 00:05:59.614 Write 
completed with error (sct=0, sc=8)
00:05:59.614 Read completed with error (sct=0, sc=8)
00:05:59.614 Write completed with error (sct=0, sc=8)
[... repeated "Read/Write completed with error (sct=0, sc=8)" entries omitted ...]
00:06:00.550 [2024-11-20 07:02:04.949814] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e9a0 is same with the state(6) to be set
[... repeated "Read/Write completed with error (sct=0, sc=8)" entries omitted ...]
00:06:00.550 [2024-11-20 07:02:04.982234] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220d2c0 is same with the state(6) to be set
[... repeated "Read/Write completed with error (sct=0, sc=8)" entries omitted ...]
00:06:00.550 [2024-11-20 07:02:04.982739] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7efe1c00d020 is same with the state(6) to be set
[... repeated "Read/Write completed with error (sct=0, sc=8)" entries omitted ...]
00:06:00.551 [2024-11-20 07:02:04.982893] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7efe1c00d7e0 is same with the state(6) to be set
[... repeated "Read/Write completed with error (sct=0, sc=8)" entries omitted ...]
00:06:00.551 [2024-11-20 07:02:04.983343] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7efe1c00d350 is same with the state(6) to be set
00:06:00.551 Initializing NVMe Controllers
00:06:00.551 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:06:00.551 Controller IO queue size 128, less than required.
00:06:00.551 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:06:00.551 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:06:00.551 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:06:00.551 Initialization complete. Launching workers.
00:06:00.551 ========================================================
00:06:00.551 Latency(us)
00:06:00.551 Device Information : IOPS MiB/s Average min max
00:06:00.551 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 155.99 0.08 886911.51 256.14 2000943.66
00:06:00.551 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 164.44 0.08 1088524.78 1184.12 2002064.43
00:06:00.551 ========================================================
00:06:00.551 Total : 320.43 0.16 990375.06 256.14 2002064.43
00:06:00.551
00:06:00.551 [2024-11-20 07:02:04.983832] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e9a0 (9): Bad file descriptor
00:06:00.551 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:06:00.551 07:02:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:00.551 07:02:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:06:00.551 07:02:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1027282
00:06:00.551 07:02:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:06:01.119 07:02:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:06:01.119 07:02:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1027282
00:06:01.119 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (1027282) - No such process
00:06:01.119 07:02:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 1027282
00:06:01.119 07:02:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0
00:06:01.119 07:02:05
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 1027282 00:06:01.119 07:02:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait 00:06:01.119 07:02:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:01.119 07:02:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait 00:06:01.119 07:02:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:01.119 07:02:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 1027282 00:06:01.119 07:02:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1 00:06:01.120 07:02:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:01.120 07:02:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:01.120 07:02:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:01.120 07:02:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:01.120 07:02:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:01.120 07:02:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:01.120 07:02:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:01.120 07:02:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:01.120 
07:02:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:01.120 07:02:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:01.120 [2024-11-20 07:02:05.511119] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:01.120 07:02:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:01.120 07:02:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:01.120 07:02:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:01.120 07:02:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:01.120 07:02:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:01.120 07:02:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=1027778 00:06:01.120 07:02:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:06:01.120 07:02:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:06:01.120 07:02:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1027778 00:06:01.120 07:02:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:01.120 [2024-11-20 07:02:05.594884] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to 
the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:06:01.690 07:02:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:01.690 07:02:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1027778 00:06:01.690 07:02:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:02.256 07:02:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:02.256 07:02:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1027778 00:06:02.256 07:02:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:02.515 07:02:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:02.515 07:02:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1027778 00:06:02.515 07:02:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:03.081 07:02:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:03.081 07:02:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1027778 00:06:03.081 07:02:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:03.647 07:02:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:03.647 07:02:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1027778 00:06:03.647 07:02:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:04.214 07:02:08 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:06:04.214 07:02:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1027778
00:06:04.214 07:02:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:06:04.214 Initializing NVMe Controllers
00:06:04.214 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:06:04.214 Controller IO queue size 128, less than required.
00:06:04.214 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:06:04.214 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:06:04.214 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:06:04.214 Initialization complete. Launching workers.
00:06:04.214 ========================================================
00:06:04.214 Latency(us)
00:06:04.214 Device Information : IOPS MiB/s Average min max
00:06:04.214 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002016.26 1000147.34 1005508.14
00:06:04.214 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1003733.07 1000166.65 1009759.19
00:06:04.214 ========================================================
00:06:04.214 Total : 256.00 0.12 1002874.66 1000147.34 1009759.19
00:06:04.214
00:06:04.788 07:02:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:06:04.788 07:02:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1027778
00:06:04.788 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (1027778) - No such process
00:06:04.788 07:02:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- #
wait 1027778 00:06:04.788 07:02:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:04.788 07:02:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:06:04.788 07:02:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:04.788 07:02:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:06:04.788 07:02:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:04.788 07:02:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:06:04.788 07:02:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:04.788 07:02:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:04.788 rmmod nvme_tcp 00:06:04.788 rmmod nvme_fabrics 00:06:04.788 rmmod nvme_keyring 00:06:04.788 07:02:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:04.788 07:02:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:06:04.788 07:02:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:06:04.788 07:02:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 1027052 ']' 00:06:04.788 07:02:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 1027052 00:06:04.788 07:02:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # '[' -z 1027052 ']' 00:06:04.788 07:02:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # kill -0 1027052 00:06:04.788 07:02:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@957 -- # uname 00:06:04.788 07:02:09 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:04.788 07:02:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1027052 00:06:04.788 07:02:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:04.788 07:02:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:04.788 07:02:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1027052' 00:06:04.788 killing process with pid 1027052 00:06:04.788 07:02:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@971 -- # kill 1027052 00:06:04.788 07:02:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@976 -- # wait 1027052 00:06:04.788 07:02:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:04.788 07:02:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:04.788 07:02:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:04.788 07:02:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:06:04.788 07:02:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:06:04.788 07:02:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:04.788 07:02:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:06:05.049 07:02:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:05.049 07:02:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:06:05.049 07:02:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:05.049 07:02:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:05.049 07:02:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:06.954 07:02:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:06.954 00:06:06.954 real 0m16.259s 00:06:06.954 user 0m29.350s 00:06:06.954 sys 0m5.531s 00:06:06.954 07:02:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:06.954 07:02:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:06.954 ************************************ 00:06:06.954 END TEST nvmf_delete_subsystem 00:06:06.954 ************************************ 00:06:06.954 07:02:11 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:06:06.954 07:02:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:06:06.954 07:02:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:06.954 07:02:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:06.954 ************************************ 00:06:06.954 START TEST nvmf_host_management 00:06:06.954 ************************************ 00:06:06.954 07:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:06:07.213 * Looking for test storage... 
00:06:07.213 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:07.213 07:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:07.213 07:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1691 -- # lcov --version 00:06:07.213 07:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:07.213 07:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:07.213 07:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:07.213 07:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:07.213 07:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:07.213 07:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:06:07.213 07:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:06:07.213 07:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:06:07.214 07:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:06:07.214 07:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:06:07.214 07:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:06:07.214 07:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:06:07.214 07:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:07.214 07:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:06:07.214 07:02:11 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:06:07.214 07:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:07.214 07:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:07.214 07:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:06:07.214 07:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:06:07.214 07:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:07.214 07:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:06:07.214 07:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:06:07.214 07:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:06:07.214 07:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:06:07.214 07:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:07.214 07:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:06:07.214 07:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:06:07.214 07:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:07.214 07:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:07.214 07:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:06:07.214 07:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:07.214 07:02:11 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:07.214 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:07.214 --rc genhtml_branch_coverage=1 00:06:07.214 --rc genhtml_function_coverage=1 00:06:07.214 --rc genhtml_legend=1 00:06:07.214 --rc geninfo_all_blocks=1 00:06:07.214 --rc geninfo_unexecuted_blocks=1 00:06:07.214 00:06:07.214 ' 00:06:07.214 07:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:07.214 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:07.214 --rc genhtml_branch_coverage=1 00:06:07.214 --rc genhtml_function_coverage=1 00:06:07.214 --rc genhtml_legend=1 00:06:07.214 --rc geninfo_all_blocks=1 00:06:07.214 --rc geninfo_unexecuted_blocks=1 00:06:07.214 00:06:07.214 ' 00:06:07.214 07:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:07.214 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:07.214 --rc genhtml_branch_coverage=1 00:06:07.214 --rc genhtml_function_coverage=1 00:06:07.214 --rc genhtml_legend=1 00:06:07.214 --rc geninfo_all_blocks=1 00:06:07.214 --rc geninfo_unexecuted_blocks=1 00:06:07.214 00:06:07.214 ' 00:06:07.214 07:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:07.214 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:07.214 --rc genhtml_branch_coverage=1 00:06:07.214 --rc genhtml_function_coverage=1 00:06:07.214 --rc genhtml_legend=1 00:06:07.214 --rc geninfo_all_blocks=1 00:06:07.214 --rc geninfo_unexecuted_blocks=1 00:06:07.214 00:06:07.214 ' 00:06:07.214 07:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:07.214 07:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 
00:06:07.214 07:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:07.214 07:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:07.214 07:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:07.214 07:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:07.214 07:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:07.214 07:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:07.214 07:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:07.214 07:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:07.214 07:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:07.214 07:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:07.214 07:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:06:07.214 07:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:06:07.214 07:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:07.214 07:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:07.214 07:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:07.214 07:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:07.214 07:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:07.214 07:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:06:07.214 07:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:07.214 07:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:07.214 07:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:07.214 07:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:07.214 07:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:07.214 07:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:07.214 07:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:06:07.214 07:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:07.214 07:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:06:07.214 07:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:07.214 07:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:07.214 07:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:07.214 07:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:07.214 07:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:07.214 07:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:07.214 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:07.214 07:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:07.214 07:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:07.214 07:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:07.214 07:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:06:07.214 07:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:07.214 07:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:06:07.214 07:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:07.214 07:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:07.214 07:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:07.214 07:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:07.214 07:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:07.214 07:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:07.215 07:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:07.215 07:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:07.215 07:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:07.215 07:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:07.215 07:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:06:07.215 07:02:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:13.785 07:02:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:13.785 07:02:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:06:13.785 07:02:17 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:13.785 07:02:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:13.785 07:02:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:13.785 07:02:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:13.785 07:02:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:13.785 07:02:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:06:13.785 07:02:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:13.785 07:02:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:06:13.786 07:02:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:06:13.786 07:02:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:06:13.786 07:02:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:06:13.786 07:02:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:06:13.786 07:02:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:06:13.786 07:02:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:13.786 07:02:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:13.786 07:02:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:13.786 07:02:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:13.786 07:02:17 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:13.786 07:02:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:13.786 07:02:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:13.786 07:02:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:13.786 07:02:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:13.786 07:02:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:13.786 07:02:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:13.786 07:02:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:13.786 07:02:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:13.786 07:02:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:13.786 07:02:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:13.786 07:02:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:13.786 07:02:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:13.786 07:02:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:13.786 07:02:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:13.786 07:02:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:06:13.786 Found 0000:86:00.0 (0x8086 - 0x159b) 00:06:13.786 07:02:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:13.786 07:02:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:13.786 07:02:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:13.786 07:02:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:13.786 07:02:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:13.786 07:02:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:13.786 07:02:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:06:13.786 Found 0000:86:00.1 (0x8086 - 0x159b) 00:06:13.786 07:02:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:13.786 07:02:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:13.786 07:02:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:13.786 07:02:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:13.786 07:02:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:13.786 07:02:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:13.786 07:02:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:13.786 07:02:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:13.786 07:02:17 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:13.786 07:02:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:13.786 07:02:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:13.786 07:02:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:13.786 07:02:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:13.786 07:02:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:13.786 07:02:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:13.786 07:02:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:06:13.786 Found net devices under 0000:86:00.0: cvl_0_0 00:06:13.786 07:02:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:13.786 07:02:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:13.786 07:02:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:13.786 07:02:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:13.786 07:02:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:13.786 07:02:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:13.786 07:02:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:13.786 07:02:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:13.786 07:02:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:06:13.786 Found net devices under 0000:86:00.1: cvl_0_1 00:06:13.786 07:02:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:13.786 07:02:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:13.786 07:02:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:06:13.786 07:02:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:13.786 07:02:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:13.786 07:02:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:13.786 07:02:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:13.786 07:02:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:13.786 07:02:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:13.786 07:02:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:13.786 07:02:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:13.786 07:02:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:13.786 07:02:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:13.786 07:02:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:13.786 07:02:17 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:13.786 07:02:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:13.786 07:02:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:13.786 07:02:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:13.786 07:02:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:13.786 07:02:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:13.786 07:02:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:13.786 07:02:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:13.786 07:02:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:13.786 07:02:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:13.786 07:02:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:13.786 07:02:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:13.786 07:02:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:13.786 07:02:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 
00:06:13.786 07:02:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:13.786 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:13.786 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.472 ms 00:06:13.786 00:06:13.786 --- 10.0.0.2 ping statistics --- 00:06:13.786 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:13.786 rtt min/avg/max/mdev = 0.472/0.472/0.472/0.000 ms 00:06:13.786 07:02:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:13.786 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:13.786 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.195 ms 00:06:13.786 00:06:13.786 --- 10.0.0.1 ping statistics --- 00:06:13.786 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:13.786 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:06:13.786 07:02:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:13.786 07:02:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:06:13.786 07:02:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:13.786 07:02:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:13.786 07:02:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:13.787 07:02:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:13.787 07:02:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:13.787 07:02:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:13.787 07:02:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 
00:06:13.787 07:02:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:06:13.787 07:02:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:06:13.787 07:02:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:06:13.787 07:02:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:13.787 07:02:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:13.787 07:02:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:13.787 07:02:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=1031994 00:06:13.787 07:02:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 1031994 00:06:13.787 07:02:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:06:13.787 07:02:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@833 -- # '[' -z 1031994 ']' 00:06:13.787 07:02:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:13.787 07:02:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:13.787 07:02:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:13.787 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:13.787 07:02:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:13.787 07:02:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:13.787 [2024-11-20 07:02:17.719645] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 00:06:13.787 [2024-11-20 07:02:17.719695] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:13.787 [2024-11-20 07:02:17.798271] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:13.787 [2024-11-20 07:02:17.843378] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:13.787 [2024-11-20 07:02:17.843413] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:13.787 [2024-11-20 07:02:17.843421] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:13.787 [2024-11-20 07:02:17.843427] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:13.787 [2024-11-20 07:02:17.843432] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:06:13.787 [2024-11-20 07:02:17.845010] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:13.787 [2024-11-20 07:02:17.845117] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:13.787 [2024-11-20 07:02:17.845224] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:13.787 [2024-11-20 07:02:17.845225] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:13.787 07:02:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:13.787 07:02:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@866 -- # return 0 00:06:13.787 07:02:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:13.787 07:02:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:13.787 07:02:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:13.787 07:02:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:13.787 07:02:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:13.787 07:02:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:13.787 07:02:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:13.787 [2024-11-20 07:02:17.979915] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:13.787 07:02:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:13.787 07:02:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:06:13.787 07:02:17 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:13.787 07:02:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:13.787 07:02:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:06:13.787 07:02:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:06:13.787 07:02:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:06:13.787 07:02:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:13.787 07:02:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:13.787 Malloc0 00:06:13.787 [2024-11-20 07:02:18.060715] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:13.787 07:02:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:13.787 07:02:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:06:13.787 07:02:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:13.787 07:02:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:13.787 07:02:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=1032046 00:06:13.787 07:02:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 1032046 /var/tmp/bdevperf.sock 00:06:13.787 07:02:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@833 -- # '[' -z 1032046 ']' 00:06:13.787 07:02:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:06:13.787 07:02:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:06:13.787 07:02:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:06:13.787 07:02:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:13.787 07:02:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:06:13.787 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:06:13.787 07:02:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:06:13.787 07:02:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:13.787 07:02:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:06:13.787 07:02:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:13.787 07:02:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:06:13.787 07:02:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:06:13.787 { 00:06:13.787 "params": { 00:06:13.787 "name": "Nvme$subsystem", 00:06:13.787 "trtype": "$TEST_TRANSPORT", 00:06:13.787 "traddr": "$NVMF_FIRST_TARGET_IP", 00:06:13.787 "adrfam": "ipv4", 00:06:13.787 "trsvcid": "$NVMF_PORT", 00:06:13.787 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:06:13.787 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:06:13.787 "hdgst": ${hdgst:-false}, 
00:06:13.787 "ddgst": ${ddgst:-false} 00:06:13.787 }, 00:06:13.787 "method": "bdev_nvme_attach_controller" 00:06:13.787 } 00:06:13.787 EOF 00:06:13.787 )") 00:06:13.787 07:02:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:06:13.787 07:02:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:06:13.787 07:02:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:06:13.787 07:02:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:06:13.787 "params": { 00:06:13.787 "name": "Nvme0", 00:06:13.787 "trtype": "tcp", 00:06:13.787 "traddr": "10.0.0.2", 00:06:13.787 "adrfam": "ipv4", 00:06:13.787 "trsvcid": "4420", 00:06:13.787 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:13.787 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:06:13.787 "hdgst": false, 00:06:13.787 "ddgst": false 00:06:13.787 }, 00:06:13.787 "method": "bdev_nvme_attach_controller" 00:06:13.787 }' 00:06:13.787 [2024-11-20 07:02:18.158473] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 00:06:13.787 [2024-11-20 07:02:18.158519] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1032046 ] 00:06:13.787 [2024-11-20 07:02:18.233595] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.787 [2024-11-20 07:02:18.275071] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.047 Running I/O for 10 seconds... 
00:06:14.047 07:02:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:14.047 07:02:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@866 -- # return 0 00:06:14.047 07:02:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:06:14.047 07:02:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:14.047 07:02:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:14.047 07:02:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:14.047 07:02:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:14.047 07:02:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:06:14.047 07:02:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:06:14.047 07:02:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:06:14.047 07:02:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:06:14.047 07:02:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:06:14.047 07:02:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:06:14.047 07:02:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:06:14.047 07:02:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock 
bdev_get_iostat -b Nvme0n1 00:06:14.047 07:02:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:06:14.047 07:02:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:14.047 07:02:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:14.047 07:02:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:14.047 07:02:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=78 00:06:14.047 07:02:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 78 -ge 100 ']' 00:06:14.047 07:02:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:06:14.306 07:02:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:06:14.306 07:02:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:06:14.306 07:02:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:06:14.306 07:02:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:06:14.306 07:02:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:14.306 07:02:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:14.306 07:02:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:14.566 07:02:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=707 00:06:14.566 07:02:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@58 -- # '[' 707 -ge 100 ']' 00:06:14.566 07:02:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:06:14.566 07:02:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:06:14.566 07:02:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:06:14.566 07:02:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:06:14.566 07:02:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:14.566 07:02:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:14.566 [2024-11-20 07:02:18.875632] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a90e80 is same with the state(6) to be set
[previous line repeated 18 more times, timestamps 07:02:18.875686 through 07:02:18.875790]
00:06:14.566 [2024-11-20 07:02:18.875911] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:14.566 [2024-11-20 07:02:18.875951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[command/completion pair repeated for cid 1-63 (lba 98432 through 106368, step 128); every in-flight WRITE on qid:1 was aborted with SQ DELETION (00/08)]
00:06:14.568 [2024-11-20 07:02:18.876936] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:06:14.568 [2024-11-20 07:02:18.877028] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:06:14.568 [2024-11-20 07:02:18.877041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:14.568 [2024-11-20 07:02:18.877048] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:06:14.568 [2024-11-20 07:02:18.877055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:14.568 [2024-11-20 07:02:18.877062] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:06:14.568 [2024-11-20 07:02:18.877069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:14.568
[2024-11-20 07:02:18.877076] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:06:14.568 [2024-11-20 07:02:18.877083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:14.568 [2024-11-20 07:02:18.877089] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xca6500 is same with the state(6) to be set 00:06:14.568 [2024-11-20 07:02:18.877970] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:06:14.568 task offset: 98304 on job bdev=Nvme0n1 fails 00:06:14.568 00:06:14.568 Latency(us) 00:06:14.568 [2024-11-20T06:02:19.124Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:14.568 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:06:14.568 Job: Nvme0n1 ended in about 0.40 seconds with error 00:06:14.568 Verification LBA range: start 0x0 length 0x400 00:06:14.568 Nvme0n1 : 0.40 1923.66 120.23 160.30 0.00 29855.17 2236.77 27924.03 00:06:14.568 [2024-11-20T06:02:19.124Z] =================================================================================================================== 00:06:14.568 [2024-11-20T06:02:19.124Z] Total : 1923.66 120.23 160.30 0.00 29855.17 2236.77 27924.03 00:06:14.568 07:02:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:14.568 [2024-11-20 07:02:18.880360] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:14.568 [2024-11-20 07:02:18.880384] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6500 (9): Bad file descriptor 00:06:14.568 07:02:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:06:14.568 
07:02:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:14.568 07:02:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:14.568 [2024-11-20 07:02:18.883238] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:06:14.568 [2024-11-20 07:02:18.883315] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:06:14.568 [2024-11-20 07:02:18.883338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:14.568 [2024-11-20 07:02:18.883352] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:06:14.568 [2024-11-20 07:02:18.883360] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:06:14.568 [2024-11-20 07:02:18.883367] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:06:14.568 [2024-11-20 07:02:18.883373] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xca6500 00:06:14.568 [2024-11-20 07:02:18.883395] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca6500 (9): Bad file descriptor 00:06:14.568 [2024-11-20 07:02:18.883406] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:06:14.568 [2024-11-20 07:02:18.883414] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:06:14.568 [2024-11-20 07:02:18.883422] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in 
failed state. 00:06:14.568 [2024-11-20 07:02:18.883429] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:06:14.568 07:02:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:14.568 07:02:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:06:15.518 07:02:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 1032046 00:06:15.518 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (1032046) - No such process 00:06:15.518 07:02:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:06:15.518 07:02:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:06:15.518 07:02:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:06:15.518 07:02:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:06:15.518 07:02:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:06:15.518 07:02:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:06:15.518 07:02:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:06:15.518 07:02:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:06:15.518 { 00:06:15.518 "params": { 00:06:15.518 "name": "Nvme$subsystem", 00:06:15.518 "trtype": "$TEST_TRANSPORT", 00:06:15.518 "traddr": "$NVMF_FIRST_TARGET_IP", 
00:06:15.518 "adrfam": "ipv4", 00:06:15.518 "trsvcid": "$NVMF_PORT", 00:06:15.518 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:06:15.518 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:06:15.518 "hdgst": ${hdgst:-false}, 00:06:15.518 "ddgst": ${ddgst:-false} 00:06:15.518 }, 00:06:15.518 "method": "bdev_nvme_attach_controller" 00:06:15.518 } 00:06:15.518 EOF 00:06:15.518 )") 00:06:15.518 07:02:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:06:15.518 07:02:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:06:15.518 07:02:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:06:15.518 07:02:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:06:15.518 "params": { 00:06:15.518 "name": "Nvme0", 00:06:15.518 "trtype": "tcp", 00:06:15.518 "traddr": "10.0.0.2", 00:06:15.518 "adrfam": "ipv4", 00:06:15.518 "trsvcid": "4420", 00:06:15.518 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:15.518 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:06:15.518 "hdgst": false, 00:06:15.518 "ddgst": false 00:06:15.518 }, 00:06:15.518 "method": "bdev_nvme_attach_controller" 00:06:15.518 }' 00:06:15.518 [2024-11-20 07:02:19.947657] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 00:06:15.518 [2024-11-20 07:02:19.947707] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1032505 ] 00:06:15.518 [2024-11-20 07:02:20.024166] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.777 [2024-11-20 07:02:20.073165] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.036 Running I/O for 1 seconds... 
00:06:16.971 1984.00 IOPS, 124.00 MiB/s 00:06:16.971 Latency(us) 00:06:16.971 [2024-11-20T06:02:21.527Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:16.971 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:06:16.971 Verification LBA range: start 0x0 length 0x400 00:06:16.971 Nvme0n1 : 1.02 2016.11 126.01 0.00 0.00 31241.18 5128.90 27696.08 00:06:16.971 [2024-11-20T06:02:21.527Z] =================================================================================================================== 00:06:16.971 [2024-11-20T06:02:21.527Z] Total : 2016.11 126.01 0.00 0.00 31241.18 5128.90 27696.08 00:06:17.229 07:02:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:06:17.229 07:02:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:06:17.229 07:02:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:06:17.229 07:02:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:06:17.229 07:02:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:06:17.229 07:02:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:17.229 07:02:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:06:17.229 07:02:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:17.229 07:02:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:06:17.229 07:02:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:17.229 07:02:21 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:17.229 rmmod nvme_tcp 00:06:17.229 rmmod nvme_fabrics 00:06:17.229 rmmod nvme_keyring 00:06:17.229 07:02:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:17.229 07:02:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:06:17.229 07:02:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:06:17.229 07:02:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 1031994 ']' 00:06:17.229 07:02:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 1031994 00:06:17.229 07:02:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@952 -- # '[' -z 1031994 ']' 00:06:17.229 07:02:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # kill -0 1031994 00:06:17.229 07:02:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@957 -- # uname 00:06:17.230 07:02:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:17.230 07:02:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1031994 00:06:17.230 07:02:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:06:17.230 07:02:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:06:17.230 07:02:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1031994' 00:06:17.230 killing process with pid 1031994 00:06:17.230 07:02:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@971 -- # kill 1031994 00:06:17.230 07:02:21 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@976 -- # wait 1031994 00:06:17.488 [2024-11-20 07:02:21.894143] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:06:17.488 07:02:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:17.488 07:02:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:17.488 07:02:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:17.488 07:02:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:06:17.488 07:02:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:06:17.488 07:02:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:17.488 07:02:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:06:17.488 07:02:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:17.488 07:02:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:17.488 07:02:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:17.488 07:02:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:17.488 07:02:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:20.023 07:02:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:20.023 07:02:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:06:20.023 00:06:20.023 real 0m12.513s 00:06:20.023 user 0m20.379s 
00:06:20.023 sys 0m5.509s 00:06:20.023 07:02:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:20.023 07:02:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:20.023 ************************************ 00:06:20.023 END TEST nvmf_host_management 00:06:20.023 ************************************ 00:06:20.023 07:02:24 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:06:20.023 07:02:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:06:20.023 07:02:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:20.023 07:02:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:20.023 ************************************ 00:06:20.023 START TEST nvmf_lvol 00:06:20.023 ************************************ 00:06:20.023 07:02:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:06:20.023 * Looking for test storage... 
00:06:20.023 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:20.023 07:02:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:20.023 07:02:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1691 -- # lcov --version 00:06:20.023 07:02:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:20.023 07:02:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:20.023 07:02:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:20.023 07:02:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:20.023 07:02:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:20.023 07:02:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:06:20.023 07:02:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:06:20.023 07:02:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:06:20.023 07:02:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:06:20.023 07:02:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:06:20.023 07:02:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:06:20.023 07:02:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:06:20.023 07:02:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:20.023 07:02:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:06:20.023 07:02:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:06:20.023 07:02:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:20.023 07:02:24 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:20.023 07:02:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:06:20.023 07:02:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:06:20.023 07:02:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:20.023 07:02:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:06:20.023 07:02:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:06:20.023 07:02:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:06:20.023 07:02:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:06:20.023 07:02:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:20.023 07:02:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:06:20.023 07:02:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:06:20.023 07:02:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:20.023 07:02:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:20.023 07:02:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:06:20.023 07:02:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:20.023 07:02:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:20.023 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.023 --rc genhtml_branch_coverage=1 00:06:20.023 --rc genhtml_function_coverage=1 00:06:20.023 --rc genhtml_legend=1 00:06:20.023 --rc geninfo_all_blocks=1 00:06:20.023 --rc geninfo_unexecuted_blocks=1 
00:06:20.023 00:06:20.023 ' 00:06:20.023 07:02:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:20.023 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.023 --rc genhtml_branch_coverage=1 00:06:20.023 --rc genhtml_function_coverage=1 00:06:20.023 --rc genhtml_legend=1 00:06:20.023 --rc geninfo_all_blocks=1 00:06:20.023 --rc geninfo_unexecuted_blocks=1 00:06:20.023 00:06:20.023 ' 00:06:20.023 07:02:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:20.023 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.023 --rc genhtml_branch_coverage=1 00:06:20.023 --rc genhtml_function_coverage=1 00:06:20.023 --rc genhtml_legend=1 00:06:20.023 --rc geninfo_all_blocks=1 00:06:20.023 --rc geninfo_unexecuted_blocks=1 00:06:20.023 00:06:20.023 ' 00:06:20.023 07:02:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:20.023 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.023 --rc genhtml_branch_coverage=1 00:06:20.023 --rc genhtml_function_coverage=1 00:06:20.023 --rc genhtml_legend=1 00:06:20.023 --rc geninfo_all_blocks=1 00:06:20.023 --rc geninfo_unexecuted_blocks=1 00:06:20.023 00:06:20.023 ' 00:06:20.023 07:02:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:20.023 07:02:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:06:20.023 07:02:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:20.023 07:02:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:20.023 07:02:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:20.023 07:02:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:20.023 07:02:24 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:20.023 07:02:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:20.023 07:02:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:20.023 07:02:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:20.023 07:02:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:20.023 07:02:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:20.023 07:02:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:06:20.023 07:02:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:06:20.023 07:02:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:20.023 07:02:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:20.023 07:02:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:20.023 07:02:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:20.023 07:02:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:20.023 07:02:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:06:20.023 07:02:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:20.023 07:02:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:20.023 07:02:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:20.023 07:02:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:20.024 07:02:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:20.024 07:02:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:20.024 07:02:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:06:20.024 07:02:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:20.024 07:02:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:06:20.024 07:02:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:20.024 07:02:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:20.024 07:02:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:20.024 07:02:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:20.024 07:02:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:20.024 07:02:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:20.024 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:20.024 07:02:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:20.024 07:02:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:20.024 07:02:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:20.024 07:02:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:20.024 07:02:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:20.024 07:02:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:06:20.024 07:02:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:06:20.024 07:02:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:20.024 07:02:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:06:20.024 07:02:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:20.024 07:02:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:20.024 07:02:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:20.024 07:02:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:20.024 07:02:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:20.024 07:02:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:20.024 07:02:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:20.024 07:02:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:20.024 07:02:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:20.024 07:02:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:20.024 07:02:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:06:20.024 07:02:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:26.732 07:02:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:26.732 07:02:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:06:26.732 07:02:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:26.732 07:02:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:26.732 07:02:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:26.732 07:02:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:26.732 07:02:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:26.732 07:02:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:06:26.732 07:02:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:26.732 07:02:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:06:26.732 07:02:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:06:26.732 07:02:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:06:26.732 07:02:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:06:26.733 07:02:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@322 -- # mlx=() 00:06:26.733 07:02:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:06:26.733 07:02:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:26.733 07:02:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:26.733 07:02:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:26.733 07:02:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:26.733 07:02:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:26.733 07:02:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:26.733 07:02:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:26.733 07:02:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:26.733 07:02:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:26.733 07:02:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:26.733 07:02:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:26.733 07:02:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:26.733 07:02:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:26.733 07:02:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:26.733 07:02:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 
00:06:26.733 07:02:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:26.733 07:02:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:26.733 07:02:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:26.733 07:02:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:26.733 07:02:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:06:26.733 Found 0000:86:00.0 (0x8086 - 0x159b) 00:06:26.733 07:02:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:26.733 07:02:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:26.733 07:02:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:26.733 07:02:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:26.733 07:02:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:26.733 07:02:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:26.733 07:02:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:06:26.733 Found 0000:86:00.1 (0x8086 - 0x159b) 00:06:26.733 07:02:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:26.733 07:02:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:26.733 07:02:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:26.733 07:02:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:26.733 07:02:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:26.733 
07:02:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:26.733 07:02:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:26.733 07:02:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:26.733 07:02:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:26.733 07:02:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:26.733 07:02:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:26.733 07:02:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:26.733 07:02:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:26.733 07:02:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:26.733 07:02:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:26.733 07:02:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:06:26.733 Found net devices under 0000:86:00.0: cvl_0_0 00:06:26.733 07:02:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:26.733 07:02:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:26.733 07:02:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:26.733 07:02:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:26.733 07:02:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:26.733 07:02:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:26.733 07:02:30 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:26.733 07:02:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:26.733 07:02:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:06:26.733 Found net devices under 0000:86:00.1: cvl_0_1 00:06:26.733 07:02:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:26.733 07:02:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:26.733 07:02:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:06:26.733 07:02:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:26.733 07:02:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:26.733 07:02:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:26.733 07:02:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:26.733 07:02:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:26.733 07:02:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:26.733 07:02:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:26.733 07:02:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:26.733 07:02:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:26.733 07:02:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:26.733 07:02:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:26.733 07:02:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 
-- # NVMF_SECOND_INITIATOR_IP= 00:06:26.733 07:02:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:26.733 07:02:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:26.733 07:02:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:26.733 07:02:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:26.733 07:02:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:26.733 07:02:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:26.733 07:02:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:26.733 07:02:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:26.733 07:02:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:26.733 07:02:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:26.733 07:02:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:26.733 07:02:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:26.733 07:02:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:26.733 07:02:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:26.733 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:06:26.733 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.418 ms
00:06:26.733
00:06:26.733 --- 10.0.0.2 ping statistics ---
00:06:26.733 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:06:26.733 rtt min/avg/max/mdev = 0.418/0.418/0.418/0.000 ms
00:06:26.733 07:02:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:06:26.733 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:06:26.733 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.228 ms
00:06:26.733
00:06:26.733 --- 10.0.0.1 ping statistics ---
00:06:26.733 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:06:26.733 rtt min/avg/max/mdev = 0.228/0.228/0.228/0.000 ms
00:06:26.733 07:02:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:06:26.733 07:02:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # return 0
00:06:26.733 07:02:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:06:26.733 07:02:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:06:26.733 07:02:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:06:26.733 07:02:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:06:26.733 07:02:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:06:26.733 07:02:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:06:26.733 07:02:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:06:26.733 07:02:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7
00:06:26.733 07:02:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:06:26.733 07:02:30 nvmf_tcp.nvmf_target_core.nvmf_lvol --
common/autotest_common.sh@724 -- # xtrace_disable 00:06:26.733 07:02:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:26.733 07:02:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=1036294 00:06:26.733 07:02:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 1036294 00:06:26.734 07:02:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@833 -- # '[' -z 1036294 ']' 00:06:26.734 07:02:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:06:26.734 07:02:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:26.734 07:02:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:26.734 07:02:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:26.734 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:26.734 07:02:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:26.734 07:02:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:26.734 [2024-11-20 07:02:30.340326] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 
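The harness above launches `nvmf_tgt` and then blocks in `waitforlisten` until the RPC socket `/var/tmp/spdk.sock` is ready (with `max_retries=100`). A minimal standalone sketch of that polling pattern, assuming only a socket path and a retry cap; the helper name and sleep interval here are hypothetical, not the harness's actual implementation:

```shell
# Hypothetical sketch of the waitforlisten pattern: poll until a UNIX
# socket exists, giving up after max_retries attempts.
wait_for_sock() {
  local sock=$1 max_retries=${2:-100}
  local i
  for ((i = 0; i < max_retries; i++)); do
    # -S is true once the path exists and is a socket
    [ -S "$sock" ] && return 0
    sleep 0.1
  done
  return 1
}

# Demo against a path that never appears: gives up after ~0.3s
wait_for_sock /tmp/no-such-rpc.sock 3 || echo "timed out"
```

The real helper additionally checks that the target process is still alive between retries, so a crashed target fails fast instead of burning the full retry budget.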
00:06:26.734 [2024-11-20 07:02:30.340375] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:26.734 [2024-11-20 07:02:30.419373] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:26.734 [2024-11-20 07:02:30.463745] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:26.734 [2024-11-20 07:02:30.463779] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:26.734 [2024-11-20 07:02:30.463790] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:26.734 [2024-11-20 07:02:30.463796] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:26.734 [2024-11-20 07:02:30.463802] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
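The target above runs with `-m 0x7`, a hex core mask. A quick sketch (not part of the test scripts) of how such a mask decodes into a core list; 0x7 = 0b111 selects cores 0, 1, and 2, consistent with the three reactor-start notices in the log:

```shell
# Decode a core mask into the list of selected core indices by testing
# each bit of the mask.
mask=0x7
cores=()
for ((i = 0; i < 64; i++)); do
  if (((mask >> i) & 1)); then
    cores+=("$i")
  fi
done
echo "cores: ${cores[*]}"   # cores: 0 1 2
```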
00:06:26.734 [2024-11-20 07:02:30.465238] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:26.734 [2024-11-20 07:02:30.465345] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.734 [2024-11-20 07:02:30.465346] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:26.734 07:02:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:26.734 07:02:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@866 -- # return 0 00:06:26.734 07:02:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:26.734 07:02:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:26.734 07:02:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:26.734 07:02:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:26.734 07:02:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:06:26.734 [2024-11-20 07:02:30.764889] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:26.734 07:02:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:06:26.734 07:02:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:06:26.734 07:02:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:06:26.734 07:02:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:06:26.734 07:02:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:06:26.992 07:02:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:06:27.251 07:02:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=8d416de3-313d-406a-aba1-c05733c33207 00:06:27.251 07:02:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 8d416de3-313d-406a-aba1-c05733c33207 lvol 20 00:06:27.509 07:02:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=018aef32-c91b-4baa-9219-22b70234e24a 00:06:27.509 07:02:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:27.509 07:02:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 018aef32-c91b-4baa-9219-22b70234e24a 00:06:27.767 07:02:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:06:28.025 [2024-11-20 07:02:32.430218] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:28.025 07:02:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:28.283 07:02:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=1036780 00:06:28.283 07:02:32 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:06:28.283 07:02:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:06:29.216 07:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 018aef32-c91b-4baa-9219-22b70234e24a MY_SNAPSHOT 00:06:29.474 07:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=3c554ca5-b0f9-4d16-bcc1-064ae5e482cb 00:06:29.474 07:02:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 018aef32-c91b-4baa-9219-22b70234e24a 30 00:06:29.732 07:02:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 3c554ca5-b0f9-4d16-bcc1-064ae5e482cb MY_CLONE 00:06:29.990 07:02:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=4f1a3ada-063d-4c41-8ba1-e1f4cb216913 00:06:29.990 07:02:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 4f1a3ada-063d-4c41-8ba1-e1f4cb216913 00:06:30.557 07:02:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 1036780 00:06:38.671 Initializing NVMe Controllers 00:06:38.671 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:06:38.671 Controller IO queue size 128, less than required. 00:06:38.671 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:06:38.671 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3
00:06:38.671 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4
00:06:38.671 Initialization complete. Launching workers.
00:06:38.671 ========================================================
00:06:38.671 Latency(us)
00:06:38.671 Device Information : IOPS MiB/s Average min max
00:06:38.671 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12124.80 47.36 10561.23 1283.22 59650.55
00:06:38.671 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 11970.80 46.76 10695.76 3565.88 49979.72
00:06:38.671 ========================================================
00:06:38.671 Total : 24095.60 94.12 10628.06 1283.22 59650.55
00:06:38.671
00:06:38.671 07:02:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:06:38.929 07:02:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 018aef32-c91b-4baa-9219-22b70234e24a
00:06:39.187 07:02:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 8d416de3-313d-406a-aba1-c05733c33207
00:06:39.446 07:02:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f
00:06:39.446 07:02:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT
00:06:39.446 07:02:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini
00:06:39.446 07:02:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup
00:06:39.446 07:02:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync
00:06:39.446 07:02:43 nvmf_tcp.nvmf_target_core.nvmf_lvol
-- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:39.446 07:02:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:06:39.446 07:02:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:39.446 07:02:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:39.446 rmmod nvme_tcp 00:06:39.446 rmmod nvme_fabrics 00:06:39.446 rmmod nvme_keyring 00:06:39.446 07:02:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:39.446 07:02:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:06:39.446 07:02:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:06:39.446 07:02:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 1036294 ']' 00:06:39.446 07:02:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 1036294 00:06:39.446 07:02:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@952 -- # '[' -z 1036294 ']' 00:06:39.446 07:02:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # kill -0 1036294 00:06:39.446 07:02:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@957 -- # uname 00:06:39.446 07:02:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:39.447 07:02:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1036294 00:06:39.447 07:02:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:39.447 07:02:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:39.447 07:02:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1036294' 00:06:39.447 killing process with pid 1036294 00:06:39.447 07:02:43 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@971 -- # kill 1036294
00:06:39.447 07:02:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@976 -- # wait 1036294
00:06:39.705 07:02:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:06:39.705 07:02:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:06:39.705 07:02:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:06:39.705 07:02:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr
00:06:39.705 07:02:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:06:39.705 07:02:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save
00:06:39.705 07:02:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore
00:06:39.705 07:02:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:06:39.705 07:02:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns
00:06:39.705 07:02:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:06:39.705 07:02:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:06:39.705 07:02:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:06:41.609 07:02:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:06:41.609
00:06:41.609 real 0m22.078s
00:06:41.609 user 1m3.298s
00:06:41.609 sys 0m7.812s
00:06:41.610 07:02:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1128 -- # xtrace_disable
00:06:41.610 07:02:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x
00:06:41.610 ************************************
00:06:41.610 END TEST
nvmf_lvol 00:06:41.610 ************************************ 00:06:41.869 07:02:46 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:06:41.869 07:02:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:06:41.869 07:02:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:41.869 07:02:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:41.869 ************************************ 00:06:41.869 START TEST nvmf_lvs_grow 00:06:41.869 ************************************ 00:06:41.869 07:02:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:06:41.869 * Looking for test storage... 00:06:41.869 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:41.869 07:02:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:41.869 07:02:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lcov --version 00:06:41.869 07:02:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:41.869 07:02:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:41.869 07:02:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:41.869 07:02:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:41.869 07:02:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:41.869 07:02:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:06:41.869 07:02:46 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:06:41.869 07:02:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:06:41.869 07:02:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:06:41.869 07:02:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:06:41.869 07:02:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:06:41.869 07:02:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:06:41.869 07:02:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:41.869 07:02:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:06:41.869 07:02:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:06:41.869 07:02:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:41.869 07:02:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:41.869 07:02:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:06:41.869 07:02:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:06:41.869 07:02:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:41.869 07:02:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:06:41.869 07:02:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:06:41.869 07:02:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:06:41.869 07:02:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:06:41.869 07:02:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:41.869 07:02:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:06:41.869 07:02:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:06:41.869 07:02:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:41.869 07:02:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:41.869 07:02:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:06:41.869 07:02:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:41.869 07:02:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:41.869 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:41.869 --rc genhtml_branch_coverage=1 00:06:41.869 --rc genhtml_function_coverage=1 00:06:41.869 --rc genhtml_legend=1 00:06:41.869 --rc geninfo_all_blocks=1 00:06:41.869 --rc geninfo_unexecuted_blocks=1 00:06:41.869 00:06:41.869 ' 
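The trace above is the harness's `cmp_versions` deciding whether the installed lcov predates 2.x (`lt 1.15 2`), comparing the dotted versions component by component. The same "less than" check can be sketched with GNU `sort -V`, a different mechanism than the script's own loop:

```shell
# Sketch of a dotted-version "less than" test via GNU sort -V: the
# smaller version sorts first, so v1 < v2 iff v1 != v2 and v1 is the
# first line of the version-sorted pair.
version_lt() {
  [ "$1" != "$2" ] &&
    [ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]
}

version_lt 1.15 2 && echo "1.15 < 2"
```

`sort -V` handles multi-digit components correctly (1.9 < 1.15), which a plain lexical comparison would get wrong.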
00:06:41.869 07:02:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:41.869 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:41.869 --rc genhtml_branch_coverage=1 00:06:41.869 --rc genhtml_function_coverage=1 00:06:41.869 --rc genhtml_legend=1 00:06:41.869 --rc geninfo_all_blocks=1 00:06:41.869 --rc geninfo_unexecuted_blocks=1 00:06:41.869 00:06:41.869 ' 00:06:41.869 07:02:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:41.869 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:41.869 --rc genhtml_branch_coverage=1 00:06:41.869 --rc genhtml_function_coverage=1 00:06:41.869 --rc genhtml_legend=1 00:06:41.869 --rc geninfo_all_blocks=1 00:06:41.869 --rc geninfo_unexecuted_blocks=1 00:06:41.869 00:06:41.869 ' 00:06:41.869 07:02:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:41.869 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:41.869 --rc genhtml_branch_coverage=1 00:06:41.869 --rc genhtml_function_coverage=1 00:06:41.869 --rc genhtml_legend=1 00:06:41.869 --rc geninfo_all_blocks=1 00:06:41.869 --rc geninfo_unexecuted_blocks=1 00:06:41.869 00:06:41.869 ' 00:06:41.869 07:02:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:41.869 07:02:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:06:41.869 07:02:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:41.869 07:02:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:41.869 07:02:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:41.869 07:02:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:41.869 07:02:46 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:41.869 07:02:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:41.869 07:02:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:41.869 07:02:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:41.869 07:02:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:41.869 07:02:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:41.870 07:02:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:06:41.870 07:02:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:06:41.870 07:02:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:41.870 07:02:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:41.870 07:02:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:41.870 07:02:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:41.870 07:02:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:41.870 07:02:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:06:41.870 07:02:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:41.870 07:02:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:41.870 
07:02:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:41.870 07:02:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:41.870 07:02:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:41.870 07:02:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:41.870 07:02:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:06:41.870 07:02:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:41.870 07:02:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:06:41.870 07:02:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:41.870 07:02:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:41.870 07:02:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:41.870 07:02:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:41.870 07:02:46 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:41.870 07:02:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:41.870 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:41.870 07:02:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:41.870 07:02:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:41.870 07:02:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:41.870 07:02:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:41.870 07:02:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:06:41.870 07:02:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:06:41.870 07:02:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:41.870 07:02:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:41.870 07:02:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:41.870 07:02:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:41.870 07:02:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:41.870 07:02:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:42.129 07:02:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:42.129 07:02:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:42.129 
07:02:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:42.129 07:02:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:42.129 07:02:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:06:42.129 07:02:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:06:48.698 07:02:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:48.698 07:02:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:06:48.698 07:02:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:48.698 07:02:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:48.698 07:02:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:48.698 07:02:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:48.698 07:02:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:48.698 07:02:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:06:48.698 07:02:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:48.698 07:02:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:06:48.698 07:02:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:06:48.698 07:02:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:06:48.698 07:02:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:06:48.698 07:02:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:06:48.698 07:02:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local 
-ga mlx 00:06:48.698 07:02:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:48.698 07:02:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:48.698 07:02:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:48.698 07:02:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:48.698 07:02:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:48.698 07:02:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:48.698 07:02:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:48.698 07:02:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:48.698 07:02:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:48.698 07:02:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:48.698 07:02:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:48.698 07:02:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:48.698 07:02:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:48.698 07:02:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:48.698 07:02:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:48.698 07:02:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:48.698 07:02:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:48.698 07:02:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:48.698 07:02:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:48.698 07:02:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:06:48.698 Found 0000:86:00.0 (0x8086 - 0x159b) 00:06:48.698 07:02:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:48.698 07:02:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:48.698 07:02:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:48.698 07:02:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:48.698 07:02:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:48.698 07:02:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:48.698 07:02:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:06:48.698 Found 0000:86:00.1 (0x8086 - 0x159b) 00:06:48.698 07:02:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:48.698 07:02:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:48.698 07:02:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:48.698 07:02:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:48.698 07:02:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:48.698 
07:02:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:48.698 07:02:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:48.698 07:02:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:48.698 07:02:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:48.698 07:02:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:48.698 07:02:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:48.698 07:02:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:48.698 07:02:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:48.698 07:02:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:48.698 07:02:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:48.698 07:02:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:06:48.698 Found net devices under 0000:86:00.0: cvl_0_0 00:06:48.698 07:02:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:48.698 07:02:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:48.698 07:02:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:48.698 07:02:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:48.698 07:02:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:48.698 07:02:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@418 -- # [[ up == up ]] 00:06:48.698 07:02:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:48.698 07:02:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:48.698 07:02:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:06:48.698 Found net devices under 0000:86:00.1: cvl_0_1 00:06:48.699 07:02:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:48.699 07:02:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:48.699 07:02:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:06:48.699 07:02:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:48.699 07:02:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:48.699 07:02:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:48.699 07:02:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:48.699 07:02:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:48.699 07:02:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:48.699 07:02:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:48.699 07:02:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:48.699 07:02:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:48.699 07:02:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:48.699 07:02:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:48.699 07:02:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:48.699 07:02:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:48.699 07:02:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:48.699 07:02:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:48.699 07:02:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:48.699 07:02:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:48.699 07:02:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:48.699 07:02:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:48.699 07:02:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:48.699 07:02:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:48.699 07:02:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:48.699 07:02:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:48.699 07:02:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:48.699 07:02:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:48.699 07:02:52 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:48.699 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:48.699 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.480 ms 00:06:48.699 00:06:48.699 --- 10.0.0.2 ping statistics --- 00:06:48.699 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:48.699 rtt min/avg/max/mdev = 0.480/0.480/0.480/0.000 ms 00:06:48.699 07:02:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:48.699 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:48.699 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.188 ms 00:06:48.699 00:06:48.699 --- 10.0.0.1 ping statistics --- 00:06:48.699 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:48.699 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms 00:06:48.699 07:02:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:48.699 07:02:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:06:48.699 07:02:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:48.699 07:02:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:48.699 07:02:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:48.699 07:02:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:48.699 07:02:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:48.699 07:02:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:48.699 07:02:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:48.699 07:02:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # 
nvmfappstart -m 0x1 00:06:48.699 07:02:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:48.699 07:02:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:48.699 07:02:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:06:48.699 07:02:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=1042169 00:06:48.699 07:02:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:06:48.699 07:02:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 1042169 00:06:48.699 07:02:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # '[' -z 1042169 ']' 00:06:48.699 07:02:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:48.699 07:02:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:48.699 07:02:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:48.699 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:48.699 07:02:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:48.699 07:02:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:06:48.699 [2024-11-20 07:02:52.456776] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 
00:06:48.699 [2024-11-20 07:02:52.456826] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:48.699 [2024-11-20 07:02:52.535587] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.699 [2024-11-20 07:02:52.579555] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:48.699 [2024-11-20 07:02:52.579592] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:48.699 [2024-11-20 07:02:52.579599] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:48.699 [2024-11-20 07:02:52.579605] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:48.699 [2024-11-20 07:02:52.579610] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:06:48.699 [2024-11-20 07:02:52.580126] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.699 07:02:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:48.699 07:02:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@866 -- # return 0 00:06:48.699 07:02:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:48.699 07:02:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:48.699 07:02:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:06:48.699 07:02:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:48.699 07:02:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:06:48.699 [2024-11-20 07:02:52.882068] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:48.699 07:02:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:06:48.699 07:02:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:48.699 07:02:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:48.699 07:02:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:06:48.699 ************************************ 00:06:48.699 START TEST lvs_grow_clean 00:06:48.699 ************************************ 00:06:48.699 07:02:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1127 -- # lvs_grow 00:06:48.699 07:02:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local 
aio_bdev lvs lvol 00:06:48.699 07:02:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:06:48.699 07:02:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:06:48.699 07:02:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:06:48.699 07:02:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:06:48.699 07:02:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:06:48.699 07:02:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:06:48.699 07:02:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:06:48.699 07:02:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:06:48.699 07:02:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:06:48.699 07:02:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:06:48.958 07:02:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=0fe69992-fdfc-46e7-9357-7537ae66779b 00:06:48.958 07:02:53 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0fe69992-fdfc-46e7-9357-7537ae66779b 00:06:48.958 07:02:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:06:49.217 07:02:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:06:49.217 07:02:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:06:49.217 07:02:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 0fe69992-fdfc-46e7-9357-7537ae66779b lvol 150 00:06:49.217 07:02:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=e14d393b-a5d7-4552-b68f-f5624a436e9f 00:06:49.217 07:02:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:06:49.476 07:02:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:06:49.476 [2024-11-20 07:02:53.936880] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:06:49.476 [2024-11-20 07:02:53.936929] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:06:49.476 true 00:06:49.476 07:02:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0fe69992-fdfc-46e7-9357-7537ae66779b 00:06:49.476 07:02:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:06:49.734 07:02:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:06:49.734 07:02:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:49.993 07:02:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 e14d393b-a5d7-4552-b68f-f5624a436e9f 00:06:49.993 07:02:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:06:50.252 [2024-11-20 07:02:54.679109] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:50.252 07:02:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:50.511 07:02:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1042668 00:06:50.511 07:02:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:50.511 07:02:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:06:50.511 07:02:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1042668 /var/tmp/bdevperf.sock 00:06:50.511 07:02:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # '[' -z 1042668 ']' 00:06:50.511 07:02:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:06:50.511 07:02:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:50.511 07:02:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:06:50.511 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:06:50.511 07:02:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:50.511 07:02:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:06:50.511 [2024-11-20 07:02:54.921048] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 
00:06:50.511 [2024-11-20 07:02:54.921094] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1042668 ] 00:06:50.511 [2024-11-20 07:02:54.992872] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.511 [2024-11-20 07:02:55.033410] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:50.769 07:02:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:50.769 07:02:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@866 -- # return 0 00:06:50.769 07:02:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:06:51.028 Nvme0n1 00:06:51.028 07:02:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:06:51.028 [ 00:06:51.028 { 00:06:51.028 "name": "Nvme0n1", 00:06:51.028 "aliases": [ 00:06:51.028 "e14d393b-a5d7-4552-b68f-f5624a436e9f" 00:06:51.028 ], 00:06:51.028 "product_name": "NVMe disk", 00:06:51.028 "block_size": 4096, 00:06:51.028 "num_blocks": 38912, 00:06:51.028 "uuid": "e14d393b-a5d7-4552-b68f-f5624a436e9f", 00:06:51.028 "numa_id": 1, 00:06:51.028 "assigned_rate_limits": { 00:06:51.028 "rw_ios_per_sec": 0, 00:06:51.028 "rw_mbytes_per_sec": 0, 00:06:51.028 "r_mbytes_per_sec": 0, 00:06:51.028 "w_mbytes_per_sec": 0 00:06:51.028 }, 00:06:51.028 "claimed": false, 00:06:51.028 "zoned": false, 00:06:51.028 "supported_io_types": { 00:06:51.028 "read": true, 
00:06:51.028 "write": true, 00:06:51.028 "unmap": true, 00:06:51.028 "flush": true, 00:06:51.028 "reset": true, 00:06:51.028 "nvme_admin": true, 00:06:51.028 "nvme_io": true, 00:06:51.028 "nvme_io_md": false, 00:06:51.028 "write_zeroes": true, 00:06:51.028 "zcopy": false, 00:06:51.028 "get_zone_info": false, 00:06:51.028 "zone_management": false, 00:06:51.028 "zone_append": false, 00:06:51.028 "compare": true, 00:06:51.028 "compare_and_write": true, 00:06:51.028 "abort": true, 00:06:51.028 "seek_hole": false, 00:06:51.028 "seek_data": false, 00:06:51.028 "copy": true, 00:06:51.028 "nvme_iov_md": false 00:06:51.028 }, 00:06:51.028 "memory_domains": [ 00:06:51.028 { 00:06:51.028 "dma_device_id": "system", 00:06:51.028 "dma_device_type": 1 00:06:51.028 } 00:06:51.028 ], 00:06:51.028 "driver_specific": { 00:06:51.028 "nvme": [ 00:06:51.028 { 00:06:51.028 "trid": { 00:06:51.028 "trtype": "TCP", 00:06:51.028 "adrfam": "IPv4", 00:06:51.028 "traddr": "10.0.0.2", 00:06:51.028 "trsvcid": "4420", 00:06:51.028 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:06:51.028 }, 00:06:51.028 "ctrlr_data": { 00:06:51.028 "cntlid": 1, 00:06:51.028 "vendor_id": "0x8086", 00:06:51.028 "model_number": "SPDK bdev Controller", 00:06:51.028 "serial_number": "SPDK0", 00:06:51.028 "firmware_revision": "25.01", 00:06:51.028 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:51.028 "oacs": { 00:06:51.028 "security": 0, 00:06:51.028 "format": 0, 00:06:51.028 "firmware": 0, 00:06:51.028 "ns_manage": 0 00:06:51.028 }, 00:06:51.028 "multi_ctrlr": true, 00:06:51.028 "ana_reporting": false 00:06:51.028 }, 00:06:51.028 "vs": { 00:06:51.028 "nvme_version": "1.3" 00:06:51.028 }, 00:06:51.028 "ns_data": { 00:06:51.028 "id": 1, 00:06:51.028 "can_share": true 00:06:51.028 } 00:06:51.028 } 00:06:51.028 ], 00:06:51.028 "mp_policy": "active_passive" 00:06:51.028 } 00:06:51.028 } 00:06:51.028 ] 00:06:51.287 07:02:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # 
run_test_pid=1042684 00:06:51.287 07:02:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:06:51.287 07:02:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:06:51.287 Running I/O for 10 seconds... 00:06:52.222 Latency(us) 00:06:52.222 [2024-11-20T06:02:56.778Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:52.222 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:52.222 Nvme0n1 : 1.00 22582.00 88.21 0.00 0.00 0.00 0.00 0.00 00:06:52.222 [2024-11-20T06:02:56.778Z] =================================================================================================================== 00:06:52.222 [2024-11-20T06:02:56.778Z] Total : 22582.00 88.21 0.00 0.00 0.00 0.00 0.00 00:06:52.222 00:06:53.154 07:02:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 0fe69992-fdfc-46e7-9357-7537ae66779b 00:06:53.154 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:53.154 Nvme0n1 : 2.00 22721.00 88.75 0.00 0.00 0.00 0.00 0.00 00:06:53.154 [2024-11-20T06:02:57.710Z] =================================================================================================================== 00:06:53.154 [2024-11-20T06:02:57.710Z] Total : 22721.00 88.75 0.00 0.00 0.00 0.00 0.00 00:06:53.154 00:06:53.411 true 00:06:53.411 07:02:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0fe69992-fdfc-46e7-9357-7537ae66779b 00:06:53.411 07:02:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq 
-r '.[0].total_data_clusters' 00:06:53.670 07:02:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:06:53.670 07:02:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:06:53.670 07:02:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 1042684 00:06:54.236 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:54.236 Nvme0n1 : 3.00 22768.33 88.94 0.00 0.00 0.00 0.00 0.00 00:06:54.236 [2024-11-20T06:02:58.792Z] =================================================================================================================== 00:06:54.236 [2024-11-20T06:02:58.792Z] Total : 22768.33 88.94 0.00 0.00 0.00 0.00 0.00 00:06:54.236 00:06:55.172 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:55.172 Nvme0n1 : 4.00 22826.00 89.16 0.00 0.00 0.00 0.00 0.00 00:06:55.172 [2024-11-20T06:02:59.728Z] =================================================================================================================== 00:06:55.172 [2024-11-20T06:02:59.728Z] Total : 22826.00 89.16 0.00 0.00 0.00 0.00 0.00 00:06:55.172 00:06:56.567 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:56.567 Nvme0n1 : 5.00 22787.80 89.01 0.00 0.00 0.00 0.00 0.00 00:06:56.567 [2024-11-20T06:03:01.123Z] =================================================================================================================== 00:06:56.567 [2024-11-20T06:03:01.123Z] Total : 22787.80 89.01 0.00 0.00 0.00 0.00 0.00 00:06:56.567 00:06:57.503 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:57.503 Nvme0n1 : 6.00 22789.50 89.02 0.00 0.00 0.00 0.00 0.00 00:06:57.503 [2024-11-20T06:03:02.059Z] =================================================================================================================== 00:06:57.503 
[2024-11-20T06:03:02.059Z] Total : 22789.50 89.02 0.00 0.00 0.00 0.00 0.00 00:06:57.503 00:06:58.438 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:58.438 Nvme0n1 : 7.00 22784.71 89.00 0.00 0.00 0.00 0.00 0.00 00:06:58.438 [2024-11-20T06:03:02.994Z] =================================================================================================================== 00:06:58.438 [2024-11-20T06:03:02.994Z] Total : 22784.71 89.00 0.00 0.00 0.00 0.00 0.00 00:06:58.438 00:06:59.374 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:59.374 Nvme0n1 : 8.00 22767.88 88.94 0.00 0.00 0.00 0.00 0.00 00:06:59.374 [2024-11-20T06:03:03.930Z] =================================================================================================================== 00:06:59.374 [2024-11-20T06:03:03.930Z] Total : 22767.88 88.94 0.00 0.00 0.00 0.00 0.00 00:06:59.374 00:07:00.310 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:00.310 Nvme0n1 : 9.00 22752.56 88.88 0.00 0.00 0.00 0.00 0.00 00:07:00.310 [2024-11-20T06:03:04.866Z] =================================================================================================================== 00:07:00.310 [2024-11-20T06:03:04.866Z] Total : 22752.56 88.88 0.00 0.00 0.00 0.00 0.00 00:07:00.310 00:07:01.245 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:01.245 Nvme0n1 : 10.00 22746.70 88.85 0.00 0.00 0.00 0.00 0.00 00:07:01.245 [2024-11-20T06:03:05.801Z] =================================================================================================================== 00:07:01.245 [2024-11-20T06:03:05.801Z] Total : 22746.70 88.85 0.00 0.00 0.00 0.00 0.00 00:07:01.245 00:07:01.245 00:07:01.245 Latency(us) 00:07:01.245 [2024-11-20T06:03:05.801Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:01.245 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:07:01.245 Nvme0n1 : 10.00 22747.07 88.86 0.00 0.00 5623.90 3219.81 12594.31 00:07:01.245 [2024-11-20T06:03:05.801Z] =================================================================================================================== 00:07:01.245 [2024-11-20T06:03:05.801Z] Total : 22747.07 88.86 0.00 0.00 5623.90 3219.81 12594.31 00:07:01.245 { 00:07:01.245 "results": [ 00:07:01.245 { 00:07:01.245 "job": "Nvme0n1", 00:07:01.245 "core_mask": "0x2", 00:07:01.245 "workload": "randwrite", 00:07:01.245 "status": "finished", 00:07:01.245 "queue_depth": 128, 00:07:01.245 "io_size": 4096, 00:07:01.245 "runtime": 10.002653, 00:07:01.245 "iops": 22747.065203601484, 00:07:01.245 "mibps": 88.8557234515683, 00:07:01.245 "io_failed": 0, 00:07:01.245 "io_timeout": 0, 00:07:01.245 "avg_latency_us": 5623.897971360998, 00:07:01.245 "min_latency_us": 3219.8121739130434, 00:07:01.245 "max_latency_us": 12594.30956521739 00:07:01.245 } 00:07:01.245 ], 00:07:01.245 "core_count": 1 00:07:01.245 } 00:07:01.245 07:03:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1042668 00:07:01.245 07:03:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # '[' -z 1042668 ']' 00:07:01.245 07:03:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # kill -0 1042668 00:07:01.245 07:03:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@957 -- # uname 00:07:01.245 07:03:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:01.245 07:03:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1042668 00:07:01.245 07:03:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:07:01.245 07:03:05 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:07:01.245 07:03:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1042668' 00:07:01.245 killing process with pid 1042668 00:07:01.245 07:03:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@971 -- # kill 1042668 00:07:01.246 Received shutdown signal, test time was about 10.000000 seconds 00:07:01.246 00:07:01.246 Latency(us) 00:07:01.246 [2024-11-20T06:03:05.802Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:01.246 [2024-11-20T06:03:05.802Z] =================================================================================================================== 00:07:01.246 [2024-11-20T06:03:05.802Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:01.246 07:03:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@976 -- # wait 1042668 00:07:01.504 07:03:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:01.763 07:03:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:02.022 07:03:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0fe69992-fdfc-46e7-9357-7537ae66779b 00:07:02.022 07:03:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:07:02.022 07:03:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- 
# free_clusters=61 00:07:02.022 07:03:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:07:02.022 07:03:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:02.280 [2024-11-20 07:03:06.726629] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:07:02.280 07:03:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0fe69992-fdfc-46e7-9357-7537ae66779b 00:07:02.280 07:03:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:07:02.280 07:03:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0fe69992-fdfc-46e7-9357-7537ae66779b 00:07:02.280 07:03:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:02.280 07:03:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:02.280 07:03:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:02.280 07:03:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:02.280 07:03:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:02.280 
07:03:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:02.280 07:03:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:02.280 07:03:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:02.280 07:03:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0fe69992-fdfc-46e7-9357-7537ae66779b 00:07:02.539 request: 00:07:02.539 { 00:07:02.539 "uuid": "0fe69992-fdfc-46e7-9357-7537ae66779b", 00:07:02.539 "method": "bdev_lvol_get_lvstores", 00:07:02.539 "req_id": 1 00:07:02.539 } 00:07:02.539 Got JSON-RPC error response 00:07:02.539 response: 00:07:02.539 { 00:07:02.539 "code": -19, 00:07:02.539 "message": "No such device" 00:07:02.539 } 00:07:02.539 07:03:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:07:02.539 07:03:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:02.539 07:03:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:02.539 07:03:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:02.539 07:03:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:02.798 aio_bdev 00:07:02.798 07:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@87 -- # waitforbdev e14d393b-a5d7-4552-b68f-f5624a436e9f 00:07:02.798 07:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local bdev_name=e14d393b-a5d7-4552-b68f-f5624a436e9f 00:07:02.798 07:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:07:02.798 07:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local i 00:07:02.798 07:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:07:02.798 07:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:07:02.798 07:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:02.798 07:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b e14d393b-a5d7-4552-b68f-f5624a436e9f -t 2000 00:07:03.057 [ 00:07:03.057 { 00:07:03.057 "name": "e14d393b-a5d7-4552-b68f-f5624a436e9f", 00:07:03.057 "aliases": [ 00:07:03.057 "lvs/lvol" 00:07:03.057 ], 00:07:03.057 "product_name": "Logical Volume", 00:07:03.057 "block_size": 4096, 00:07:03.057 "num_blocks": 38912, 00:07:03.057 "uuid": "e14d393b-a5d7-4552-b68f-f5624a436e9f", 00:07:03.057 "assigned_rate_limits": { 00:07:03.057 "rw_ios_per_sec": 0, 00:07:03.057 "rw_mbytes_per_sec": 0, 00:07:03.057 "r_mbytes_per_sec": 0, 00:07:03.057 "w_mbytes_per_sec": 0 00:07:03.057 }, 00:07:03.057 "claimed": false, 00:07:03.057 "zoned": false, 00:07:03.057 "supported_io_types": { 00:07:03.057 "read": true, 00:07:03.057 "write": true, 00:07:03.057 "unmap": true, 00:07:03.057 "flush": false, 00:07:03.057 "reset": true, 00:07:03.057 
"nvme_admin": false, 00:07:03.057 "nvme_io": false, 00:07:03.057 "nvme_io_md": false, 00:07:03.057 "write_zeroes": true, 00:07:03.057 "zcopy": false, 00:07:03.057 "get_zone_info": false, 00:07:03.057 "zone_management": false, 00:07:03.057 "zone_append": false, 00:07:03.057 "compare": false, 00:07:03.057 "compare_and_write": false, 00:07:03.057 "abort": false, 00:07:03.057 "seek_hole": true, 00:07:03.057 "seek_data": true, 00:07:03.057 "copy": false, 00:07:03.057 "nvme_iov_md": false 00:07:03.057 }, 00:07:03.057 "driver_specific": { 00:07:03.057 "lvol": { 00:07:03.057 "lvol_store_uuid": "0fe69992-fdfc-46e7-9357-7537ae66779b", 00:07:03.057 "base_bdev": "aio_bdev", 00:07:03.057 "thin_provision": false, 00:07:03.057 "num_allocated_clusters": 38, 00:07:03.057 "snapshot": false, 00:07:03.057 "clone": false, 00:07:03.057 "esnap_clone": false 00:07:03.057 } 00:07:03.057 } 00:07:03.057 } 00:07:03.057 ] 00:07:03.057 07:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@909 -- # return 0 00:07:03.057 07:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:07:03.057 07:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0fe69992-fdfc-46e7-9357-7537ae66779b 00:07:03.315 07:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:07:03.316 07:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0fe69992-fdfc-46e7-9357-7537ae66779b 00:07:03.316 07:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:07:03.574 07:03:07 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:07:03.574 07:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete e14d393b-a5d7-4552-b68f-f5624a436e9f 00:07:03.574 07:03:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 0fe69992-fdfc-46e7-9357-7537ae66779b 00:07:03.832 07:03:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:04.090 07:03:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:04.090 00:07:04.090 real 0m15.627s 00:07:04.090 user 0m15.092s 00:07:04.090 sys 0m1.584s 00:07:04.090 07:03:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:04.090 07:03:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:04.090 ************************************ 00:07:04.090 END TEST lvs_grow_clean 00:07:04.090 ************************************ 00:07:04.090 07:03:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:07:04.090 07:03:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:07:04.090 07:03:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:04.090 07:03:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:04.349 ************************************ 
00:07:04.349 START TEST lvs_grow_dirty 00:07:04.349 ************************************ 00:07:04.349 07:03:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1127 -- # lvs_grow dirty 00:07:04.349 07:03:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:07:04.349 07:03:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:04.349 07:03:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:04.349 07:03:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:04.349 07:03:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:04.349 07:03:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:04.349 07:03:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:04.349 07:03:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:04.349 07:03:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:04.349 07:03:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:04.349 07:03:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:04.608 07:03:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=2bc780f0-98cb-46ff-9fa6-3e1937e919c6 00:07:04.608 07:03:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2bc780f0-98cb-46ff-9fa6-3e1937e919c6 00:07:04.608 07:03:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:04.865 07:03:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:04.865 07:03:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:04.865 07:03:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 2bc780f0-98cb-46ff-9fa6-3e1937e919c6 lvol 150 00:07:05.123 07:03:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=daa5170e-5ac7-4372-8afc-539563b3d8a0 00:07:05.123 07:03:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:05.123 07:03:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:05.123 [2024-11-20 07:03:09.617847] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 
102400 00:07:05.123 [2024-11-20 07:03:09.617898] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:05.123 true 00:07:05.123 07:03:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2bc780f0-98cb-46ff-9fa6-3e1937e919c6 00:07:05.123 07:03:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:05.381 07:03:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:05.381 07:03:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:05.640 07:03:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 daa5170e-5ac7-4372-8afc-539563b3d8a0 00:07:05.898 07:03:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:05.898 [2024-11-20 07:03:10.388124] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:05.898 07:03:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:06.156 07:03:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1045784 00:07:06.156 07:03:10 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:06.156 07:03:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:06.156 07:03:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1045784 /var/tmp/bdevperf.sock 00:07:06.156 07:03:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # '[' -z 1045784 ']' 00:07:06.156 07:03:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:06.156 07:03:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:06.156 07:03:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:06.156 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:06.156 07:03:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:06.156 07:03:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:06.156 [2024-11-20 07:03:10.640743] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 
00:07:06.156 [2024-11-20 07:03:10.640789] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1045784 ] 00:07:06.415 [2024-11-20 07:03:10.714265] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.415 [2024-11-20 07:03:10.755017] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:06.415 07:03:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:06.415 07:03:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@866 -- # return 0 00:07:06.415 07:03:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:06.673 Nvme0n1 00:07:06.673 07:03:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:06.932 [ 00:07:06.932 { 00:07:06.932 "name": "Nvme0n1", 00:07:06.932 "aliases": [ 00:07:06.932 "daa5170e-5ac7-4372-8afc-539563b3d8a0" 00:07:06.932 ], 00:07:06.932 "product_name": "NVMe disk", 00:07:06.932 "block_size": 4096, 00:07:06.932 "num_blocks": 38912, 00:07:06.932 "uuid": "daa5170e-5ac7-4372-8afc-539563b3d8a0", 00:07:06.932 "numa_id": 1, 00:07:06.932 "assigned_rate_limits": { 00:07:06.932 "rw_ios_per_sec": 0, 00:07:06.932 "rw_mbytes_per_sec": 0, 00:07:06.932 "r_mbytes_per_sec": 0, 00:07:06.932 "w_mbytes_per_sec": 0 00:07:06.932 }, 00:07:06.932 "claimed": false, 00:07:06.932 "zoned": false, 00:07:06.932 "supported_io_types": { 00:07:06.932 "read": true, 
00:07:06.932 "write": true, 00:07:06.932 "unmap": true, 00:07:06.932 "flush": true, 00:07:06.932 "reset": true, 00:07:06.932 "nvme_admin": true, 00:07:06.932 "nvme_io": true, 00:07:06.932 "nvme_io_md": false, 00:07:06.932 "write_zeroes": true, 00:07:06.932 "zcopy": false, 00:07:06.932 "get_zone_info": false, 00:07:06.932 "zone_management": false, 00:07:06.932 "zone_append": false, 00:07:06.932 "compare": true, 00:07:06.932 "compare_and_write": true, 00:07:06.932 "abort": true, 00:07:06.932 "seek_hole": false, 00:07:06.932 "seek_data": false, 00:07:06.932 "copy": true, 00:07:06.932 "nvme_iov_md": false 00:07:06.932 }, 00:07:06.932 "memory_domains": [ 00:07:06.932 { 00:07:06.932 "dma_device_id": "system", 00:07:06.932 "dma_device_type": 1 00:07:06.932 } 00:07:06.932 ], 00:07:06.932 "driver_specific": { 00:07:06.932 "nvme": [ 00:07:06.932 { 00:07:06.932 "trid": { 00:07:06.932 "trtype": "TCP", 00:07:06.932 "adrfam": "IPv4", 00:07:06.932 "traddr": "10.0.0.2", 00:07:06.932 "trsvcid": "4420", 00:07:06.932 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:06.932 }, 00:07:06.932 "ctrlr_data": { 00:07:06.932 "cntlid": 1, 00:07:06.932 "vendor_id": "0x8086", 00:07:06.932 "model_number": "SPDK bdev Controller", 00:07:06.932 "serial_number": "SPDK0", 00:07:06.932 "firmware_revision": "25.01", 00:07:06.932 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:06.932 "oacs": { 00:07:06.932 "security": 0, 00:07:06.932 "format": 0, 00:07:06.932 "firmware": 0, 00:07:06.932 "ns_manage": 0 00:07:06.932 }, 00:07:06.932 "multi_ctrlr": true, 00:07:06.932 "ana_reporting": false 00:07:06.932 }, 00:07:06.932 "vs": { 00:07:06.932 "nvme_version": "1.3" 00:07:06.932 }, 00:07:06.932 "ns_data": { 00:07:06.932 "id": 1, 00:07:06.932 "can_share": true 00:07:06.932 } 00:07:06.932 } 00:07:06.932 ], 00:07:06.932 "mp_policy": "active_passive" 00:07:06.932 } 00:07:06.932 } 00:07:06.932 ] 00:07:06.932 07:03:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # 
run_test_pid=1045967 00:07:06.932 07:03:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:06.932 07:03:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:06.932 Running I/O for 10 seconds... 00:07:08.309 Latency(us) 00:07:08.309 [2024-11-20T06:03:12.866Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:08.310 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:08.310 Nvme0n1 : 1.00 22567.00 88.15 0.00 0.00 0.00 0.00 0.00 00:07:08.310 [2024-11-20T06:03:12.866Z] =================================================================================================================== 00:07:08.310 [2024-11-20T06:03:12.866Z] Total : 22567.00 88.15 0.00 0.00 0.00 0.00 0.00 00:07:08.310 00:07:08.876 07:03:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 2bc780f0-98cb-46ff-9fa6-3e1937e919c6 00:07:09.134 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:09.134 Nvme0n1 : 2.00 22738.50 88.82 0.00 0.00 0.00 0.00 0.00 00:07:09.134 [2024-11-20T06:03:13.690Z] =================================================================================================================== 00:07:09.134 [2024-11-20T06:03:13.690Z] Total : 22738.50 88.82 0.00 0.00 0.00 0.00 0.00 00:07:09.134 00:07:09.134 true 00:07:09.134 07:03:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2bc780f0-98cb-46ff-9fa6-3e1937e919c6 00:07:09.134 07:03:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq 
-r '.[0].total_data_clusters'
00:07:09.392 07:03:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99
00:07:09.392 07:03:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 ))
00:07:09.392 07:03:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 1045967
00:07:09.959 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:07:09.959 Nvme0n1 : 3.00 22821.67 89.15 0.00 0.00 0.00 0.00 0.00
00:07:09.959 [2024-11-20T06:03:14.515Z] ===================================================================================================================
00:07:09.959 [2024-11-20T06:03:14.515Z] Total : 22821.67 89.15 0.00 0.00 0.00 0.00 0.00
00:07:09.959
00:07:10.895 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:07:10.895 Nvme0n1 : 4.00 22912.00 89.50 0.00 0.00 0.00 0.00 0.00
00:07:10.895 [2024-11-20T06:03:15.451Z] ===================================================================================================================
00:07:10.895 [2024-11-20T06:03:15.451Z] Total : 22912.00 89.50 0.00 0.00 0.00 0.00 0.00
00:07:10.895
00:07:12.270 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:07:12.270 Nvme0n1 : 5.00 22965.40 89.71 0.00 0.00 0.00 0.00 0.00
00:07:12.270 [2024-11-20T06:03:16.826Z] ===================================================================================================================
00:07:12.270 [2024-11-20T06:03:16.826Z] Total : 22965.40 89.71 0.00 0.00 0.00 0.00 0.00
00:07:12.270
00:07:13.206 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:07:13.206 Nvme0n1 : 6.00 22991.50 89.81 0.00 0.00 0.00 0.00 0.00
00:07:13.206 [2024-11-20T06:03:17.762Z] ===================================================================================================================
00:07:13.206
[2024-11-20T06:03:17.762Z] Total : 22991.50 89.81 0.00 0.00 0.00 0.00 0.00
00:07:13.206
00:07:14.143 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:07:14.143 Nvme0n1 : 7.00 23013.57 89.90 0.00 0.00 0.00 0.00 0.00
00:07:14.143 [2024-11-20T06:03:18.699Z] ===================================================================================================================
00:07:14.143 [2024-11-20T06:03:18.699Z] Total : 23013.57 89.90 0.00 0.00 0.00 0.00 0.00
00:07:14.143
00:07:15.238 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:07:15.238 Nvme0n1 : 8.00 23032.12 89.97 0.00 0.00 0.00 0.00 0.00
00:07:15.238 [2024-11-20T06:03:19.794Z] ===================================================================================================================
00:07:15.238 [2024-11-20T06:03:19.794Z] Total : 23032.12 89.97 0.00 0.00 0.00 0.00 0.00
00:07:15.238
00:07:16.175 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:07:16.175 Nvme0n1 : 9.00 23008.56 89.88 0.00 0.00 0.00 0.00 0.00
00:07:16.175 [2024-11-20T06:03:20.731Z] ===================================================================================================================
00:07:16.175 [2024-11-20T06:03:20.731Z] Total : 23008.56 89.88 0.00 0.00 0.00 0.00 0.00
00:07:16.175
00:07:17.111 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:07:17.111 Nvme0n1 : 10.00 23012.90 89.89 0.00 0.00 0.00 0.00 0.00
00:07:17.111 [2024-11-20T06:03:21.667Z] ===================================================================================================================
00:07:17.111 [2024-11-20T06:03:21.667Z] Total : 23012.90 89.89 0.00 0.00 0.00 0.00 0.00
00:07:17.111
00:07:17.111
00:07:17.111 Latency(us)
00:07:17.111 [2024-11-20T06:03:21.667Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:07:17.111 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:07:17.111 Nvme0n1 : 10.00 23016.47 89.91 0.00 0.00 5558.21 3362.28 14816.83
00:07:17.111 [2024-11-20T06:03:21.667Z] ===================================================================================================================
00:07:17.111 [2024-11-20T06:03:21.667Z] Total : 23016.47 89.91 0.00 0.00 5558.21 3362.28 14816.83
00:07:17.111 {
00:07:17.111 "results": [
00:07:17.111 {
00:07:17.111 "job": "Nvme0n1",
00:07:17.111 "core_mask": "0x2",
00:07:17.111 "workload": "randwrite",
00:07:17.111 "status": "finished",
00:07:17.111 "queue_depth": 128,
00:07:17.111 "io_size": 4096,
00:07:17.111 "runtime": 10.004008,
00:07:17.111 "iops": 23016.474996821275,
00:07:17.111 "mibps": 89.9081054563331,
00:07:17.111 "io_failed": 0,
00:07:17.111 "io_timeout": 0,
00:07:17.111 "avg_latency_us": 5558.212796491482,
00:07:17.111 "min_latency_us": 3362.2817391304347,
00:07:17.111 "max_latency_us": 14816.834782608696
00:07:17.111 }
00:07:17.111 ],
00:07:17.111 "core_count": 1
00:07:17.111 }
00:07:17.111 07:03:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1045784
00:07:17.111 07:03:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # '[' -z 1045784 ']'
00:07:17.111 07:03:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # kill -0 1045784
00:07:17.111 07:03:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@957 -- # uname
00:07:17.111 07:03:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:07:17.111 07:03:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1045784
00:07:17.111 07:03:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # process_name=reactor_1
00:07:17.111 07:03:21
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']'
00:07:17.111 07:03:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1045784'
killing process with pid 1045784
00:07:17.111 07:03:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@971 -- # kill 1045784
00:07:17.111 Received shutdown signal, test time was about 10.000000 seconds
00:07:17.111
00:07:17.111 Latency(us)
00:07:17.111 [2024-11-20T06:03:21.667Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:07:17.111 [2024-11-20T06:03:21.667Z] ===================================================================================================================
00:07:17.111 [2024-11-20T06:03:21.667Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:07:17.111 07:03:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@976 -- # wait 1045784
00:07:17.375 07:03:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:07:17.375 07:03:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:07:17.635 07:03:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2bc780f0-98cb-46ff-9fa6-3e1937e919c6
00:07:17.635 07:03:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters'
00:07:17.893 07:03:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 --
# free_clusters=61
00:07:17.893 07:03:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]]
00:07:17.893 07:03:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 1042169
00:07:17.893 07:03:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 1042169
00:07:17.893 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 1042169 Killed "${NVMF_APP[@]}" "$@"
00:07:17.893 07:03:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true
00:07:17.893 07:03:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1
00:07:17.893 07:03:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:07:17.893 07:03:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable
00:07:17.893 07:03:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x
00:07:17.893 07:03:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=1047755
00:07:17.893 07:03:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 1047755
00:07:17.893 07:03:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1
00:07:17.893 07:03:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # '[' -z 1047755 ']'
00:07:17.893 07:03:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:17.893 07:03:22
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # local max_retries=100
00:07:17.893 07:03:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:17.893 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:17.893 07:03:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # xtrace_disable
00:07:17.893 07:03:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x
00:07:18.152 [2024-11-20 07:03:22.401714] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... [2024-11-20 07:03:22.401775] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:07:18.152 [2024-11-20 07:03:22.479070] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:18.152 [2024-11-20 07:03:22.521966] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:07:18.152 [2024-11-20 07:03:22.522002] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:07:18.152 [2024-11-20 07:03:22.522009] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:07:18.152 [2024-11-20 07:03:22.522015] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:07:18.152 [2024-11-20 07:03:22.522021] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:07:18.152 [2024-11-20 07:03:22.522599] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:18.152 07:03:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:07:18.152 07:03:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@866 -- # return 0
00:07:18.152 07:03:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:07:18.152 07:03:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable
00:07:18.152 07:03:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x
00:07:18.152 07:03:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:07:18.152 07:03:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096
00:07:18.411 [2024-11-20 07:03:22.820844] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore
00:07:18.411 [2024-11-20 07:03:22.820936] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0
00:07:18.411 [2024-11-20 07:03:22.820969] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1
00:07:18.411 07:03:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev
00:07:18.411 07:03:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev daa5170e-5ac7-4372-8afc-539563b3d8a0
00:07:18.411 07:03:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local bdev_name=daa5170e-5ac7-4372-8afc-539563b3d8a0
00:07:18.411 07:03:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # local bdev_timeout=
00:07:18.411 07:03:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local i
00:07:18.411 07:03:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # [[ -z '' ]]
00:07:18.411 07:03:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # bdev_timeout=2000
00:07:18.411 07:03:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine
00:07:18.669 07:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b daa5170e-5ac7-4372-8afc-539563b3d8a0 -t 2000
00:07:18.927 [
00:07:18.927 {
00:07:18.927 "name": "daa5170e-5ac7-4372-8afc-539563b3d8a0",
00:07:18.927 "aliases": [
00:07:18.927 "lvs/lvol"
00:07:18.927 ],
00:07:18.927 "product_name": "Logical Volume",
00:07:18.927 "block_size": 4096,
00:07:18.927 "num_blocks": 38912,
00:07:18.927 "uuid": "daa5170e-5ac7-4372-8afc-539563b3d8a0",
00:07:18.928 "assigned_rate_limits": {
00:07:18.928 "rw_ios_per_sec": 0,
00:07:18.928 "rw_mbytes_per_sec": 0,
00:07:18.928 "r_mbytes_per_sec": 0,
00:07:18.928 "w_mbytes_per_sec": 0
00:07:18.928 },
00:07:18.928 "claimed": false,
00:07:18.928 "zoned": false,
00:07:18.928 "supported_io_types": {
00:07:18.928 "read": true,
00:07:18.928 "write": true,
00:07:18.928 "unmap": true,
00:07:18.928 "flush": false,
00:07:18.928 "reset": true,
00:07:18.928 "nvme_admin": false,
00:07:18.928 "nvme_io": false,
00:07:18.928 "nvme_io_md": false,
00:07:18.928 "write_zeroes": true,
00:07:18.928 "zcopy": false,
00:07:18.928 "get_zone_info": false,
00:07:18.928 "zone_management": false,
00:07:18.928 "zone_append":
false,
00:07:18.928 "compare": false,
00:07:18.928 "compare_and_write": false,
00:07:18.928 "abort": false,
00:07:18.928 "seek_hole": true,
00:07:18.928 "seek_data": true,
00:07:18.928 "copy": false,
00:07:18.928 "nvme_iov_md": false
00:07:18.928 },
00:07:18.928 "driver_specific": {
00:07:18.928 "lvol": {
00:07:18.928 "lvol_store_uuid": "2bc780f0-98cb-46ff-9fa6-3e1937e919c6",
00:07:18.928 "base_bdev": "aio_bdev",
00:07:18.928 "thin_provision": false,
00:07:18.928 "num_allocated_clusters": 38,
00:07:18.928 "snapshot": false,
00:07:18.928 "clone": false,
00:07:18.928 "esnap_clone": false
00:07:18.928 }
00:07:18.928 }
00:07:18.928 }
00:07:18.928 ]
00:07:18.928 07:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@909 -- # return 0
00:07:18.928 07:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2bc780f0-98cb-46ff-9fa6-3e1937e919c6
00:07:18.928 07:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters'
00:07:18.928 07:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 ))
00:07:18.928 07:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2bc780f0-98cb-46ff-9fa6-3e1937e919c6
00:07:18.928 07:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters'
00:07:19.186 07:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 ))
00:07:19.186 07:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
bdev_aio_delete aio_bdev
00:07:19.445 [2024-11-20 07:03:23.801775] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs
00:07:19.445 07:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2bc780f0-98cb-46ff-9fa6-3e1937e919c6
00:07:19.445 07:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0
00:07:19.445 07:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2bc780f0-98cb-46ff-9fa6-3e1937e919c6
00:07:19.445 07:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:07:19.445 07:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:07:19.445 07:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:07:19.445 07:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:07:19.445 07:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:07:19.445 07:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:07:19.445 07:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:07:19.445 07:03:23
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]]
00:07:19.445 07:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2bc780f0-98cb-46ff-9fa6-3e1937e919c6
00:07:19.703 request:
00:07:19.703 {
00:07:19.703 "uuid": "2bc780f0-98cb-46ff-9fa6-3e1937e919c6",
00:07:19.703 "method": "bdev_lvol_get_lvstores",
00:07:19.703 "req_id": 1
00:07:19.703 }
00:07:19.703 Got JSON-RPC error response
00:07:19.703 response:
00:07:19.703 {
00:07:19.703 "code": -19,
00:07:19.703 "message": "No such device"
00:07:19.703 }
00:07:19.703 07:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1
00:07:19.703 07:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:07:19.703 07:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:07:19.703 07:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:07:19.703 07:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096
00:07:19.703 aio_bdev
00:07:19.703 07:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev daa5170e-5ac7-4372-8afc-539563b3d8a0
00:07:19.703 07:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local bdev_name=daa5170e-5ac7-4372-8afc-539563b3d8a0
00:07:19.703 07:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty --
common/autotest_common.sh@902 -- # local bdev_timeout=
00:07:19.703 07:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local i
00:07:19.703 07:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # [[ -z '' ]]
00:07:19.703 07:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # bdev_timeout=2000
00:07:19.703 07:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine
00:07:19.961 07:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b daa5170e-5ac7-4372-8afc-539563b3d8a0 -t 2000
00:07:20.220 [
00:07:20.220 {
00:07:20.220 "name": "daa5170e-5ac7-4372-8afc-539563b3d8a0",
00:07:20.220 "aliases": [
00:07:20.220 "lvs/lvol"
00:07:20.220 ],
00:07:20.220 "product_name": "Logical Volume",
00:07:20.220 "block_size": 4096,
00:07:20.220 "num_blocks": 38912,
00:07:20.220 "uuid": "daa5170e-5ac7-4372-8afc-539563b3d8a0",
00:07:20.220 "assigned_rate_limits": {
00:07:20.220 "rw_ios_per_sec": 0,
00:07:20.220 "rw_mbytes_per_sec": 0,
00:07:20.220 "r_mbytes_per_sec": 0,
00:07:20.220 "w_mbytes_per_sec": 0
00:07:20.220 },
00:07:20.220 "claimed": false,
00:07:20.220 "zoned": false,
00:07:20.220 "supported_io_types": {
00:07:20.220 "read": true,
00:07:20.220 "write": true,
00:07:20.220 "unmap": true,
00:07:20.220 "flush": false,
00:07:20.220 "reset": true,
00:07:20.220 "nvme_admin": false,
00:07:20.220 "nvme_io": false,
00:07:20.220 "nvme_io_md": false,
00:07:20.220 "write_zeroes": true,
00:07:20.220 "zcopy": false,
00:07:20.220 "get_zone_info": false,
00:07:20.220 "zone_management": false,
00:07:20.220 "zone_append": false,
00:07:20.220 "compare": false,
00:07:20.220 "compare_and_write": false,
00:07:20.220 "abort": false,
00:07:20.220 "seek_hole": true,
00:07:20.220 "seek_data": true,
00:07:20.220 "copy": false,
00:07:20.220 "nvme_iov_md": false
00:07:20.220 },
00:07:20.220 "driver_specific": {
00:07:20.220 "lvol": {
00:07:20.220 "lvol_store_uuid": "2bc780f0-98cb-46ff-9fa6-3e1937e919c6",
00:07:20.220 "base_bdev": "aio_bdev",
00:07:20.220 "thin_provision": false,
00:07:20.220 "num_allocated_clusters": 38,
00:07:20.220 "snapshot": false,
00:07:20.220 "clone": false,
00:07:20.220 "esnap_clone": false
00:07:20.220 }
00:07:20.220 }
00:07:20.220 }
00:07:20.220 ]
00:07:20.220 07:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@909 -- # return 0
00:07:20.220 07:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2bc780f0-98cb-46ff-9fa6-3e1937e919c6
00:07:20.220 07:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters'
00:07:20.479 07:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 ))
00:07:20.479 07:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2bc780f0-98cb-46ff-9fa6-3e1937e919c6
00:07:20.479 07:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters'
00:07:20.479 07:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 ))
00:07:20.479 07:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete daa5170e-5ac7-4372-8afc-539563b3d8a0
00:07:20.737 07:03:25
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 2bc780f0-98cb-46ff-9fa6-3e1937e919c6
00:07:20.997 07:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev
00:07:21.256 07:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev
00:07:21.256
00:07:21.256 real 0m16.954s
00:07:21.256 user 0m43.838s
00:07:21.256 sys 0m3.706s
00:07:21.256 07:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1128 -- # xtrace_disable
00:07:21.256 07:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x
00:07:21.256 ************************************
00:07:21.256 END TEST lvs_grow_dirty
00:07:21.256 ************************************
00:07:21.256 07:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0
00:07:21.256 07:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # type=--id
00:07:21.256 07:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@811 -- # id=0
00:07:21.256 07:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # '[' --id = --pid ']'
00:07:21.256 07:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # find /dev/shm -name '*.0' -printf '%f\n'
00:07:21.256 07:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # shm_files=nvmf_trace.0
00:07:21.256 07:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # [[ -z nvmf_trace.0 ]]
00:07:21.256 07:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow
-- common/autotest_common.sh@822 -- # for n in $shm_files
00:07:21.256 07:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0
00:07:21.256 nvmf_trace.0
00:07:21.256 07:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # return 0
00:07:21.256 07:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini
00:07:21.256 07:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup
00:07:21.256 07:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync
00:07:21.256 07:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:07:21.256 07:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e
00:07:21.256 07:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20}
00:07:21.256 07:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:07:21.256 07:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:07:21.256 07:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e
00:07:21.256 07:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0
00:07:21.256 07:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 1047755 ']'
00:07:21.256 07:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 1047755
00:07:21.256 07:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # '[' -z 1047755 ']'
00:07:21.256 07:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # kill -0 1047755
00:07:21.256 07:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@957 -- # uname
00:07:21.256 07:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:07:21.256 07:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1047755
00:07:21.516 07:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:07:21.516 07:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:07:21.516 07:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1047755'
killing process with pid 1047755
00:07:21.516 07:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@971 -- # kill 1047755
00:07:21.516 07:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@976 -- # wait 1047755
00:07:21.516 07:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:07:21.516 07:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:07:21.516 07:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:07:21.516 07:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr
00:07:21.516 07:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save
00:07:21.516 07:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:07:21.516 07:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore
00:07:21.516 07:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:07:21.516 07:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- #
remove_spdk_ns
00:07:21.516 07:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:07:21.516 07:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:07:21.516 07:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:07:24.051 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:24.051 07:03:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:24.051 07:03:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lcov --version 00:07:24.051 07:03:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:24.051 07:03:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:24.051 07:03:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:24.051 07:03:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:24.051 07:03:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:24.051 07:03:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:07:24.051 07:03:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:07:24.051 07:03:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:07:24.051 07:03:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:07:24.051 07:03:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:07:24.051 07:03:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:07:24.051 07:03:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:07:24.051 07:03:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:24.051 07:03:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:07:24.051 07:03:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # 
: 1 00:07:24.051 07:03:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:24.051 07:03:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:24.051 07:03:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:07:24.051 07:03:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:07:24.051 07:03:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:24.051 07:03:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:07:24.051 07:03:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:07:24.051 07:03:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:07:24.051 07:03:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:07:24.051 07:03:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:24.051 07:03:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:07:24.051 07:03:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:07:24.051 07:03:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:24.051 07:03:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:24.051 07:03:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:07:24.051 07:03:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:24.051 07:03:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:24.051 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:24.051 --rc genhtml_branch_coverage=1 00:07:24.051 --rc genhtml_function_coverage=1 00:07:24.051 --rc genhtml_legend=1 00:07:24.051 --rc geninfo_all_blocks=1 00:07:24.051 --rc geninfo_unexecuted_blocks=1 00:07:24.051 00:07:24.051 ' 00:07:24.051 07:03:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:24.051 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:24.051 --rc genhtml_branch_coverage=1 00:07:24.051 --rc genhtml_function_coverage=1 00:07:24.051 --rc genhtml_legend=1 00:07:24.051 --rc geninfo_all_blocks=1 00:07:24.051 --rc geninfo_unexecuted_blocks=1 00:07:24.051 00:07:24.051 ' 00:07:24.051 07:03:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:24.051 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:24.051 --rc genhtml_branch_coverage=1 00:07:24.051 --rc genhtml_function_coverage=1 00:07:24.051 --rc genhtml_legend=1 00:07:24.051 --rc geninfo_all_blocks=1 00:07:24.051 --rc geninfo_unexecuted_blocks=1 00:07:24.051 00:07:24.051 ' 00:07:24.051 07:03:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:24.051 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:24.051 --rc genhtml_branch_coverage=1 00:07:24.051 --rc genhtml_function_coverage=1 00:07:24.051 --rc genhtml_legend=1 00:07:24.051 --rc geninfo_all_blocks=1 00:07:24.051 --rc geninfo_unexecuted_blocks=1 00:07:24.051 00:07:24.051 ' 00:07:24.051 07:03:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:24.051 07:03:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:07:24.051 07:03:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:24.051 07:03:28 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:24.051 07:03:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:24.051 07:03:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:24.051 07:03:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:24.051 07:03:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:24.051 07:03:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:24.051 07:03:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:24.051 07:03:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:24.051 07:03:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:24.051 07:03:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:07:24.051 07:03:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:07:24.051 07:03:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:24.051 07:03:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:24.051 07:03:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:24.051 07:03:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:24.051 07:03:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:24.051 07:03:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:07:24.051 07:03:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:24.052 07:03:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:24.052 07:03:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:24.052 07:03:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:24.052 07:03:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:24.052 07:03:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:24.052 07:03:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:07:24.052 07:03:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:24.052 07:03:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:07:24.052 07:03:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:24.052 07:03:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:24.052 07:03:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:24.052 07:03:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:07:24.052 07:03:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:24.052 07:03:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:24.052 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:24.052 07:03:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:24.052 07:03:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:24.052 07:03:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:24.052 07:03:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:24.052 07:03:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:24.052 07:03:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:07:24.052 07:03:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:24.052 07:03:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:24.052 07:03:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:24.052 07:03:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:24.052 07:03:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:24.052 07:03:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:24.052 07:03:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:24.052 07:03:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
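The `[: : integer expression expected` message above is harmless but instructive: `build_nvmf_app_args` runs `'[' '' -eq 1 ']'`, and `test`'s `-eq` requires integer operands, so an empty variable makes the comparison error out and evaluate false. A small sketch of the pitfall and a defensive default (the variable name here is illustrative, not the one in common.sh):

```shell
no_huge_flag=""                                # unset/empty, as in the log
if [ "$no_huge_flag" -eq 1 ] 2>/dev/null; then # errors: '' is not an integer
    echo "hugepages disabled"
fi
# Defaulting the empty value avoids the error entirely:
if [ "${no_huge_flag:-0}" -eq 1 ]; then
    echo "hugepages disabled"
else
    echo "hugepages enabled"
fi
```

The second form is why the script keeps working despite the message: a failed `-eq` test is simply false, and the rule it guards is skipped.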
00:07:24.052 07:03:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:24.052 07:03:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:24.052 07:03:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:07:24.052 07:03:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:30.620 07:03:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:30.620 07:03:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:07:30.620 07:03:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:30.620 07:03:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:30.620 07:03:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:30.620 07:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:30.620 07:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:30.620 07:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:07:30.620 07:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:30.620 07:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:07:30.620 07:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:07:30.620 07:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:07:30.620 07:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:07:30.620 07:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 
00:07:30.620 07:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:07:30.620 07:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:30.620 07:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:30.620 07:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:30.620 07:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:30.620 07:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:30.620 07:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:30.620 07:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:30.621 07:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:30.621 07:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:30.621 07:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:30.621 07:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:30.621 07:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:30.621 07:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:30.621 07:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:30.621 07:03:34 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:30.621 07:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:30.621 07:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:30.621 07:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:30.621 07:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:30.621 07:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:07:30.621 Found 0000:86:00.0 (0x8086 - 0x159b) 00:07:30.621 07:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:30.621 07:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:30.621 07:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:30.621 07:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:30.621 07:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:30.621 07:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:30.621 07:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:07:30.621 Found 0000:86:00.1 (0x8086 - 0x159b) 00:07:30.621 07:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:30.621 07:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:30.621 07:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:30.621 07:03:34 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:30.621 07:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:30.621 07:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:30.621 07:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:30.621 07:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:30.621 07:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:30.621 07:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:30.621 07:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:30.621 07:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:30.621 07:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:30.621 07:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:30.621 07:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:30.621 07:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:07:30.621 Found net devices under 0000:86:00.0: cvl_0_0 00:07:30.621 07:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:30.621 07:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:30.621 07:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:30.621 
07:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:30.621 07:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:30.621 07:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:30.621 07:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:30.621 07:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:30.621 07:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:07:30.621 Found net devices under 0000:86:00.1: cvl_0_1 00:07:30.621 07:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:30.621 07:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:30.621 07:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:07:30.621 07:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:30.621 07:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:30.621 07:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:30.621 07:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:30.621 07:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:30.621 07:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:30.621 07:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:30.621 07:03:34 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:30.621 07:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:30.621 07:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:30.621 07:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:30.621 07:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:30.621 07:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:30.621 07:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:30.621 07:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:30.621 07:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:30.621 07:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:30.621 07:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:30.621 07:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:30.621 07:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:30.621 07:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:30.621 07:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:30.621 07:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns 
exec cvl_0_0_ns_spdk ip link set lo up 00:07:30.621 07:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:30.621 07:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:30.621 07:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:30.621 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:30.621 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.432 ms 00:07:30.621 00:07:30.621 --- 10.0.0.2 ping statistics --- 00:07:30.621 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:30.621 rtt min/avg/max/mdev = 0.432/0.432/0.432/0.000 ms 00:07:30.621 07:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:30.621 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
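The `ipts` helper above tags every firewall rule it adds with `-m comment --comment 'SPDK_NVMF:…'`; the `iptr` teardown earlier in the log then removes exactly those rules by re-loading a save file with the tagged lines filtered out (`iptables-save | grep -v SPDK_NVMF | iptables-restore`). A sketch of that filtering step on canned rule text (the rules here are illustrative and no root is needed):

```shell
# Two saved rules: one pre-existing, one added (and tagged) by the test harness.
saved_rules='-A INPUT -i lo -j ACCEPT
-A INPUT -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment "SPDK_NVMF: test rule"'
# Teardown keeps only untagged rules, leaving the host firewall untouched.
printf '%s\n' "$saved_rules" | grep -v SPDK_NVMF
```

Tagging rules at insert time is what makes the cleanup safe to run on a shared CI node: nothing outside the `SPDK_NVMF` namespace is ever dropped.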
00:07:30.621 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.193 ms 00:07:30.621 00:07:30.621 --- 10.0.0.1 ping statistics --- 00:07:30.621 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:30.621 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:07:30.621 07:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:30.621 07:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:07:30.621 07:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:30.621 07:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:30.621 07:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:30.621 07:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:30.621 07:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:30.621 07:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:30.621 07:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:30.621 07:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:07:30.621 07:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:30.621 07:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:30.621 07:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:30.621 07:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=1051933 00:07:30.621 07:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@510 -- # waitforlisten 1051933 00:07:30.621 07:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:07:30.621 07:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # '[' -z 1051933 ']' 00:07:30.621 07:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:30.621 07:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:30.621 07:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:30.622 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:30.622 07:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:30.622 07:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:30.622 [2024-11-20 07:03:34.339382] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 00:07:30.622 [2024-11-20 07:03:34.339428] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:30.622 [2024-11-20 07:03:34.422630] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:30.622 [2024-11-20 07:03:34.467761] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:30.622 [2024-11-20 07:03:34.467797] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
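`waitforlisten` above blocks until the freshly started `nvmf_tgt` is accepting RPCs on `/var/tmp/spdk.sock`. A minimal sketch of that polling idea (the path, retry count, and the `touch` stand-in for the app creating its socket are all illustrative, not the autotest implementation):

```shell
rpc_addr=/tmp/demo_spdk.sock
max_retries=100
touch "$rpc_addr"        # stand-in: nvmf_tgt creates its RPC socket when ready
i=0
while [ "$i" -lt "$max_retries" ]; do
    if [ -e "$rpc_addr" ]; then
        echo "ready: $rpc_addr"
        break
    fi
    sleep 0.1
    i=$((i + 1))
done
rm -f "$rpc_addr"
```

The real helper additionally verifies the pid is still alive on each retry, so a crashed target fails fast instead of burning the full retry budget.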
00:07:30.622 [2024-11-20 07:03:34.467804] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:30.622 [2024-11-20 07:03:34.467810] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:30.622 [2024-11-20 07:03:34.467815] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:30.622 [2024-11-20 07:03:34.469345] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:30.622 [2024-11-20 07:03:34.469365] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:30.622 [2024-11-20 07:03:34.469481] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.622 [2024-11-20 07:03:34.469482] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:30.622 07:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:30.622 07:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@866 -- # return 0 00:07:30.622 07:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:30.622 07:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:30.622 07:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:30.622 07:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:30.622 07:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:07:30.622 07:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:30.622 07:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:30.622 07:03:34 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:30.622 07:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:07:30.622 07:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:30.622 07:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:30.622 07:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:30.622 07:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:30.622 07:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:30.622 07:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:30.622 [2024-11-20 07:03:34.618114] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:30.622 07:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:30.622 07:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:07:30.622 07:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:30.622 07:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:30.622 Malloc0 00:07:30.622 07:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:30.622 07:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:30.622 07:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:30.622 
07:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:30.622 07:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:30.622 07:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:30.622 07:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:30.622 07:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:30.622 07:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:30.622 07:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:30.622 07:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:30.622 07:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:30.622 [2024-11-20 07:03:34.673583] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:30.622 07:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:30.622 07:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=1051962 00:07:30.622 07:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:07:30.622 07:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:07:30.622 07:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=1051964 
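The `rpc_cmd` calls traced above (bdev_io_wait.sh@18-25) configure the target end-to-end before any I/O runs. As a hedged recipe, the same sequence issued directly against the RPC socket would look like the following; the socket path, NQN, serial, address, and sizes are taken from the log, and it of course requires a running `nvmf_tgt` started with `--wait-for-rpc`, so this is a sketch of the sequence rather than something runnable standalone:

```
# Recipe mirroring the rpc_cmd sequence in bdev_io_wait.sh (values from the log).
RPC="scripts/rpc.py -s /var/tmp/spdk.sock"

$RPC bdev_set_options -p 5 -c 1        # tiny bdev_io pool/cache, to exercise the io_wait path
$RPC framework_start_init              # finish deferred subsystem initialization
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_malloc_create 64 512 -b Malloc0    # 64 MiB bdev, 512 B blocks
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
```

The deliberately small pool passed to `bdev_set_options` is what makes bdev submissions hit the queue-io-wait path this test is named for.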
00:07:30.622 07:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:07:30.622 07:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:07:30.622 07:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:30.622 07:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:30.622 { 00:07:30.622 "params": { 00:07:30.622 "name": "Nvme$subsystem", 00:07:30.622 "trtype": "$TEST_TRANSPORT", 00:07:30.622 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:30.622 "adrfam": "ipv4", 00:07:30.622 "trsvcid": "$NVMF_PORT", 00:07:30.622 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:30.622 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:30.622 "hdgst": ${hdgst:-false}, 00:07:30.622 "ddgst": ${ddgst:-false} 00:07:30.622 }, 00:07:30.622 "method": "bdev_nvme_attach_controller" 00:07:30.622 } 00:07:30.622 EOF 00:07:30.622 )") 00:07:30.622 07:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:07:30.622 07:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:07:30.622 07:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=1051966 00:07:30.622 07:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:07:30.622 07:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:07:30.622 07:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:30.622 07:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:30.622 { 00:07:30.622 "params": { 00:07:30.622 
"name": "Nvme$subsystem", 00:07:30.622 "trtype": "$TEST_TRANSPORT", 00:07:30.622 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:30.622 "adrfam": "ipv4", 00:07:30.622 "trsvcid": "$NVMF_PORT", 00:07:30.622 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:30.622 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:30.622 "hdgst": ${hdgst:-false}, 00:07:30.622 "ddgst": ${ddgst:-false} 00:07:30.622 }, 00:07:30.622 "method": "bdev_nvme_attach_controller" 00:07:30.622 } 00:07:30.622 EOF 00:07:30.622 )") 00:07:30.622 07:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:07:30.622 07:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:07:30.622 07:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=1051969 00:07:30.622 07:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:07:30.622 07:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:07:30.622 07:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:07:30.622 07:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:07:30.622 07:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:30.622 07:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:07:30.622 07:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:07:30.622 07:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat 
<<-EOF 00:07:30.622 { 00:07:30.622 "params": { 00:07:30.622 "name": "Nvme$subsystem", 00:07:30.622 "trtype": "$TEST_TRANSPORT", 00:07:30.622 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:30.622 "adrfam": "ipv4", 00:07:30.623 "trsvcid": "$NVMF_PORT", 00:07:30.623 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:30.623 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:30.623 "hdgst": ${hdgst:-false}, 00:07:30.623 "ddgst": ${ddgst:-false} 00:07:30.623 }, 00:07:30.623 "method": "bdev_nvme_attach_controller" 00:07:30.623 } 00:07:30.623 EOF 00:07:30.623 )") 00:07:30.623 07:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:07:30.623 07:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:07:30.623 07:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:07:30.623 07:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:30.623 07:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:30.623 { 00:07:30.623 "params": { 00:07:30.623 "name": "Nvme$subsystem", 00:07:30.623 "trtype": "$TEST_TRANSPORT", 00:07:30.623 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:30.623 "adrfam": "ipv4", 00:07:30.623 "trsvcid": "$NVMF_PORT", 00:07:30.623 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:30.623 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:30.623 "hdgst": ${hdgst:-false}, 00:07:30.623 "ddgst": ${ddgst:-false} 00:07:30.623 }, 00:07:30.623 "method": "bdev_nvme_attach_controller" 00:07:30.623 } 00:07:30.623 EOF 00:07:30.623 )") 00:07:30.623 07:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:07:30.623 07:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 1051962 00:07:30.623 07:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:07:30.623 
07:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:07:30.623 07:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:07:30.623 07:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:07:30.623 07:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:07:30.623 07:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:30.623 "params": { 00:07:30.623 "name": "Nvme1", 00:07:30.623 "trtype": "tcp", 00:07:30.623 "traddr": "10.0.0.2", 00:07:30.623 "adrfam": "ipv4", 00:07:30.623 "trsvcid": "4420", 00:07:30.623 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:30.623 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:30.623 "hdgst": false, 00:07:30.623 "ddgst": false 00:07:30.623 }, 00:07:30.623 "method": "bdev_nvme_attach_controller" 00:07:30.623 }' 00:07:30.623 07:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
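The repeated `config+=("$(cat <<-EOF ...)")` / `jq .` / `IFS=,` traces above come from `gen_nvmf_target_json` in nvmf/common.sh, which builds one JSON fragment per subsystem and joins them. A minimal, runnable bash sketch of that assembly pattern (variable values are the ones visible in the log; the real helper also pipes the result through `jq`, omitted here):

```shell
#!/usr/bin/env bash
# Sketch of the heredoc-accumulation pattern from gen_nvmf_target_json:
# expand one JSON fragment per subsystem into an array, then comma-join.
TEST_TRANSPORT=tcp
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420

config=()
for subsystem in 1; do
  config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
done

# Comma-join the fragments, as the IFS=, / printf pair in the log does.
joined=$(IFS=,; printf '%s' "${config[*]}")
echo "$joined"
```

Because `hdgst`/`ddgst` are unset here, the `${hdgst:-false}` defaults produce the `"hdgst": false, "ddgst": false` seen in the expanded JSON above.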
00:07:30.623 07:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:07:30.623 07:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:30.623 "params": { 00:07:30.623 "name": "Nvme1", 00:07:30.623 "trtype": "tcp", 00:07:30.623 "traddr": "10.0.0.2", 00:07:30.623 "adrfam": "ipv4", 00:07:30.623 "trsvcid": "4420", 00:07:30.623 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:30.623 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:30.623 "hdgst": false, 00:07:30.623 "ddgst": false 00:07:30.623 }, 00:07:30.623 "method": "bdev_nvme_attach_controller" 00:07:30.623 }' 00:07:30.623 07:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:07:30.623 07:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:30.623 "params": { 00:07:30.623 "name": "Nvme1", 00:07:30.623 "trtype": "tcp", 00:07:30.623 "traddr": "10.0.0.2", 00:07:30.623 "adrfam": "ipv4", 00:07:30.623 "trsvcid": "4420", 00:07:30.623 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:30.623 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:30.623 "hdgst": false, 00:07:30.623 "ddgst": false 00:07:30.623 }, 00:07:30.623 "method": "bdev_nvme_attach_controller" 00:07:30.623 }' 00:07:30.623 07:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:07:30.623 07:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:30.623 "params": { 00:07:30.623 "name": "Nvme1", 00:07:30.623 "trtype": "tcp", 00:07:30.623 "traddr": "10.0.0.2", 00:07:30.623 "adrfam": "ipv4", 00:07:30.623 "trsvcid": "4420", 00:07:30.623 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:30.623 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:30.623 "hdgst": false, 00:07:30.623 "ddgst": false 00:07:30.623 }, 00:07:30.623 "method": "bdev_nvme_attach_controller" 00:07:30.623 }' 00:07:30.623 [2024-11-20 07:03:34.723610] Starting SPDK v25.01-pre git sha1 
6745f139b / DPDK 24.03.0 initialization... 00:07:30.623 [2024-11-20 07:03:34.723657] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:07:30.623 [2024-11-20 07:03:34.723764] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 00:07:30.623 [2024-11-20 07:03:34.723804] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:07:30.623 [2024-11-20 07:03:34.724476] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 00:07:30.623 [2024-11-20 07:03:34.724513] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:07:30.623 [2024-11-20 07:03:34.728124] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 
00:07:30.623 [2024-11-20 07:03:34.728169] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:07:30.623 [2024-11-20 07:03:34.904567] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.623 [2024-11-20 07:03:34.947675] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:07:30.623 [2024-11-20 07:03:35.010404] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.623 [2024-11-20 07:03:35.053628] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:07:30.623 [2024-11-20 07:03:35.103595] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.623 [2024-11-20 07:03:35.151587] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:07:30.623 [2024-11-20 07:03:35.164239] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.882 [2024-11-20 07:03:35.207342] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:07:30.882 Running I/O for 1 seconds... 00:07:30.882 Running I/O for 1 seconds... 00:07:30.882 Running I/O for 1 seconds... 00:07:30.882 Running I/O for 1 seconds... 
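Each of the four bdevperf jobs above is launched with `--json /dev/fd/63`: the script feeds the generated config through bash process substitution, which the child sees as that `/dev/fd` path. A runnable sketch of the pattern, with `cat` standing in for bdevperf so it runs anywhere (the real invocation adds `-m <coremask> -i <shm-id> -q 128 -o 4096 -w <workload> -t 1 -s 256`, as the log shows):

```shell
#!/usr/bin/env bash
# gen_config stands in for gen_nvmf_target_json; cat stands in for bdevperf.
gen_config() {
  printf '{"method": "bdev_nvme_attach_controller"}\n'
}

# Foreground form: the <(...) substitution is handed to the child as /dev/fd/N.
out=$(cat <(gen_config))
echo "$out"

# Background form, matching WRITE_PID=$! ... wait $WRITE_PID in the script.
cat <(gen_config) >/dev/null &
WRITE_PID=$!
wait "$WRITE_PID"
```

Running the four instances with disjoint core masks (0x10/0x20/0x40/0x80) and distinct `-i` shm IDs and `--file-prefix` values, as seen in the EAL parameter lines, is what lets them coexist as separate DPDK processes on one host.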
00:07:31.818 7845.00 IOPS, 30.64 MiB/s [2024-11-20T06:03:36.374Z] 10515.00 IOPS, 41.07 MiB/s 00:07:31.818 Latency(us) 00:07:31.818 [2024-11-20T06:03:36.374Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:31.818 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:07:31.818 Nvme1n1 : 1.02 7843.90 30.64 0.00 0.00 16142.00 5955.23 26556.33 00:07:31.818 [2024-11-20T06:03:36.374Z] =================================================================================================================== 00:07:31.818 [2024-11-20T06:03:36.374Z] Total : 7843.90 30.64 0.00 0.00 16142.00 5955.23 26556.33 00:07:31.818 00:07:31.818 Latency(us) 00:07:31.818 [2024-11-20T06:03:36.374Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:31.818 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:07:31.818 Nvme1n1 : 1.01 10561.25 41.25 0.00 0.00 12069.34 6781.55 20515.62 00:07:31.818 [2024-11-20T06:03:36.374Z] =================================================================================================================== 00:07:31.818 [2024-11-20T06:03:36.374Z] Total : 10561.25 41.25 0.00 0.00 12069.34 6781.55 20515.62 00:07:32.078 8461.00 IOPS, 33.05 MiB/s 00:07:32.078 Latency(us) 00:07:32.078 [2024-11-20T06:03:36.634Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:32.078 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:07:32.078 Nvme1n1 : 1.00 8571.84 33.48 0.00 0.00 14903.63 2322.25 37611.97 00:07:32.078 [2024-11-20T06:03:36.634Z] =================================================================================================================== 00:07:32.078 [2024-11-20T06:03:36.634Z] Total : 8571.84 33.48 0.00 0.00 14903.63 2322.25 37611.97 00:07:32.078 246120.00 IOPS, 961.41 MiB/s 00:07:32.078 Latency(us) 00:07:32.078 [2024-11-20T06:03:36.634Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:07:32.078 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:07:32.078 Nvme1n1 : 1.00 245737.77 959.91 0.00 0.00 517.54 229.73 1538.67 00:07:32.078 [2024-11-20T06:03:36.634Z] =================================================================================================================== 00:07:32.078 [2024-11-20T06:03:36.634Z] Total : 245737.77 959.91 0.00 0.00 517.54 229.73 1538.67 00:07:32.078 07:03:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 1051964 00:07:32.078 07:03:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 1051966 00:07:32.078 07:03:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 1051969 00:07:32.078 07:03:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:32.078 07:03:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:32.078 07:03:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:32.078 07:03:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:32.078 07:03:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:07:32.078 07:03:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:07:32.078 07:03:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:32.078 07:03:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:07:32.078 07:03:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:32.078 07:03:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:07:32.078 07:03:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@125 -- # for i in {1..20} 00:07:32.078 07:03:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:32.078 rmmod nvme_tcp 00:07:32.078 rmmod nvme_fabrics 00:07:32.078 rmmod nvme_keyring 00:07:32.078 07:03:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:32.078 07:03:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:07:32.078 07:03:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:07:32.078 07:03:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 1051933 ']' 00:07:32.078 07:03:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 1051933 00:07:32.078 07:03:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # '[' -z 1051933 ']' 00:07:32.078 07:03:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # kill -0 1051933 00:07:32.078 07:03:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@957 -- # uname 00:07:32.078 07:03:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:32.078 07:03:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1051933 00:07:32.337 07:03:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:32.337 07:03:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:32.337 07:03:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1051933' 00:07:32.337 killing process with pid 1051933 00:07:32.337 07:03:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@971 -- # kill 1051933 00:07:32.337 07:03:36 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@976 -- # wait 1051933 00:07:32.337 07:03:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:32.337 07:03:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:32.337 07:03:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:32.337 07:03:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:07:32.337 07:03:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:07:32.337 07:03:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:32.337 07:03:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:07:32.337 07:03:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:32.337 07:03:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:32.337 07:03:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:32.337 07:03:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:32.337 07:03:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:34.871 07:03:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:34.871 00:07:34.871 real 0m10.774s 00:07:34.871 user 0m16.305s 00:07:34.871 sys 0m5.987s 00:07:34.871 07:03:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:34.871 07:03:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:34.871 ************************************ 
00:07:34.871 END TEST nvmf_bdev_io_wait 00:07:34.871 ************************************ 00:07:34.871 07:03:38 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:07:34.871 07:03:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:07:34.871 07:03:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:34.871 07:03:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:34.871 ************************************ 00:07:34.871 START TEST nvmf_queue_depth 00:07:34.871 ************************************ 00:07:34.871 07:03:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:07:34.871 * Looking for test storage... 00:07:34.871 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:34.871 07:03:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:34.871 07:03:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lcov --version 00:07:34.871 07:03:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:34.871 07:03:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:34.871 07:03:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:34.871 07:03:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:34.871 07:03:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:34.871 07:03:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # 
IFS=.-: 00:07:34.871 07:03:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:07:34.871 07:03:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:07:34.871 07:03:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:07:34.871 07:03:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:07:34.871 07:03:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:07:34.871 07:03:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:07:34.871 07:03:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:34.871 07:03:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:07:34.871 07:03:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:07:34.871 07:03:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:34.871 07:03:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:34.871 07:03:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:07:34.871 07:03:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:07:34.871 07:03:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:34.871 07:03:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:07:34.871 07:03:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:07:34.871 07:03:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:07:34.871 07:03:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:07:34.872 07:03:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:34.872 07:03:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:07:34.872 07:03:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:07:34.872 07:03:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:34.872 07:03:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:34.872 07:03:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:07:34.872 07:03:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:34.872 07:03:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:34.872 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:34.872 --rc genhtml_branch_coverage=1 00:07:34.872 --rc genhtml_function_coverage=1 00:07:34.872 --rc genhtml_legend=1 00:07:34.872 --rc geninfo_all_blocks=1 00:07:34.872 --rc 
geninfo_unexecuted_blocks=1 00:07:34.872 00:07:34.872 ' 00:07:34.872 07:03:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:34.872 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:34.872 --rc genhtml_branch_coverage=1 00:07:34.872 --rc genhtml_function_coverage=1 00:07:34.872 --rc genhtml_legend=1 00:07:34.872 --rc geninfo_all_blocks=1 00:07:34.872 --rc geninfo_unexecuted_blocks=1 00:07:34.872 00:07:34.872 ' 00:07:34.872 07:03:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:34.872 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:34.872 --rc genhtml_branch_coverage=1 00:07:34.872 --rc genhtml_function_coverage=1 00:07:34.872 --rc genhtml_legend=1 00:07:34.872 --rc geninfo_all_blocks=1 00:07:34.872 --rc geninfo_unexecuted_blocks=1 00:07:34.872 00:07:34.872 ' 00:07:34.872 07:03:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:34.872 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:34.872 --rc genhtml_branch_coverage=1 00:07:34.872 --rc genhtml_function_coverage=1 00:07:34.872 --rc genhtml_legend=1 00:07:34.872 --rc geninfo_all_blocks=1 00:07:34.872 --rc geninfo_unexecuted_blocks=1 00:07:34.872 00:07:34.872 ' 00:07:34.872 07:03:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:34.872 07:03:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:07:34.872 07:03:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:34.872 07:03:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:34.872 07:03:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:34.872 07:03:39 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:34.872 07:03:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:34.872 07:03:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:34.872 07:03:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:34.872 07:03:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:34.872 07:03:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:34.872 07:03:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:34.872 07:03:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:07:34.872 07:03:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:07:34.872 07:03:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:34.872 07:03:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:34.872 07:03:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:34.872 07:03:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:34.872 07:03:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:34.872 07:03:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:07:34.872 07:03:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:07:34.872 07:03:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:34.872 07:03:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:34.872 07:03:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:34.872 07:03:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:34.872 07:03:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:34.872 07:03:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:07:34.872 07:03:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:34.872 07:03:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:07:34.872 07:03:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:34.872 07:03:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:34.872 07:03:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:34.872 07:03:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:34.872 07:03:39 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:34.872 07:03:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:34.872 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:34.872 07:03:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:34.872 07:03:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:34.872 07:03:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:34.872 07:03:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:07:34.872 07:03:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:07:34.872 07:03:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:07:34.872 07:03:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:07:34.872 07:03:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:34.872 07:03:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:34.872 07:03:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:34.872 07:03:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:34.872 07:03:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:34.872 07:03:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:34.872 07:03:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:34.872 07:03:39 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:34.872 07:03:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:34.872 07:03:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:34.872 07:03:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:07:34.872 07:03:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:41.440 07:03:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:41.440 07:03:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:07:41.440 07:03:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:41.440 07:03:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:41.440 07:03:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:41.440 07:03:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:41.440 07:03:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:41.440 07:03:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:07:41.440 07:03:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:41.440 07:03:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:07:41.440 07:03:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:07:41.440 07:03:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:07:41.440 07:03:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:07:41.440 07:03:44 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:07:41.440 07:03:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:07:41.440 07:03:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:41.440 07:03:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:41.440 07:03:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:41.440 07:03:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:41.440 07:03:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:41.440 07:03:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:41.440 07:03:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:41.440 07:03:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:41.440 07:03:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:41.440 07:03:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:41.440 07:03:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:41.440 07:03:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:41.440 07:03:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:41.440 07:03:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:41.440 07:03:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:41.440 07:03:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:41.440 07:03:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:41.440 07:03:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:41.440 07:03:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:41.440 07:03:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:07:41.440 Found 0000:86:00.0 (0x8086 - 0x159b) 00:07:41.440 07:03:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:41.440 07:03:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:41.440 07:03:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:41.440 07:03:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:41.440 07:03:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:41.440 07:03:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:41.440 07:03:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:07:41.440 Found 0000:86:00.1 (0x8086 - 0x159b) 00:07:41.440 07:03:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:41.440 07:03:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:41.440 07:03:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:07:41.440 07:03:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:41.440 07:03:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:41.440 07:03:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:41.440 07:03:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:41.440 07:03:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:41.440 07:03:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:41.440 07:03:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:41.440 07:03:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:41.440 07:03:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:41.440 07:03:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:41.440 07:03:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:41.440 07:03:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:41.440 07:03:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:07:41.440 Found net devices under 0000:86:00.0: cvl_0_0 00:07:41.440 07:03:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:41.440 07:03:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:41.440 07:03:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:41.440 07:03:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:41.440 07:03:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:41.440 07:03:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:41.440 07:03:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:41.440 07:03:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:41.440 07:03:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:07:41.440 Found net devices under 0000:86:00.1: cvl_0_1 00:07:41.440 07:03:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:41.440 07:03:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:41.440 07:03:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:07:41.440 07:03:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:41.440 07:03:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:41.440 07:03:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:41.440 07:03:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:41.440 07:03:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:41.440 07:03:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:41.440 07:03:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:41.440 
07:03:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:41.440 07:03:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:41.440 07:03:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:41.440 07:03:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:41.440 07:03:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:41.440 07:03:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:41.440 07:03:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:41.440 07:03:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:41.440 07:03:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:41.440 07:03:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:41.440 07:03:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:41.440 07:03:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:41.440 07:03:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:41.440 07:03:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:41.440 07:03:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:41.440 07:03:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:07:41.440 07:03:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:41.440 07:03:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:41.440 07:03:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:41.440 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:41.440 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.482 ms 00:07:41.440 00:07:41.440 --- 10.0.0.2 ping statistics --- 00:07:41.440 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:41.440 rtt min/avg/max/mdev = 0.482/0.482/0.482/0.000 ms 00:07:41.440 07:03:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:41.440 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:41.440 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.234 ms 00:07:41.440 00:07:41.440 --- 10.0.0.1 ping statistics --- 00:07:41.440 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:41.440 rtt min/avg/max/mdev = 0.234/0.234/0.234/0.000 ms 00:07:41.440 07:03:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:41.440 07:03:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:07:41.440 07:03:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:41.440 07:03:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:41.440 07:03:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:41.440 07:03:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:41.440 07:03:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:41.440 07:03:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:41.440 07:03:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:41.440 07:03:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:07:41.440 07:03:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:41.440 07:03:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:41.440 07:03:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:41.440 07:03:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=1055970 00:07:41.440 07:03:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 
1055970 00:07:41.440 07:03:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:07:41.440 07:03:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@833 -- # '[' -z 1055970 ']' 00:07:41.440 07:03:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:41.440 07:03:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:41.440 07:03:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:41.440 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:41.440 07:03:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:41.440 07:03:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:41.440 [2024-11-20 07:03:45.151010] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 00:07:41.440 [2024-11-20 07:03:45.151062] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:41.440 [2024-11-20 07:03:45.232625] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.440 [2024-11-20 07:03:45.273866] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:41.440 [2024-11-20 07:03:45.273902] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:07:41.440 [2024-11-20 07:03:45.273909] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:41.440 [2024-11-20 07:03:45.273916] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:41.440 [2024-11-20 07:03:45.273921] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:41.440 [2024-11-20 07:03:45.274495] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:41.440 07:03:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:41.440 07:03:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@866 -- # return 0 00:07:41.440 07:03:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:41.440 07:03:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:41.440 07:03:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:41.440 07:03:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:41.440 07:03:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:41.440 07:03:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:41.440 07:03:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:41.440 [2024-11-20 07:03:45.410897] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:41.440 07:03:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:41.440 07:03:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 
00:07:41.440 07:03:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:41.440 07:03:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:41.440 Malloc0 00:07:41.440 07:03:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:41.440 07:03:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:41.440 07:03:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:41.440 07:03:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:41.440 07:03:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:41.440 07:03:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:41.440 07:03:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:41.440 07:03:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:41.440 07:03:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:41.440 07:03:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:41.440 07:03:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:41.440 07:03:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:41.440 [2024-11-20 07:03:45.461256] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:41.440 07:03:45 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:41.440 07:03:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=1055989 00:07:41.440 07:03:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:07:41.440 07:03:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:41.440 07:03:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 1055989 /var/tmp/bdevperf.sock 00:07:41.440 07:03:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@833 -- # '[' -z 1055989 ']' 00:07:41.440 07:03:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:41.440 07:03:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:41.440 07:03:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:41.440 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:41.440 07:03:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:41.440 07:03:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:41.440 [2024-11-20 07:03:45.511727] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 
00:07:41.440 [2024-11-20 07:03:45.511768] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1055989 ] 00:07:41.440 [2024-11-20 07:03:45.586707] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.440 [2024-11-20 07:03:45.629188] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.440 07:03:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:41.440 07:03:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@866 -- # return 0 00:07:41.440 07:03:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:07:41.440 07:03:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:41.440 07:03:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:41.440 NVMe0n1 00:07:41.440 07:03:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:41.440 07:03:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:41.440 Running I/O for 10 seconds... 
00:07:43.751 11804.00 IOPS, 46.11 MiB/s [2024-11-20T06:03:49.243Z] 12079.00 IOPS, 47.18 MiB/s [2024-11-20T06:03:50.179Z] 12181.67 IOPS, 47.58 MiB/s [2024-11-20T06:03:51.119Z] 12237.75 IOPS, 47.80 MiB/s [2024-11-20T06:03:52.058Z] 12249.00 IOPS, 47.85 MiB/s [2024-11-20T06:03:52.995Z] 12263.17 IOPS, 47.90 MiB/s [2024-11-20T06:03:54.373Z] 12270.29 IOPS, 47.93 MiB/s [2024-11-20T06:03:55.310Z] 12280.38 IOPS, 47.97 MiB/s [2024-11-20T06:03:56.248Z] 12297.67 IOPS, 48.04 MiB/s [2024-11-20T06:03:56.248Z] 12290.00 IOPS, 48.01 MiB/s 00:07:51.692 Latency(us) 00:07:51.692 [2024-11-20T06:03:56.248Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:51.692 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:07:51.692 Verification LBA range: start 0x0 length 0x4000 00:07:51.692 NVMe0n1 : 10.05 12313.62 48.10 0.00 0.00 82879.10 10599.74 53568.56 00:07:51.692 [2024-11-20T06:03:56.248Z] =================================================================================================================== 00:07:51.692 [2024-11-20T06:03:56.248Z] Total : 12313.62 48.10 0.00 0.00 82879.10 10599.74 53568.56 00:07:51.692 { 00:07:51.692 "results": [ 00:07:51.692 { 00:07:51.692 "job": "NVMe0n1", 00:07:51.692 "core_mask": "0x1", 00:07:51.692 "workload": "verify", 00:07:51.692 "status": "finished", 00:07:51.692 "verify_range": { 00:07:51.692 "start": 0, 00:07:51.692 "length": 16384 00:07:51.692 }, 00:07:51.692 "queue_depth": 1024, 00:07:51.692 "io_size": 4096, 00:07:51.692 "runtime": 10.051636, 00:07:51.692 "iops": 12313.617405166682, 00:07:51.692 "mibps": 48.10006798893235, 00:07:51.692 "io_failed": 0, 00:07:51.692 "io_timeout": 0, 00:07:51.692 "avg_latency_us": 82879.10061928736, 00:07:51.692 "min_latency_us": 10599.735652173913, 00:07:51.692 "max_latency_us": 53568.556521739134 00:07:51.692 } 00:07:51.692 ], 00:07:51.692 "core_count": 1 00:07:51.692 } 00:07:51.692 07:03:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 
-- # killprocess 1055989 00:07:51.692 07:03:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@952 -- # '[' -z 1055989 ']' 00:07:51.692 07:03:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # kill -0 1055989 00:07:51.692 07:03:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@957 -- # uname 00:07:51.692 07:03:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:51.692 07:03:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1055989 00:07:51.692 07:03:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:51.692 07:03:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:51.692 07:03:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1055989' 00:07:51.692 killing process with pid 1055989 00:07:51.692 07:03:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@971 -- # kill 1055989 00:07:51.692 Received shutdown signal, test time was about 10.000000 seconds 00:07:51.692 00:07:51.692 Latency(us) 00:07:51.692 [2024-11-20T06:03:56.248Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:51.692 [2024-11-20T06:03:56.248Z] =================================================================================================================== 00:07:51.692 [2024-11-20T06:03:56.248Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:51.692 07:03:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@976 -- # wait 1055989 00:07:51.692 07:03:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:07:51.692 07:03:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # 
nvmftestfini 00:07:51.692 07:03:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:51.692 07:03:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:07:51.692 07:03:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:51.692 07:03:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:07:51.692 07:03:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:51.692 07:03:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:51.951 rmmod nvme_tcp 00:07:51.951 rmmod nvme_fabrics 00:07:51.951 rmmod nvme_keyring 00:07:51.951 07:03:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:51.951 07:03:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:07:51.951 07:03:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:07:51.951 07:03:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 1055970 ']' 00:07:51.951 07:03:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 1055970 00:07:51.951 07:03:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@952 -- # '[' -z 1055970 ']' 00:07:51.951 07:03:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # kill -0 1055970 00:07:51.951 07:03:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@957 -- # uname 00:07:51.951 07:03:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:51.951 07:03:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1055970 00:07:51.951 07:03:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # 
process_name=reactor_1 00:07:51.951 07:03:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:07:51.951 07:03:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1055970' 00:07:51.951 killing process with pid 1055970 00:07:51.951 07:03:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@971 -- # kill 1055970 00:07:51.951 07:03:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@976 -- # wait 1055970 00:07:52.210 07:03:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:52.210 07:03:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:52.210 07:03:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:52.210 07:03:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:07:52.210 07:03:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:07:52.210 07:03:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:52.210 07:03:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:07:52.210 07:03:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:52.211 07:03:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:52.211 07:03:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:52.211 07:03:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:52.211 07:03:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:54.116 07:03:58 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:54.116 00:07:54.116 real 0m19.638s 00:07:54.116 user 0m23.002s 00:07:54.116 sys 0m6.002s 00:07:54.116 07:03:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:54.116 07:03:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:54.116 ************************************ 00:07:54.116 END TEST nvmf_queue_depth 00:07:54.116 ************************************ 00:07:54.116 07:03:58 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:07:54.116 07:03:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:07:54.116 07:03:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:54.116 07:03:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:54.375 ************************************ 00:07:54.375 START TEST nvmf_target_multipath 00:07:54.375 ************************************ 00:07:54.375 07:03:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:07:54.375 * Looking for test storage... 
00:07:54.375 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:54.375 07:03:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:54.375 07:03:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lcov --version 00:07:54.375 07:03:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:54.375 07:03:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:54.375 07:03:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:54.375 07:03:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:54.375 07:03:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:54.375 07:03:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:07:54.375 07:03:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:07:54.375 07:03:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:07:54.375 07:03:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:07:54.375 07:03:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:07:54.375 07:03:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:07:54.375 07:03:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:07:54.375 07:03:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:54.375 07:03:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:07:54.375 07:03:58 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:07:54.375 07:03:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:54.375 07:03:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:54.375 07:03:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:07:54.375 07:03:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:07:54.375 07:03:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:54.375 07:03:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:07:54.375 07:03:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:07:54.375 07:03:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:07:54.375 07:03:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:07:54.375 07:03:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:54.375 07:03:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:07:54.375 07:03:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:07:54.375 07:03:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:54.375 07:03:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:54.375 07:03:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:07:54.375 07:03:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 
00:07:54.375 07:03:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:54.375 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:54.375 --rc genhtml_branch_coverage=1 00:07:54.375 --rc genhtml_function_coverage=1 00:07:54.375 --rc genhtml_legend=1 00:07:54.375 --rc geninfo_all_blocks=1 00:07:54.375 --rc geninfo_unexecuted_blocks=1 00:07:54.375 00:07:54.375 ' 00:07:54.375 07:03:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:54.375 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:54.375 --rc genhtml_branch_coverage=1 00:07:54.375 --rc genhtml_function_coverage=1 00:07:54.375 --rc genhtml_legend=1 00:07:54.375 --rc geninfo_all_blocks=1 00:07:54.375 --rc geninfo_unexecuted_blocks=1 00:07:54.375 00:07:54.375 ' 00:07:54.375 07:03:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:54.375 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:54.375 --rc genhtml_branch_coverage=1 00:07:54.375 --rc genhtml_function_coverage=1 00:07:54.375 --rc genhtml_legend=1 00:07:54.375 --rc geninfo_all_blocks=1 00:07:54.375 --rc geninfo_unexecuted_blocks=1 00:07:54.375 00:07:54.375 ' 00:07:54.375 07:03:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:54.375 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:54.375 --rc genhtml_branch_coverage=1 00:07:54.375 --rc genhtml_function_coverage=1 00:07:54.375 --rc genhtml_legend=1 00:07:54.375 --rc geninfo_all_blocks=1 00:07:54.376 --rc geninfo_unexecuted_blocks=1 00:07:54.376 00:07:54.376 ' 00:07:54.376 07:03:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:54.376 07:03:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 
-- # uname -s 00:07:54.376 07:03:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:54.376 07:03:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:54.376 07:03:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:54.376 07:03:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:54.376 07:03:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:54.376 07:03:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:54.376 07:03:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:54.376 07:03:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:54.376 07:03:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:54.376 07:03:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:54.376 07:03:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:07:54.376 07:03:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:07:54.376 07:03:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:54.376 07:03:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:54.376 07:03:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:54.376 07:03:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:54.376 07:03:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:54.376 07:03:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:07:54.376 07:03:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:54.376 07:03:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:54.376 07:03:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:54.376 07:03:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:54.376 07:03:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:54.376 07:03:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:54.376 07:03:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:07:54.376 07:03:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:54.376 07:03:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:07:54.376 07:03:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:54.376 07:03:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:54.376 07:03:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:54.376 07:03:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:54.376 07:03:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:54.376 07:03:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:54.376 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:54.376 07:03:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:54.376 07:03:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:54.376 07:03:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:54.376 07:03:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:07:54.376 07:03:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:54.376 07:03:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:07:54.376 07:03:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:54.376 07:03:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:07:54.376 07:03:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:54.376 07:03:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:54.376 07:03:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:54.376 07:03:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:54.376 07:03:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:54.376 07:03:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:54.376 07:03:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:54.376 07:03:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:54.376 07:03:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:54.376 07:03:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:54.376 07:03:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:07:54.376 07:03:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
common/autotest_common.sh@10 -- # set +x 00:08:00.950 07:04:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:00.950 07:04:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:08:00.950 07:04:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:00.950 07:04:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:00.950 07:04:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:00.950 07:04:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:00.950 07:04:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:00.950 07:04:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:08:00.950 07:04:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:00.950 07:04:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:08:00.950 07:04:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:08:00.950 07:04:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:08:00.950 07:04:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:08:00.950 07:04:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:08:00.950 07:04:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:08:00.950 07:04:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:00.950 07:04:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:00.950 07:04:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:00.950 07:04:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:00.950 07:04:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:00.950 07:04:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:00.950 07:04:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:00.950 07:04:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:00.950 07:04:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:00.950 07:04:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:00.950 07:04:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:00.950 07:04:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:00.950 07:04:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:00.950 07:04:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:00.950 07:04:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:00.950 07:04:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:00.950 07:04:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:00.950 07:04:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:00.950 07:04:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:00.950 07:04:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:00.950 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:00.950 07:04:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:00.950 07:04:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:00.950 07:04:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:00.950 07:04:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:00.950 07:04:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:00.950 07:04:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:00.950 07:04:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:00.950 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:00.950 07:04:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:00.950 07:04:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:00.950 07:04:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:00.950 07:04:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:00.950 07:04:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 
00:08:00.950 07:04:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:00.950 07:04:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:00.950 07:04:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:00.950 07:04:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:00.950 07:04:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:00.950 07:04:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:00.950 07:04:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:00.950 07:04:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:00.950 07:04:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:00.950 07:04:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:00.950 07:04:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:00.950 Found net devices under 0000:86:00.0: cvl_0_0 00:08:00.950 07:04:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:00.950 07:04:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:00.950 07:04:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:00.950 07:04:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:00.950 07:04:04 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:00.950 07:04:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:00.950 07:04:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:00.950 07:04:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:00.950 07:04:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:00.950 Found net devices under 0000:86:00.1: cvl_0_1 00:08:00.951 07:04:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:00.951 07:04:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:00.951 07:04:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:08:00.951 07:04:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:00.951 07:04:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:00.951 07:04:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:00.951 07:04:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:00.951 07:04:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:00.951 07:04:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:00.951 07:04:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:00.951 07:04:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 
00:08:00.951 07:04:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:00.951 07:04:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:00.951 07:04:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:00.951 07:04:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:00.951 07:04:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:00.951 07:04:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:00.951 07:04:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:00.951 07:04:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:00.951 07:04:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:00.951 07:04:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:00.951 07:04:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:00.951 07:04:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:00.951 07:04:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:00.951 07:04:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:00.951 07:04:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip 
link set lo up 00:08:00.951 07:04:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:00.951 07:04:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:00.951 07:04:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:00.951 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:00.951 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.403 ms 00:08:00.951 00:08:00.951 --- 10.0.0.2 ping statistics --- 00:08:00.951 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:00.951 rtt min/avg/max/mdev = 0.403/0.403/0.403/0.000 ms 00:08:00.951 07:04:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:00.951 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:00.951 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.203 ms 00:08:00.951 00:08:00.951 --- 10.0.0.1 ping statistics --- 00:08:00.951 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:00.951 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:08:00.951 07:04:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:00.951 07:04:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:08:00.951 07:04:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:00.951 07:04:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:00.951 07:04:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:00.951 07:04:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:00.951 07:04:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:00.951 07:04:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:00.951 07:04:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:00.951 07:04:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:08:00.951 07:04:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:08:00.951 only one NIC for nvmf test 00:08:00.951 07:04:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:08:00.951 07:04:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:00.951 07:04:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:08:00.951 07:04:04 
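Condensed from the `nvmf_tcp_init` trace above, this is a sketch of the two-port loopback topology the test builds: one E810 port is moved into a private network namespace to act as the NVMe-oF target, the other stays in the root namespace as the initiator, and reachability is verified with a ping in each direction. Commands are routed through a `run()` echo wrapper (an addition for safety, not in common.sh) so the sequence can be reviewed without root; interface names and 10.0.0.0/24 addresses are the ones in this log.

```shell
#!/usr/bin/env bash
set -euo pipefail

# Dry-run sketch of nvmf_tcp_init as traced above.
run() { echo "+ $*"; }       # swap body for: "$@"  to actually execute

TARGET_IF=cvl_0_0            # becomes the NVMe-oF target side
INITIATOR_IF=cvl_0_1         # stays in the root namespace
NS=cvl_0_0_ns_spdk           # private namespace for the target

run ip -4 addr flush "$TARGET_IF"
run ip -4 addr flush "$INITIATOR_IF"

run ip netns add "$NS"
run ip link set "$TARGET_IF" netns "$NS"          # hide target port in the ns

run ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"

run ip link set "$INITIATOR_IF" up
run ip netns exec "$NS" ip link set "$TARGET_IF" up
run ip netns exec "$NS" ip link set lo up

# Open the NVMe/TCP listener port, then verify both directions
run iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2
run ip netns exec "$NS" ping -c 1 10.0.0.1
```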
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:00.951 07:04:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:08:00.951 07:04:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:00.951 07:04:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:00.951 rmmod nvme_tcp 00:08:00.951 rmmod nvme_fabrics 00:08:00.951 rmmod nvme_keyring 00:08:00.951 07:04:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:00.951 07:04:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:08:00.951 07:04:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:08:00.951 07:04:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:08:00.951 07:04:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:00.951 07:04:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:00.951 07:04:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:00.951 07:04:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:08:00.951 07:04:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:08:00.951 07:04:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:00.951 07:04:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:08:00.951 07:04:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:00.951 07:04:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@302 -- # remove_spdk_ns 00:08:00.951 07:04:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:00.951 07:04:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:00.951 07:04:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:02.860 07:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:02.860 07:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:08:02.860 07:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:08:02.860 07:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:02.860 07:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:08:02.860 07:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:02.860 07:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:08:02.860 07:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:02.860 07:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:02.860 07:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:02.860 07:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:08:02.860 07:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:08:02.860 07:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:08:02.860 07:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' 
'' == iso ']' 00:08:02.860 07:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:02.860 07:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:02.860 07:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:08:02.860 07:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:08:02.860 07:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:02.860 07:04:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:08:02.860 07:04:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:02.860 07:04:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:02.860 07:04:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:02.860 07:04:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:02.860 07:04:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:02.860 07:04:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:02.860 00:08:02.860 real 0m8.348s 00:08:02.860 user 0m1.910s 00:08:02.860 sys 0m4.455s 00:08:02.860 07:04:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:02.860 07:04:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:02.860 ************************************ 00:08:02.860 END TEST nvmf_target_multipath 00:08:02.860 ************************************ 00:08:02.860 07:04:07 nvmf_tcp.nvmf_target_core 
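The teardown just traced (`nvmftestfini` into `nvmf_tcp_fini`) can be sketched as below: retry unloading the NVMe modules, strip only the SPDK-tagged iptables rules, then dismantle the namespace. As with the setup sketch, `run()` echoes instead of executing (a safety wrapper added here, not in common.sh); swap it for `"$@"` to apply for real.

```shell
#!/usr/bin/env bash
set -euo pipefail

# Dry-run sketch of the teardown traced above.
run() { echo "+ $*"; }

NS=cvl_0_0_ns_spdk
INITIATOR_IF=cvl_0_1

sync
# Modules may still be referenced by in-flight I/O; retry a few times.
for i in {1..20}; do
    if run modprobe -v -r nvme-tcp && run modprobe -v -r nvme-fabrics; then
        break
    fi
    sleep 1
done

# Drop only the SPDK-tagged rules (they carry an SPDK_NVMF comment),
# leaving the rest of the host's ruleset untouched.
run 'iptables-save | grep -v SPDK_NVMF | iptables-restore'

run ip netns delete "$NS"            # returns the target port to the host
run ip -4 addr flush "$INITIATOR_IF"
```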
-- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:08:02.860 07:04:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:08:02.860 07:04:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:02.860 07:04:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:02.860 ************************************ 00:08:02.860 START TEST nvmf_zcopy 00:08:02.860 ************************************ 00:08:02.860 07:04:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:08:02.860 * Looking for test storage... 00:08:02.860 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:02.860 07:04:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:02.860 07:04:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lcov --version 00:08:02.860 07:04:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:02.860 07:04:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:02.860 07:04:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:02.860 07:04:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:02.860 07:04:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:02.860 07:04:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:08:02.860 07:04:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:08:02.860 07:04:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 
00:08:02.860 07:04:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:08:02.860 07:04:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:08:02.860 07:04:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:08:02.860 07:04:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:08:02.860 07:04:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:02.860 07:04:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:08:02.860 07:04:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:08:02.860 07:04:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:02.860 07:04:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:02.860 07:04:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:08:02.860 07:04:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:08:02.860 07:04:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:02.860 07:04:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:08:02.860 07:04:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:08:02.860 07:04:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:08:02.860 07:04:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:08:02.860 07:04:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:02.860 07:04:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:08:02.860 07:04:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:08:02.860 07:04:07 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:02.860 07:04:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:02.860 07:04:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:08:02.860 07:04:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:02.860 07:04:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:02.860 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:02.860 --rc genhtml_branch_coverage=1 00:08:02.860 --rc genhtml_function_coverage=1 00:08:02.860 --rc genhtml_legend=1 00:08:02.860 --rc geninfo_all_blocks=1 00:08:02.860 --rc geninfo_unexecuted_blocks=1 00:08:02.860 00:08:02.860 ' 00:08:02.860 07:04:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:02.860 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:02.860 --rc genhtml_branch_coverage=1 00:08:02.860 --rc genhtml_function_coverage=1 00:08:02.860 --rc genhtml_legend=1 00:08:02.860 --rc geninfo_all_blocks=1 00:08:02.860 --rc geninfo_unexecuted_blocks=1 00:08:02.860 00:08:02.860 ' 00:08:02.860 07:04:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:02.860 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:02.860 --rc genhtml_branch_coverage=1 00:08:02.860 --rc genhtml_function_coverage=1 00:08:02.860 --rc genhtml_legend=1 00:08:02.860 --rc geninfo_all_blocks=1 00:08:02.860 --rc geninfo_unexecuted_blocks=1 00:08:02.860 00:08:02.860 ' 00:08:02.860 07:04:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:02.860 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:02.860 --rc genhtml_branch_coverage=1 00:08:02.860 --rc 
genhtml_function_coverage=1 00:08:02.860 --rc genhtml_legend=1 00:08:02.860 --rc geninfo_all_blocks=1 00:08:02.860 --rc geninfo_unexecuted_blocks=1 00:08:02.860 00:08:02.860 ' 00:08:02.860 07:04:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:02.860 07:04:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:08:02.860 07:04:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:02.860 07:04:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:02.860 07:04:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:02.860 07:04:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:02.860 07:04:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:02.860 07:04:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:02.860 07:04:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:02.860 07:04:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:02.860 07:04:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:02.860 07:04:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:02.860 07:04:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:08:02.860 07:04:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:08:02.860 07:04:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:02.861 07:04:07 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:02.861 07:04:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:02.861 07:04:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:02.861 07:04:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:02.861 07:04:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:08:02.861 07:04:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:02.861 07:04:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:02.861 07:04:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:02.861 07:04:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:02.861 07:04:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:02.861 07:04:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:02.861 07:04:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:08:02.861 07:04:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:02.861 07:04:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:08:02.861 07:04:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:02.861 07:04:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:02.861 07:04:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:02.861 07:04:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:02.861 07:04:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:02.861 07:04:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:02.861 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:02.861 07:04:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:02.861 07:04:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:02.861 07:04:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:02.861 07:04:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:08:02.861 07:04:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:02.861 07:04:07 
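The `[: : integer expression expected` message above is a real (and harmless, since the test still proceeds) shell error recorded in the log: common.sh line 33 feeds an empty string to the numeric test `[ '' -eq 1 ]`. Shown here as a hypothetical defensive pattern rather than the upstream fix, the usual remedies are to default the variable before the comparison or to guard it for non-emptiness first:

```shell
#!/usr/bin/env bash
# "[: : integer expression expected" happens when an empty or unset
# variable reaches a numeric test:
flag=""
if [ "$flag" -eq 1 ] 2>/dev/null; then      # errors without 2>/dev/null
    echo "never reached"
fi

# Defensive alternatives that stay silent on empty input:
if [ "${flag:-0}" -eq 1 ]; then             # default empty to 0
    echo "set"
else
    echo "unset"
fi
if [[ -n $flag && $flag -eq 1 ]]; then      # check non-empty first
    echo "set"
fi
```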
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:02.861 07:04:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:02.861 07:04:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:02.861 07:04:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:02.861 07:04:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:02.861 07:04:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:02.861 07:04:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:02.861 07:04:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:02.861 07:04:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:02.861 07:04:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:08:02.861 07:04:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:09.432 07:04:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:09.432 07:04:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:08:09.432 07:04:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:09.432 07:04:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:09.432 07:04:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:09.432 07:04:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:09.432 07:04:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:09.432 07:04:12 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:08:09.432 07:04:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:09.432 07:04:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:08:09.432 07:04:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:08:09.432 07:04:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:08:09.432 07:04:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:08:09.432 07:04:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:08:09.432 07:04:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:08:09.432 07:04:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:09.432 07:04:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:09.432 07:04:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:09.432 07:04:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:09.432 07:04:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:09.432 07:04:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:09.432 07:04:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:09.432 07:04:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:09.432 07:04:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:09.432 07:04:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:09.432 07:04:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:09.432 07:04:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:09.432 07:04:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:09.432 07:04:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:09.432 07:04:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:09.432 07:04:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:09.432 07:04:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:09.432 07:04:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:09.432 07:04:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:09.432 07:04:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:09.432 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:09.432 07:04:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:09.432 07:04:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:09.432 07:04:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:09.432 07:04:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:09.432 07:04:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:09.432 07:04:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:09.432 07:04:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 
-- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:09.432 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:09.432 07:04:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:09.432 07:04:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:09.432 07:04:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:09.432 07:04:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:09.432 07:04:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:09.432 07:04:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:09.432 07:04:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:09.432 07:04:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:09.432 07:04:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:09.432 07:04:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:09.432 07:04:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:09.432 07:04:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:09.432 07:04:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:09.432 07:04:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:09.432 07:04:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:09.432 07:04:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:09.432 Found net devices under 0000:86:00.0: cvl_0_0 00:08:09.432 07:04:12 
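The classification steps traced above (common.sh @320-377) bucket NICs by PCI vendor:device ID before discovery: 8086:159b is an Intel E810 port (the two found in this log), while other IDs land in the x722 or Mellanox arrays. A minimal sketch of that bucketing, with the ID lists abridged to the ones visible in this trace plus a couple of representative Mellanox entries:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Classify a "vendor:device" PCI ID the way common.sh sorts NICs into
# its e810 / x722 / mlx arrays (abridged ID lists).
classify_nic() {
    case $1 in
        0x8086:0x1592|0x8086:0x159b) echo e810 ;;    # Intel E810 (this log: 0x159b)
        0x8086:0x37d2)               echo x722 ;;    # Intel X722
        0x15b3:0x1017|0x15b3:0x1019) echo mlx  ;;    # Mellanox ConnectX
        *)                           echo unknown ;;
    esac
}

classify_nic 0x8086:0x159b      # the ports found in this log -> e810
classify_nic 0x15b3:0x1017      # -> mlx
```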
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:09.432 07:04:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:09.432 07:04:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:09.432 07:04:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:09.432 07:04:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:09.432 07:04:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:09.432 07:04:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:09.432 07:04:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:09.432 07:04:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:09.432 Found net devices under 0000:86:00.1: cvl_0_1 00:08:09.432 07:04:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:09.433 07:04:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:09.433 07:04:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:08:09.433 07:04:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:09.433 07:04:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:09.433 07:04:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:09.433 07:04:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:09.433 07:04:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:09.433 07:04:12 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:09.433 07:04:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:09.433 07:04:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:09.433 07:04:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:09.433 07:04:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:09.433 07:04:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:09.433 07:04:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:09.433 07:04:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:09.433 07:04:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:09.433 07:04:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:09.433 07:04:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:09.433 07:04:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:09.433 07:04:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:09.433 07:04:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:09.433 07:04:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:09.433 07:04:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:09.433 07:04:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:09.433 07:04:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:09.433 07:04:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:09.433 07:04:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:09.433 07:04:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:09.433 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:09.433 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.380 ms 00:08:09.433 00:08:09.433 --- 10.0.0.2 ping statistics --- 00:08:09.433 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:09.433 rtt min/avg/max/mdev = 0.380/0.380/0.380/0.000 ms 00:08:09.433 07:04:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:09.433 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:09.433 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.242 ms 00:08:09.433 00:08:09.433 --- 10.0.0.1 ping statistics --- 00:08:09.433 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:09.433 rtt min/avg/max/mdev = 0.242/0.242/0.242/0.000 ms 00:08:09.433 07:04:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:09.433 07:04:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:08:09.433 07:04:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:09.433 07:04:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:09.433 07:04:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:09.433 07:04:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:09.433 07:04:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:09.433 07:04:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:09.433 07:04:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:09.433 07:04:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:08:09.433 07:04:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:09.433 07:04:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:09.433 07:04:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:09.433 07:04:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=1064891 00:08:09.433 07:04:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 1064891 00:08:09.433 07:04:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns 
exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:09.433 07:04:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@833 -- # '[' -z 1064891 ']' 00:08:09.433 07:04:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:09.433 07:04:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:09.433 07:04:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:09.433 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:09.433 07:04:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:09.433 07:04:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:09.433 [2024-11-20 07:04:13.333020] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 00:08:09.433 [2024-11-20 07:04:13.333070] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:09.433 [2024-11-20 07:04:13.413770] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:09.433 [2024-11-20 07:04:13.456104] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:09.433 [2024-11-20 07:04:13.456135] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:08:09.433 [2024-11-20 07:04:13.456142] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:09.433 [2024-11-20 07:04:13.456149] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:09.433 [2024-11-20 07:04:13.456154] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:09.433 [2024-11-20 07:04:13.456598] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:09.693 07:04:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:09.693 07:04:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@866 -- # return 0 00:08:09.693 07:04:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:09.693 07:04:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:09.693 07:04:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:09.693 07:04:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:09.693 07:04:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:08:09.693 07:04:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:08:09.693 07:04:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.693 07:04:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:09.693 [2024-11-20 07:04:14.210433] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:09.693 07:04:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.693 07:04:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:09.693 07:04:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.693 07:04:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:09.693 07:04:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.693 07:04:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:09.693 07:04:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.693 07:04:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:09.693 [2024-11-20 07:04:14.226601] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:09.693 07:04:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.693 07:04:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:09.693 07:04:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.693 07:04:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:09.693 07:04:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.693 07:04:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:08:09.693 07:04:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.693 07:04:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:09.952 malloc0 00:08:09.952 07:04:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:08:09.952 07:04:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:08:09.952 07:04:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.952 07:04:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:09.952 07:04:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.953 07:04:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:08:09.953 07:04:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:08:09.953 07:04:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:08:09.953 07:04:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:08:09.953 07:04:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:09.953 07:04:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:09.953 { 00:08:09.953 "params": { 00:08:09.953 "name": "Nvme$subsystem", 00:08:09.953 "trtype": "$TEST_TRANSPORT", 00:08:09.953 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:09.953 "adrfam": "ipv4", 00:08:09.953 "trsvcid": "$NVMF_PORT", 00:08:09.953 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:09.953 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:09.953 "hdgst": ${hdgst:-false}, 00:08:09.953 "ddgst": ${ddgst:-false} 00:08:09.953 }, 00:08:09.953 "method": "bdev_nvme_attach_controller" 00:08:09.953 } 00:08:09.953 EOF 00:08:09.953 )") 00:08:09.953 07:04:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:08:09.953 07:04:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:08:09.953 07:04:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:08:09.953 07:04:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:09.953 "params": { 00:08:09.953 "name": "Nvme1", 00:08:09.953 "trtype": "tcp", 00:08:09.953 "traddr": "10.0.0.2", 00:08:09.953 "adrfam": "ipv4", 00:08:09.953 "trsvcid": "4420", 00:08:09.953 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:09.953 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:09.953 "hdgst": false, 00:08:09.953 "ddgst": false 00:08:09.953 }, 00:08:09.953 "method": "bdev_nvme_attach_controller" 00:08:09.953 }' 00:08:09.953 [2024-11-20 07:04:14.305811] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 00:08:09.953 [2024-11-20 07:04:14.305854] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1064944 ] 00:08:09.953 [2024-11-20 07:04:14.379058] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:09.953 [2024-11-20 07:04:14.420273] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:10.211 Running I/O for 10 seconds... 
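The xtrace above shows `gen_nvmf_target_json` printing a `bdev_nvme_attach_controller` stanza that bdevperf consumes over `--json /dev/fd/62`. As a minimal sketch, the stanza visible in the log can be reproduced and sanity-checked like this; `gen_target_json_sketch` is a hypothetical stand-in for the real helper, the field values are the ones printed in the log, and any outer wrapper the real helper adds around this object is elided here:

```shell
# Sketch only: emits the controller-attach JSON that the log's
# gen_nvmf_target_json step prints for bdevperf. Not the actual SPDK helper.
gen_target_json_sketch() {
cat <<'EOF'
{
  "params": {
    "name": "Nvme1",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode1",
    "hostnqn": "nqn.2016-06.io.spdk:host1",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
}

config=$(gen_target_json_sketch)
# Sanity-check the field bdevperf needs before launching it
case "$config" in
  *'"method": "bdev_nvme_attach_controller"'*) echo "config ok" ;;
  *) echo "config missing attach method" >&2; exit 1 ;;
esac
# prints "config ok"
```

In the run above, this config is handed to the first bdevperf invocation, `bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192`, which produces the 10-second verify workload results that follow.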
00:08:12.575 8477.00 IOPS, 66.23 MiB/s
[2024-11-20T06:04:17.769Z] 8513.50 IOPS, 66.51 MiB/s
[2024-11-20T06:04:19.147Z] 8524.00 IOPS, 66.59 MiB/s
[2024-11-20T06:04:20.081Z] 8533.75 IOPS, 66.67 MiB/s
[2024-11-20T06:04:21.015Z] 8547.00 IOPS, 66.77 MiB/s
[2024-11-20T06:04:21.952Z] 8524.50 IOPS, 66.60 MiB/s
[2024-11-20T06:04:22.889Z] 8533.43 IOPS, 66.67 MiB/s
[2024-11-20T06:04:23.827Z] 8538.50 IOPS, 66.71 MiB/s
[2024-11-20T06:04:24.765Z] 8546.33 IOPS, 66.77 MiB/s
[2024-11-20T06:04:24.765Z] 8552.10 IOPS, 66.81 MiB/s
00:08:20.209 Latency(us)
00:08:20.209 [2024-11-20T06:04:24.765Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:08:20.209 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:08:20.209 Verification LBA range: start 0x0 length 0x1000
00:08:20.209 Nvme1n1 : 10.01 8553.97 66.83 0.00 0.00 14921.19 1666.89 22111.28
00:08:20.209 [2024-11-20T06:04:24.765Z] ===================================================================================================================
00:08:20.209 [2024-11-20T06:04:24.765Z] Total : 8553.97 66.83 0.00 0.00 14921.19 1666.89 22111.28
00:08:20.468 07:04:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=1066770 00:08:20.468 07:04:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:08:20.468 07:04:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:08:20.468 07:04:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:20.468 07:04:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:08:20.468 07:04:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:08:20.468 07:04:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:20.468 07:04:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- #
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:08:20.468 07:04:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:20.468 { 00:08:20.468 "params": { 00:08:20.468 "name": "Nvme$subsystem", 00:08:20.468 "trtype": "$TEST_TRANSPORT", 00:08:20.468 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:20.468 "adrfam": "ipv4", 00:08:20.468 "trsvcid": "$NVMF_PORT", 00:08:20.468 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:20.468 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:20.468 "hdgst": ${hdgst:-false}, 00:08:20.468 "ddgst": ${ddgst:-false} 00:08:20.468 }, 00:08:20.468 "method": "bdev_nvme_attach_controller" 00:08:20.468 } 00:08:20.468 EOF 00:08:20.468 )") 00:08:20.468 07:04:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:08:20.468 [2024-11-20 07:04:24.908103] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.468 [2024-11-20 07:04:24.908140] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.468 07:04:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:08:20.468 07:04:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:08:20.468 07:04:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:20.468 "params": { 00:08:20.468 "name": "Nvme1", 00:08:20.468 "trtype": "tcp", 00:08:20.468 "traddr": "10.0.0.2", 00:08:20.468 "adrfam": "ipv4", 00:08:20.468 "trsvcid": "4420", 00:08:20.468 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:20.468 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:20.468 "hdgst": false, 00:08:20.468 "ddgst": false 00:08:20.468 }, 00:08:20.468 "method": "bdev_nvme_attach_controller" 00:08:20.469 }' 00:08:20.469 [2024-11-20 07:04:24.920100] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.469 [2024-11-20 07:04:24.920113] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.469 [2024-11-20 07:04:24.932126] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.469 [2024-11-20 07:04:24.932137] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.469 [2024-11-20 07:04:24.944155] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.469 [2024-11-20 07:04:24.944166] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.469 [2024-11-20 07:04:24.947125] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 
00:08:20.469 [2024-11-20 07:04:24.947166] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1066770 ] 00:08:20.469 [2024-11-20 07:04:24.956191] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.469 [2024-11-20 07:04:24.956201] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.469 [2024-11-20 07:04:24.968231] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.469 [2024-11-20 07:04:24.968241] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.469 [2024-11-20 07:04:24.980256] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.469 [2024-11-20 07:04:24.980267] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.469 [2024-11-20 07:04:24.992287] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.469 [2024-11-20 07:04:24.992297] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.469 [2024-11-20 07:04:25.004317] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.469 [2024-11-20 07:04:25.004327] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.469 [2024-11-20 07:04:25.016350] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.469 [2024-11-20 07:04:25.016360] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.729 [2024-11-20 07:04:25.021028] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:20.729 [2024-11-20 07:04:25.028383] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:08:20.729 [2024-11-20 07:04:25.028394] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.729 [2024-11-20 07:04:25.040417] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.729 [2024-11-20 07:04:25.040432] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.729 [2024-11-20 07:04:25.052446] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.729 [2024-11-20 07:04:25.052457] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.729 [2024-11-20 07:04:25.062824] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:20.729 [2024-11-20 07:04:25.064481] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.729 [2024-11-20 07:04:25.064493] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.729 [2024-11-20 07:04:25.076525] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.729 [2024-11-20 07:04:25.076542] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.729 [2024-11-20 07:04:25.088551] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.729 [2024-11-20 07:04:25.088572] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.729 [2024-11-20 07:04:25.100578] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.729 [2024-11-20 07:04:25.100594] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.729 [2024-11-20 07:04:25.112608] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.729 [2024-11-20 07:04:25.112622] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.729 [2024-11-20 07:04:25.124642] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.729 [2024-11-20 07:04:25.124657] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.729 [2024-11-20 07:04:25.136670] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.729 [2024-11-20 07:04:25.136683] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.729 [2024-11-20 07:04:25.148700] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.729 [2024-11-20 07:04:25.148710] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.729 [2024-11-20 07:04:25.160763] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.729 [2024-11-20 07:04:25.160785] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.729 [2024-11-20 07:04:25.172779] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.729 [2024-11-20 07:04:25.172794] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.729 [2024-11-20 07:04:25.184810] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.729 [2024-11-20 07:04:25.184826] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.729 [2024-11-20 07:04:25.196846] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.729 [2024-11-20 07:04:25.196860] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.729 [2024-11-20 07:04:25.208874] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.729 [2024-11-20 07:04:25.208886] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.729 [2024-11-20 07:04:25.220911] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:08:20.729 [2024-11-20 07:04:25.220929] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.729 Running I/O for 5 seconds... 00:08:20.729 [2024-11-20 07:04:25.232938] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.729 [2024-11-20 07:04:25.232951] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.729 [2024-11-20 07:04:25.248454] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.729 [2024-11-20 07:04:25.248474] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.729 [2024-11-20 07:04:25.262778] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.729 [2024-11-20 07:04:25.262798] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.729 [2024-11-20 07:04:25.277155] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.729 [2024-11-20 07:04:25.277174] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.989 [2024-11-20 07:04:25.286508] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.989 [2024-11-20 07:04:25.286527] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.989 [2024-11-20 07:04:25.296170] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.989 [2024-11-20 07:04:25.296188] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.989 [2024-11-20 07:04:25.304990] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.989 [2024-11-20 07:04:25.305008] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.989 [2024-11-20 07:04:25.314352] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:08:20.989 [2024-11-20 07:04:25.314370] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.989 [2024-11-20 07:04:25.329161] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.989 [2024-11-20 07:04:25.329179] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.989 [2024-11-20 07:04:25.338078] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.989 [2024-11-20 07:04:25.338096] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.989 [2024-11-20 07:04:25.347899] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.989 [2024-11-20 07:04:25.347918] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.989 [2024-11-20 07:04:25.356754] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.989 [2024-11-20 07:04:25.356773] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.989 [2024-11-20 07:04:25.366196] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.989 [2024-11-20 07:04:25.366215] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.989 [2024-11-20 07:04:25.380536] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.989 [2024-11-20 07:04:25.380556] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.989 [2024-11-20 07:04:25.394336] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.989 [2024-11-20 07:04:25.394355] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.989 [2024-11-20 07:04:25.403236] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.989 
[2024-11-20 07:04:25.403255] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.989 [2024-11-20 07:04:25.412419] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.989 [2024-11-20 07:04:25.412438] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.989 [2024-11-20 07:04:25.421808] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.989 [2024-11-20 07:04:25.421827] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.989 [2024-11-20 07:04:25.436612] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.989 [2024-11-20 07:04:25.436631] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.989 [2024-11-20 07:04:25.450095] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.989 [2024-11-20 07:04:25.450115] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.989 [2024-11-20 07:04:25.459195] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.989 [2024-11-20 07:04:25.459213] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.989 [2024-11-20 07:04:25.468049] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.989 [2024-11-20 07:04:25.468067] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.989 [2024-11-20 07:04:25.477368] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.989 [2024-11-20 07:04:25.477387] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.989 [2024-11-20 07:04:25.492468] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.989 [2024-11-20 07:04:25.492492] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.989 [2024-11-20 07:04:25.502872] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.989 [2024-11-20 07:04:25.502892] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.989 [2024-11-20 07:04:25.517501] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.989 [2024-11-20 07:04:25.517520] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:20.989 [2024-11-20 07:04:25.532312] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:20.989 [2024-11-20 07:04:25.532331] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.249 [2024-11-20 07:04:25.547595] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.249 [2024-11-20 07:04:25.547615] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.249 [2024-11-20 07:04:25.562499] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.249 [2024-11-20 07:04:25.562518] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.249 [2024-11-20 07:04:25.578304] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.249 [2024-11-20 07:04:25.578324] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.249 [2024-11-20 07:04:25.587178] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.249 [2024-11-20 07:04:25.587197] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.249 [2024-11-20 07:04:25.601732] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.249 [2024-11-20 07:04:25.601752] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:08:21.249 [2024-11-20 07:04:25.615621] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.249 [2024-11-20 07:04:25.615640] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.249 [2024-11-20 07:04:25.629716] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.249 [2024-11-20 07:04:25.629736] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.249 [2024-11-20 07:04:25.643539] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.249 [2024-11-20 07:04:25.643558] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.249 [2024-11-20 07:04:25.658069] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.250 [2024-11-20 07:04:25.658088] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.250 [2024-11-20 07:04:25.673488] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.250 [2024-11-20 07:04:25.673517] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.250 [2024-11-20 07:04:25.687898] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.250 [2024-11-20 07:04:25.687916] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.250 [2024-11-20 07:04:25.702099] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.250 [2024-11-20 07:04:25.702119] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.250 [2024-11-20 07:04:25.716167] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.250 [2024-11-20 07:04:25.716186] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.250 [2024-11-20 07:04:25.730314] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.250 [2024-11-20 07:04:25.730333] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.250 [2024-11-20 07:04:25.744605] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.250 [2024-11-20 07:04:25.744625] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.250 [2024-11-20 07:04:25.758650] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.250 [2024-11-20 07:04:25.758673] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.250 [2024-11-20 07:04:25.773058] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.250 [2024-11-20 07:04:25.773078] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.250 [2024-11-20 07:04:25.784327] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.250 [2024-11-20 07:04:25.784345] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.250 [2024-11-20 07:04:25.793805] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.250 [2024-11-20 07:04:25.793824] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.510 [2024-11-20 07:04:25.808725] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.510 [2024-11-20 07:04:25.808744] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.510 [2024-11-20 07:04:25.819530] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.510 [2024-11-20 07:04:25.819549] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.510 [2024-11-20 07:04:25.834292] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:08:21.510 [2024-11-20 07:04:25.834311] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.510 [2024-11-20 07:04:25.845184] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.510 [2024-11-20 07:04:25.845203] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.510 [2024-11-20 07:04:25.859378] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.510 [2024-11-20 07:04:25.859396] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.510 [2024-11-20 07:04:25.873538] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.510 [2024-11-20 07:04:25.873557] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.510 [2024-11-20 07:04:25.885242] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.510 [2024-11-20 07:04:25.885260] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.510 [2024-11-20 07:04:25.900015] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.510 [2024-11-20 07:04:25.900034] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.510 [2024-11-20 07:04:25.911195] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.510 [2024-11-20 07:04:25.911223] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.510 [2024-11-20 07:04:25.925500] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.510 [2024-11-20 07:04:25.925519] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.510 [2024-11-20 07:04:25.939634] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.510 
[2024-11-20 07:04:25.939654] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.510 [2024-11-20 07:04:25.949245] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.510 [2024-11-20 07:04:25.949263] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.510 [2024-11-20 07:04:25.963676] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.510 [2024-11-20 07:04:25.963695] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.510 [2024-11-20 07:04:25.977696] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.510 [2024-11-20 07:04:25.977715] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.510 [2024-11-20 07:04:25.992156] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.510 [2024-11-20 07:04:25.992174] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.510 [2024-11-20 07:04:26.007934] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.510 [2024-11-20 07:04:26.007963] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.510 [2024-11-20 07:04:26.022166] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.510 [2024-11-20 07:04:26.022185] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.510 [2024-11-20 07:04:26.036231] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.510 [2024-11-20 07:04:26.036250] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.510 [2024-11-20 07:04:26.050672] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.510 [2024-11-20 07:04:26.050690] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.770 [2024-11-20 07:04:26.066621] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.770 [2024-11-20 07:04:26.066641] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.770 [2024-11-20 07:04:26.080707] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.771 [2024-11-20 07:04:26.080725] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.771 [2024-11-20 07:04:26.094646] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.771 [2024-11-20 07:04:26.094664] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.771 [2024-11-20 07:04:26.108912] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.771 [2024-11-20 07:04:26.108931] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.771 [2024-11-20 07:04:26.123267] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.771 [2024-11-20 07:04:26.123286] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.771 [2024-11-20 07:04:26.137041] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.771 [2024-11-20 07:04:26.137060] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.771 [2024-11-20 07:04:26.151691] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.771 [2024-11-20 07:04:26.151708] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.771 [2024-11-20 07:04:26.167308] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.771 [2024-11-20 07:04:26.167326] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:08:21.771 [2024-11-20 07:04:26.181658] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.771 [2024-11-20 07:04:26.181676] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.771 [2024-11-20 07:04:26.196030] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.771 [2024-11-20 07:04:26.196049] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.771 [2024-11-20 07:04:26.209936] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.771 [2024-11-20 07:04:26.209962] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.771 [2024-11-20 07:04:26.223808] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.771 [2024-11-20 07:04:26.223828] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.771 16380.00 IOPS, 127.97 MiB/s [2024-11-20T06:04:26.327Z] [2024-11-20 07:04:26.237338] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.771 [2024-11-20 07:04:26.237356] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.771 [2024-11-20 07:04:26.250944] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.771 [2024-11-20 07:04:26.250971] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.771 [2024-11-20 07:04:26.265517] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.771 [2024-11-20 07:04:26.265536] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.771 [2024-11-20 07:04:26.280688] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.771 [2024-11-20 07:04:26.280710] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:08:21.771 [2024-11-20 07:04:26.295086] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.771 [2024-11-20 07:04:26.295105] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:21.771 [2024-11-20 07:04:26.309223] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:21.771 [2024-11-20 07:04:26.309242] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.030 [2024-11-20 07:04:26.323104] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.030 [2024-11-20 07:04:26.323123] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.030 [2024-11-20 07:04:26.337181] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.030 [2024-11-20 07:04:26.337211] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.030 [2024-11-20 07:04:26.351073] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.030 [2024-11-20 07:04:26.351093] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.030 [2024-11-20 07:04:26.364929] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.030 [2024-11-20 07:04:26.364953] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.031 [2024-11-20 07:04:26.374592] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.031 [2024-11-20 07:04:26.374610] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.031 [2024-11-20 07:04:26.389159] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.031 [2024-11-20 07:04:26.389179] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.031 [2024-11-20 07:04:26.403050] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.031 [2024-11-20 07:04:26.403070] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.031 [2024-11-20 07:04:26.416893] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.031 [2024-11-20 07:04:26.416911] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.031 [2024-11-20 07:04:26.431954] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.031 [2024-11-20 07:04:26.431989] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.031 [2024-11-20 07:04:26.447342] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.031 [2024-11-20 07:04:26.447360] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.031 [2024-11-20 07:04:26.462027] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.031 [2024-11-20 07:04:26.462051] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.031 [2024-11-20 07:04:26.477594] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.031 [2024-11-20 07:04:26.477615] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.031 [2024-11-20 07:04:26.491693] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.031 [2024-11-20 07:04:26.491712] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.031 [2024-11-20 07:04:26.501318] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.031 [2024-11-20 07:04:26.501337] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.031 [2024-11-20 07:04:26.515879] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:08:22.031 [2024-11-20 07:04:26.515898] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.031 [2024-11-20 07:04:26.527110] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.031 [2024-11-20 07:04:26.527129] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.031 [2024-11-20 07:04:26.541649] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.031 [2024-11-20 07:04:26.541668] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.031 [2024-11-20 07:04:26.555735] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.031 [2024-11-20 07:04:26.555754] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.031 [2024-11-20 07:04:26.569761] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.031 [2024-11-20 07:04:26.569780] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.290 [2024-11-20 07:04:26.584215] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.290 [2024-11-20 07:04:26.584233] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.290 [2024-11-20 07:04:26.598223] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.290 [2024-11-20 07:04:26.598243] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.290 [2024-11-20 07:04:26.612399] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.290 [2024-11-20 07:04:26.612418] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.290 [2024-11-20 07:04:26.627001] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.290 
[2024-11-20 07:04:26.627020] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.290 [2024-11-20 07:04:26.638479] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.290 [2024-11-20 07:04:26.638497] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.290 [2024-11-20 07:04:26.652684] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.290 [2024-11-20 07:04:26.652703] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.290 [2024-11-20 07:04:26.666346] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.290 [2024-11-20 07:04:26.666365] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.290 [2024-11-20 07:04:26.680496] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.290 [2024-11-20 07:04:26.680515] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.290 [2024-11-20 07:04:26.691247] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.290 [2024-11-20 07:04:26.691266] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.290 [2024-11-20 07:04:26.705686] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.291 [2024-11-20 07:04:26.705705] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.291 [2024-11-20 07:04:26.717058] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.291 [2024-11-20 07:04:26.717077] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.291 [2024-11-20 07:04:26.731936] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.291 [2024-11-20 07:04:26.731962] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.291 [2024-11-20 07:04:26.747105] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.291 [2024-11-20 07:04:26.747124] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.291 [2024-11-20 07:04:26.761174] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.291 [2024-11-20 07:04:26.761194] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.291 [2024-11-20 07:04:26.775524] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.291 [2024-11-20 07:04:26.775548] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.291 [2024-11-20 07:04:26.787226] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.291 [2024-11-20 07:04:26.787245] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.291 [2024-11-20 07:04:26.801677] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.291 [2024-11-20 07:04:26.801697] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.291 [2024-11-20 07:04:26.816051] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.291 [2024-11-20 07:04:26.816071] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.291 [2024-11-20 07:04:26.826666] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.291 [2024-11-20 07:04:26.826685] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.291 [2024-11-20 07:04:26.836380] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.291 [2024-11-20 07:04:26.836399] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:08:22.550 [2024-11-20 07:04:26.850708] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.550 [2024-11-20 07:04:26.850727] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.550 [2024-11-20 07:04:26.864474] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.551 [2024-11-20 07:04:26.864493] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.551 [2024-11-20 07:04:26.878825] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.551 [2024-11-20 07:04:26.878845] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.551 [2024-11-20 07:04:26.893128] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.551 [2024-11-20 07:04:26.893148] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.551 [2024-11-20 07:04:26.906933] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.551 [2024-11-20 07:04:26.906960] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.551 [2024-11-20 07:04:26.920871] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.551 [2024-11-20 07:04:26.920891] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.551 [2024-11-20 07:04:26.934822] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.551 [2024-11-20 07:04:26.934841] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.551 [2024-11-20 07:04:26.948910] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.551 [2024-11-20 07:04:26.948930] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.551 [2024-11-20 07:04:26.958109] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.551 [2024-11-20 07:04:26.958129] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.551 [2024-11-20 07:04:26.967630] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.551 [2024-11-20 07:04:26.967649] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.551 [2024-11-20 07:04:26.976995] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.551 [2024-11-20 07:04:26.977014] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.551 [2024-11-20 07:04:26.991811] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.551 [2024-11-20 07:04:26.991830] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.551 [2024-11-20 07:04:27.005640] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.551 [2024-11-20 07:04:27.005659] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.551 [2024-11-20 07:04:27.019727] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.551 [2024-11-20 07:04:27.019747] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.551 [2024-11-20 07:04:27.034261] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.551 [2024-11-20 07:04:27.034281] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.551 [2024-11-20 07:04:27.045210] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.551 [2024-11-20 07:04:27.045230] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.551 [2024-11-20 07:04:27.059674] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:08:22.551 [2024-11-20 07:04:27.059694] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.551 [2024-11-20 07:04:27.073114] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.551 [2024-11-20 07:04:27.073133] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.551 [2024-11-20 07:04:27.082638] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.551 [2024-11-20 07:04:27.082657] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.551 [2024-11-20 07:04:27.092302] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.551 [2024-11-20 07:04:27.092320] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.811 [2024-11-20 07:04:27.107167] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.811 [2024-11-20 07:04:27.107186] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.811 [2024-11-20 07:04:27.118287] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.811 [2024-11-20 07:04:27.118306] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.811 [2024-11-20 07:04:27.127716] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.811 [2024-11-20 07:04:27.127735] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.811 [2024-11-20 07:04:27.142470] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.811 [2024-11-20 07:04:27.142489] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.811 [2024-11-20 07:04:27.153022] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.811 
[2024-11-20 07:04:27.153042] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.811 [2024-11-20 07:04:27.167467] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.811 [2024-11-20 07:04:27.167486] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.811 [2024-11-20 07:04:27.181493] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.811 [2024-11-20 07:04:27.181513] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.811 [2024-11-20 07:04:27.195231] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.811 [2024-11-20 07:04:27.195249] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.811 [2024-11-20 07:04:27.209723] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.811 [2024-11-20 07:04:27.209741] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.811 [2024-11-20 07:04:27.223440] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.811 [2024-11-20 07:04:27.223460] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.811 16417.00 IOPS, 128.26 MiB/s [2024-11-20T06:04:27.367Z] [2024-11-20 07:04:27.237681] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.811 [2024-11-20 07:04:27.237700] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.811 [2024-11-20 07:04:27.251877] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.811 [2024-11-20 07:04:27.251896] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:22.811 [2024-11-20 07:04:27.265756] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:22.811 
00:08:23.850 16445.00 IOPS, 128.48 MiB/s [2024-11-20T06:04:28.406Z]
00:08:24.890 16461.00 IOPS, 128.60 MiB/s [2024-11-20T06:04:29.446Z]
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.890 [2024-11-20 07:04:29.267400] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.890 [2024-11-20 07:04:29.267420] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.890 [2024-11-20 07:04:29.279055] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.890 [2024-11-20 07:04:29.279074] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.890 [2024-11-20 07:04:29.293092] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.890 [2024-11-20 07:04:29.293110] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.890 [2024-11-20 07:04:29.307193] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.890 [2024-11-20 07:04:29.307212] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.890 [2024-11-20 07:04:29.321674] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.890 [2024-11-20 07:04:29.321698] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.890 [2024-11-20 07:04:29.337241] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.890 [2024-11-20 07:04:29.337259] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.890 [2024-11-20 07:04:29.351715] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.890 [2024-11-20 07:04:29.351735] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.890 [2024-11-20 07:04:29.365689] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.890 [2024-11-20 07:04:29.365708] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:08:24.890 [2024-11-20 07:04:29.379609] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.890 [2024-11-20 07:04:29.379627] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.890 [2024-11-20 07:04:29.393937] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.890 [2024-11-20 07:04:29.393961] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.890 [2024-11-20 07:04:29.408868] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.890 [2024-11-20 07:04:29.408886] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.890 [2024-11-20 07:04:29.423359] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.890 [2024-11-20 07:04:29.423377] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.890 [2024-11-20 07:04:29.437073] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.890 [2024-11-20 07:04:29.437091] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.149 [2024-11-20 07:04:29.451715] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.149 [2024-11-20 07:04:29.451734] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.149 [2024-11-20 07:04:29.462701] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.149 [2024-11-20 07:04:29.462720] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.149 [2024-11-20 07:04:29.477351] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.149 [2024-11-20 07:04:29.477370] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.149 [2024-11-20 07:04:29.490318] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.149 [2024-11-20 07:04:29.490337] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.149 [2024-11-20 07:04:29.504825] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.149 [2024-11-20 07:04:29.504844] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.149 [2024-11-20 07:04:29.519296] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.149 [2024-11-20 07:04:29.519314] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.149 [2024-11-20 07:04:29.534504] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.149 [2024-11-20 07:04:29.534522] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.149 [2024-11-20 07:04:29.549354] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.149 [2024-11-20 07:04:29.549373] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.149 [2024-11-20 07:04:29.564749] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.149 [2024-11-20 07:04:29.564770] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.149 [2024-11-20 07:04:29.578821] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.149 [2024-11-20 07:04:29.578840] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.149 [2024-11-20 07:04:29.592604] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.149 [2024-11-20 07:04:29.592625] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.150 [2024-11-20 07:04:29.607003] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:08:25.150 [2024-11-20 07:04:29.607024] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.150 [2024-11-20 07:04:29.621320] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.150 [2024-11-20 07:04:29.621339] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.150 [2024-11-20 07:04:29.636683] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.150 [2024-11-20 07:04:29.636702] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.150 [2024-11-20 07:04:29.651594] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.150 [2024-11-20 07:04:29.651614] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.150 [2024-11-20 07:04:29.667164] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.150 [2024-11-20 07:04:29.667184] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.150 [2024-11-20 07:04:29.681393] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.150 [2024-11-20 07:04:29.681412] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.150 [2024-11-20 07:04:29.695761] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.150 [2024-11-20 07:04:29.695780] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.409 [2024-11-20 07:04:29.711467] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.409 [2024-11-20 07:04:29.711487] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.409 [2024-11-20 07:04:29.726006] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.409 
[2024-11-20 07:04:29.726025] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.409 [2024-11-20 07:04:29.741027] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.409 [2024-11-20 07:04:29.741046] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.409 [2024-11-20 07:04:29.750692] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.409 [2024-11-20 07:04:29.750710] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.409 [2024-11-20 07:04:29.765726] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.409 [2024-11-20 07:04:29.765745] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.409 [2024-11-20 07:04:29.777078] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.409 [2024-11-20 07:04:29.777097] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.409 [2024-11-20 07:04:29.791854] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.409 [2024-11-20 07:04:29.791874] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.409 [2024-11-20 07:04:29.805770] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.409 [2024-11-20 07:04:29.805790] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.409 [2024-11-20 07:04:29.819939] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.409 [2024-11-20 07:04:29.819965] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.409 [2024-11-20 07:04:29.834136] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.409 [2024-11-20 07:04:29.834156] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.409 [2024-11-20 07:04:29.848138] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.409 [2024-11-20 07:04:29.848159] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.409 [2024-11-20 07:04:29.862188] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.409 [2024-11-20 07:04:29.862208] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.409 [2024-11-20 07:04:29.876482] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.409 [2024-11-20 07:04:29.876502] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.409 [2024-11-20 07:04:29.890734] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.409 [2024-11-20 07:04:29.890753] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.409 [2024-11-20 07:04:29.902305] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.409 [2024-11-20 07:04:29.902324] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.409 [2024-11-20 07:04:29.916660] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.409 [2024-11-20 07:04:29.916679] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.409 [2024-11-20 07:04:29.930684] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.409 [2024-11-20 07:04:29.930703] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.409 [2024-11-20 07:04:29.939828] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.409 [2024-11-20 07:04:29.939848] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:08:25.409 [2024-11-20 07:04:29.954257] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.409 [2024-11-20 07:04:29.954277] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.668 [2024-11-20 07:04:29.963233] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.668 [2024-11-20 07:04:29.963253] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.668 [2024-11-20 07:04:29.977931] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.668 [2024-11-20 07:04:29.977957] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.668 [2024-11-20 07:04:29.991681] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.668 [2024-11-20 07:04:29.991700] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.668 [2024-11-20 07:04:30.005755] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.668 [2024-11-20 07:04:30.005774] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.668 [2024-11-20 07:04:30.020401] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.668 [2024-11-20 07:04:30.020420] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.668 [2024-11-20 07:04:30.028136] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.668 [2024-11-20 07:04:30.028154] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.668 [2024-11-20 07:04:30.042793] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.668 [2024-11-20 07:04:30.042812] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.668 [2024-11-20 07:04:30.054182] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.668 [2024-11-20 07:04:30.054202] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.668 [2024-11-20 07:04:30.068619] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.668 [2024-11-20 07:04:30.068638] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.668 [2024-11-20 07:04:30.082870] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.668 [2024-11-20 07:04:30.082889] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.668 [2024-11-20 07:04:30.097255] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.668 [2024-11-20 07:04:30.097273] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.668 [2024-11-20 07:04:30.108085] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.668 [2024-11-20 07:04:30.108104] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.668 [2024-11-20 07:04:30.122778] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.668 [2024-11-20 07:04:30.122798] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.668 [2024-11-20 07:04:30.134157] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.668 [2024-11-20 07:04:30.134177] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.668 [2024-11-20 07:04:30.148731] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.668 [2024-11-20 07:04:30.148750] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.668 [2024-11-20 07:04:30.162789] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:08:25.668 [2024-11-20 07:04:30.162808] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.668 [2024-11-20 07:04:30.173689] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.668 [2024-11-20 07:04:30.173707] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.668 [2024-11-20 07:04:30.188193] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.668 [2024-11-20 07:04:30.188211] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.668 [2024-11-20 07:04:30.201622] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.668 [2024-11-20 07:04:30.201641] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.668 [2024-11-20 07:04:30.211258] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.668 [2024-11-20 07:04:30.211277] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.927 [2024-11-20 07:04:30.225826] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.927 [2024-11-20 07:04:30.225846] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.927 [2024-11-20 07:04:30.239672] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.927 [2024-11-20 07:04:30.239692] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.927 16463.20 IOPS, 128.62 MiB/s 00:08:25.927 Latency(us) 00:08:25.927 [2024-11-20T06:04:30.483Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:25.927 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:08:25.927 Nvme1n1 : 5.01 16466.57 128.65 0.00 0.00 7766.39 3561.74 18692.01 00:08:25.927 
[2024-11-20T06:04:30.483Z] =================================================================================================================== 00:08:25.927 [2024-11-20T06:04:30.483Z] Total : 16466.57 128.65 0.00 0.00 7766.39 3561.74 18692.01 00:08:25.927 [2024-11-20 07:04:30.249748] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.927 [2024-11-20 07:04:30.249766] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.928 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (1066770) - No such process 00:08:25.928 07:04:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 1066770 00:08:25.928 07:04:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:25.928 07:04:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.928 07:04:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:25.928 07:04:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.928 07:04:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:25.928 07:04:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.928 07:04:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:25.928 delay0 00:08:25.928 07:04:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.928 07:04:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:08:25.928 07:04:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.928 07:04:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:25.928 07:04:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.928 07:04:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:08:26.186 [2024-11-20 07:04:30.600088] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:08:34.300 Initializing NVMe Controllers 00:08:34.300 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:34.300 Associating TCP 
(addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:08:34.300 Initialization complete. Launching workers. 00:08:34.300 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 5799 00:08:34.300 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 6079, failed to submit 40 00:08:34.300 success 5890, unsuccessful 189, failed 0 00:08:34.300 07:04:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:08:34.300 07:04:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:08:34.300 07:04:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:34.300 07:04:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:08:34.300 07:04:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:34.300 07:04:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:08:34.300 07:04:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:34.300 07:04:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:34.300 rmmod nvme_tcp 00:08:34.300 rmmod nvme_fabrics 00:08:34.300 rmmod nvme_keyring 00:08:34.300 07:04:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:34.300 07:04:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:08:34.300 07:04:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:08:34.300 07:04:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 1064891 ']' 00:08:34.300 07:04:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 1064891 00:08:34.300 07:04:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@952 -- # '[' -z 1064891 ']' 00:08:34.300 07:04:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy 
-- common/autotest_common.sh@956 -- # kill -0 1064891 00:08:34.300 07:04:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@957 -- # uname 00:08:34.300 07:04:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:34.300 07:04:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1064891 00:08:34.300 07:04:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:08:34.300 07:04:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:08:34.300 07:04:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1064891' 00:08:34.300 killing process with pid 1064891 00:08:34.300 07:04:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@971 -- # kill 1064891 00:08:34.300 07:04:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@976 -- # wait 1064891 00:08:34.300 07:04:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:34.300 07:04:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:34.300 07:04:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:34.300 07:04:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:08:34.300 07:04:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:08:34.300 07:04:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:34.300 07:04:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:08:34.300 07:04:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:34.300 07:04:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:08:34.300 07:04:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:34.300 07:04:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:34.300 07:04:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:35.680 07:04:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:35.680 00:08:35.680 real 0m32.736s 00:08:35.680 user 0m43.903s 00:08:35.680 sys 0m11.327s 00:08:35.680 07:04:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:35.680 07:04:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:35.680 ************************************ 00:08:35.680 END TEST nvmf_zcopy 00:08:35.680 ************************************ 00:08:35.680 07:04:39 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:08:35.680 07:04:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:08:35.680 07:04:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:35.680 07:04:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:35.680 ************************************ 00:08:35.680 START TEST nvmf_nmic 00:08:35.680 ************************************ 00:08:35.680 07:04:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:08:35.680 * Looking for test storage... 
00:08:35.680 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:35.680 07:04:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:35.680 07:04:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1691 -- # lcov --version 00:08:35.680 07:04:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:35.680 07:04:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:35.680 07:04:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:35.680 07:04:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:35.680 07:04:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:35.680 07:04:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:08:35.680 07:04:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:08:35.680 07:04:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:08:35.680 07:04:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:08:35.680 07:04:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:08:35.680 07:04:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:08:35.680 07:04:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:08:35.680 07:04:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:35.680 07:04:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:08:35.680 07:04:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:08:35.680 07:04:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:35.680 07:04:40 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:35.680 07:04:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:08:35.680 07:04:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:08:35.680 07:04:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:35.680 07:04:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:08:35.680 07:04:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:08:35.680 07:04:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:08:35.680 07:04:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:08:35.680 07:04:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:35.680 07:04:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:08:35.680 07:04:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:08:35.680 07:04:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:35.680 07:04:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:35.680 07:04:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:08:35.680 07:04:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:35.680 07:04:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:35.680 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:35.680 --rc genhtml_branch_coverage=1 00:08:35.680 --rc genhtml_function_coverage=1 00:08:35.680 --rc genhtml_legend=1 00:08:35.680 --rc geninfo_all_blocks=1 00:08:35.680 --rc geninfo_unexecuted_blocks=1 
00:08:35.680 00:08:35.680 ' 00:08:35.680 07:04:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:35.680 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:35.680 --rc genhtml_branch_coverage=1 00:08:35.680 --rc genhtml_function_coverage=1 00:08:35.680 --rc genhtml_legend=1 00:08:35.680 --rc geninfo_all_blocks=1 00:08:35.680 --rc geninfo_unexecuted_blocks=1 00:08:35.680 00:08:35.680 ' 00:08:35.680 07:04:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:35.680 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:35.680 --rc genhtml_branch_coverage=1 00:08:35.680 --rc genhtml_function_coverage=1 00:08:35.680 --rc genhtml_legend=1 00:08:35.680 --rc geninfo_all_blocks=1 00:08:35.680 --rc geninfo_unexecuted_blocks=1 00:08:35.680 00:08:35.680 ' 00:08:35.680 07:04:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:35.680 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:35.680 --rc genhtml_branch_coverage=1 00:08:35.680 --rc genhtml_function_coverage=1 00:08:35.680 --rc genhtml_legend=1 00:08:35.680 --rc geninfo_all_blocks=1 00:08:35.680 --rc geninfo_unexecuted_blocks=1 00:08:35.680 00:08:35.680 ' 00:08:35.680 07:04:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:35.680 07:04:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:08:35.680 07:04:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:35.680 07:04:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:35.680 07:04:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:35.680 07:04:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:35.680 07:04:40 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:35.680 07:04:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:35.680 07:04:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:35.680 07:04:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:35.680 07:04:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:35.681 07:04:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:35.681 07:04:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:08:35.681 07:04:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:08:35.681 07:04:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:35.681 07:04:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:35.681 07:04:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:35.681 07:04:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:35.681 07:04:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:35.681 07:04:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:08:35.681 07:04:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:35.681 07:04:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:35.681 07:04:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:35.681 07:04:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:35.681 07:04:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:35.681 07:04:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:35.681 07:04:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:08:35.681 07:04:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:35.681 07:04:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:08:35.681 07:04:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:35.681 07:04:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:35.681 07:04:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:35.681 07:04:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:35.681 07:04:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:35.681 07:04:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:35.681 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:35.681 07:04:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:35.681 07:04:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:35.681 07:04:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:35.681 07:04:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:35.681 07:04:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:35.681 07:04:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:08:35.681 07:04:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:35.681 07:04:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:35.681 07:04:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:35.681 07:04:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:35.681 07:04:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:35.681 07:04:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:35.681 07:04:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:35.681 07:04:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:35.681 07:04:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:35.681 07:04:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:35.681 
07:04:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:08:35.681 07:04:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:42.257 07:04:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:42.257 07:04:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:08:42.257 07:04:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:42.257 07:04:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:42.257 07:04:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:42.257 07:04:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:42.257 07:04:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:42.257 07:04:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:08:42.257 07:04:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:42.257 07:04:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:08:42.257 07:04:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:08:42.257 07:04:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:08:42.257 07:04:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:08:42.257 07:04:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:08:42.257 07:04:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:08:42.257 07:04:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:42.257 07:04:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:42.257 07:04:45 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:42.257 07:04:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:42.257 07:04:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:42.257 07:04:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:42.257 07:04:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:42.257 07:04:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:42.257 07:04:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:42.257 07:04:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:42.257 07:04:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:42.257 07:04:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:42.257 07:04:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:42.257 07:04:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:42.257 07:04:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:42.257 07:04:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:42.257 07:04:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:42.258 07:04:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:42.258 07:04:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:08:42.258 07:04:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:42.258 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:42.258 07:04:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:42.258 07:04:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:42.258 07:04:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:42.258 07:04:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:42.258 07:04:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:42.258 07:04:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:42.258 07:04:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:42.258 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:42.258 07:04:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:42.258 07:04:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:42.258 07:04:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:42.258 07:04:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:42.258 07:04:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:42.258 07:04:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:42.258 07:04:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:42.258 07:04:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:42.258 07:04:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 
00:08:42.258 07:04:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:42.258 07:04:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:42.258 07:04:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:42.258 07:04:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:42.258 07:04:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:42.258 07:04:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:42.258 07:04:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:42.258 Found net devices under 0000:86:00.0: cvl_0_0 00:08:42.258 07:04:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:42.258 07:04:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:42.258 07:04:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:42.258 07:04:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:42.258 07:04:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:42.258 07:04:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:42.258 07:04:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:42.258 07:04:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:42.258 07:04:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:42.258 Found net devices under 0000:86:00.1: cvl_0_1 00:08:42.258 
07:04:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:42.258 07:04:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:42.258 07:04:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:08:42.258 07:04:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:42.258 07:04:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:42.258 07:04:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:42.258 07:04:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:42.258 07:04:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:42.258 07:04:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:42.258 07:04:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:42.258 07:04:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:42.258 07:04:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:42.258 07:04:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:42.258 07:04:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:42.258 07:04:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:42.258 07:04:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:42.258 07:04:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:42.258 07:04:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 
00:08:42.258 07:04:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:42.258 07:04:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:42.258 07:04:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:42.258 07:04:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:42.258 07:04:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:42.258 07:04:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:42.258 07:04:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:42.258 07:04:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:42.258 07:04:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:42.258 07:04:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:42.258 07:04:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:42.258 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:42.258 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.478 ms 00:08:42.258 00:08:42.258 --- 10.0.0.2 ping statistics --- 00:08:42.258 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:42.258 rtt min/avg/max/mdev = 0.478/0.478/0.478/0.000 ms 00:08:42.258 07:04:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:42.258 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:42.258 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.171 ms 00:08:42.258 00:08:42.258 --- 10.0.0.1 ping statistics --- 00:08:42.258 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:42.258 rtt min/avg/max/mdev = 0.171/0.171/0.171/0.000 ms 00:08:42.258 07:04:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:42.258 07:04:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:08:42.258 07:04:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:42.258 07:04:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:42.258 07:04:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:42.258 07:04:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:42.259 07:04:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:42.259 07:04:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:42.259 07:04:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:42.259 07:04:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:08:42.259 07:04:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:42.259 07:04:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:42.259 07:04:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:42.259 07:04:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=1072365 00:08:42.259 07:04:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 1072365 00:08:42.259 07:04:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:42.259 07:04:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@833 -- # '[' -z 1072365 ']' 00:08:42.259 07:04:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:42.259 07:04:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:42.259 07:04:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:42.259 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:42.259 07:04:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:42.259 07:04:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:42.259 [2024-11-20 07:04:46.118484] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 00:08:42.259 [2024-11-20 07:04:46.118534] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:42.259 [2024-11-20 07:04:46.197198] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:42.259 [2024-11-20 07:04:46.241822] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:42.259 [2024-11-20 07:04:46.241860] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:08:42.259 [2024-11-20 07:04:46.241867] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:42.259 [2024-11-20 07:04:46.241872] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:42.259 [2024-11-20 07:04:46.241877] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:42.259 [2024-11-20 07:04:46.243351] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:42.259 [2024-11-20 07:04:46.243463] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:42.259 [2024-11-20 07:04:46.243491] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:42.259 [2024-11-20 07:04:46.243491] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:42.259 07:04:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:42.259 07:04:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@866 -- # return 0 00:08:42.259 07:04:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:42.259 07:04:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:42.259 07:04:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:42.259 07:04:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:42.259 07:04:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:42.259 07:04:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.259 07:04:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:42.259 [2024-11-20 07:04:46.382871] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:42.259 
07:04:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.259 07:04:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:42.259 07:04:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.259 07:04:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:42.259 Malloc0 00:08:42.259 07:04:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.259 07:04:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:42.259 07:04:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.259 07:04:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:42.259 07:04:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.259 07:04:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:42.259 07:04:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.259 07:04:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:42.259 07:04:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.259 07:04:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:42.259 07:04:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.259 07:04:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:42.259 [2024-11-20 07:04:46.458533] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** 
NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:42.259 07:04:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.259 07:04:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:08:42.259 test case1: single bdev can't be used in multiple subsystems 00:08:42.259 07:04:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:08:42.259 07:04:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.259 07:04:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:42.259 07:04:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.259 07:04:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:08:42.259 07:04:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.259 07:04:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:42.259 07:04:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.259 07:04:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:08:42.259 07:04:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:08:42.259 07:04:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.259 07:04:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:42.259 [2024-11-20 07:04:46.486468] bdev.c:8321:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:08:42.259 [2024-11-20 
07:04:46.486492] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:08:42.259 [2024-11-20 07:04:46.486500] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.259 request: 00:08:42.259 { 00:08:42.259 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:08:42.259 "namespace": { 00:08:42.259 "bdev_name": "Malloc0", 00:08:42.259 "no_auto_visible": false 00:08:42.259 }, 00:08:42.259 "method": "nvmf_subsystem_add_ns", 00:08:42.259 "req_id": 1 00:08:42.259 } 00:08:42.260 Got JSON-RPC error response 00:08:42.260 response: 00:08:42.260 { 00:08:42.260 "code": -32602, 00:08:42.260 "message": "Invalid parameters" 00:08:42.260 } 00:08:42.260 07:04:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:08:42.260 07:04:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:08:42.260 07:04:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:08:42.260 07:04:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:08:42.260 Adding namespace failed - expected result. 
00:08:42.260 07:04:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:08:42.260 test case2: host connect to nvmf target in multiple paths 00:08:42.260 07:04:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:08:42.260 07:04:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.260 07:04:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:42.260 [2024-11-20 07:04:46.498604] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:08:42.260 07:04:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.260 07:04:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:43.197 07:04:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:08:44.581 07:04:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:08:44.581 07:04:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1200 -- # local i=0 00:08:44.581 07:04:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:08:44.581 07:04:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:08:44.581 07:04:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # sleep 2 
00:08:46.482 07:04:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:08:46.482 07:04:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:08:46.482 07:04:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:08:46.482 07:04:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:08:46.482 07:04:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:08:46.482 07:04:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # return 0 00:08:46.482 07:04:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:08:46.482 [global] 00:08:46.482 thread=1 00:08:46.482 invalidate=1 00:08:46.482 rw=write 00:08:46.482 time_based=1 00:08:46.482 runtime=1 00:08:46.482 ioengine=libaio 00:08:46.482 direct=1 00:08:46.482 bs=4096 00:08:46.482 iodepth=1 00:08:46.482 norandommap=0 00:08:46.482 numjobs=1 00:08:46.482 00:08:46.482 verify_dump=1 00:08:46.482 verify_backlog=512 00:08:46.482 verify_state_save=0 00:08:46.482 do_verify=1 00:08:46.482 verify=crc32c-intel 00:08:46.482 [job0] 00:08:46.482 filename=/dev/nvme0n1 00:08:46.482 Could not set queue depth (nvme0n1) 00:08:46.740 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:46.740 fio-3.35 00:08:46.740 Starting 1 thread 00:08:47.677 00:08:47.677 job0: (groupid=0, jobs=1): err= 0: pid=1073417: Wed Nov 20 07:04:52 2024 00:08:47.677 read: IOPS=22, BW=90.2KiB/s (92.4kB/s)(92.0KiB/1020msec) 00:08:47.677 slat (nsec): min=9434, max=22323, avg=21035.35, stdev=2542.05 00:08:47.677 clat (usec): min=40718, max=41088, avg=40958.29, stdev=74.80 00:08:47.677 lat (usec): min=40728, max=41109, 
avg=40979.32, stdev=76.52 00:08:47.677 clat percentiles (usec): 00:08:47.677 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:08:47.677 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:08:47.677 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:08:47.677 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:08:47.677 | 99.99th=[41157] 00:08:47.677 write: IOPS=501, BW=2008KiB/s (2056kB/s)(2048KiB/1020msec); 0 zone resets 00:08:47.677 slat (nsec): min=9792, max=42005, avg=10813.28, stdev=2074.82 00:08:47.677 clat (usec): min=117, max=288, avg=137.71, stdev=17.69 00:08:47.677 lat (usec): min=127, max=327, avg=148.53, stdev=18.47 00:08:47.677 clat percentiles (usec): 00:08:47.677 | 1.00th=[ 121], 5.00th=[ 123], 10.00th=[ 124], 20.00th=[ 125], 00:08:47.677 | 30.00th=[ 127], 40.00th=[ 128], 50.00th=[ 129], 60.00th=[ 133], 00:08:47.677 | 70.00th=[ 145], 80.00th=[ 159], 90.00th=[ 163], 95.00th=[ 167], 00:08:47.677 | 99.00th=[ 176], 99.50th=[ 186], 99.90th=[ 289], 99.95th=[ 289], 00:08:47.677 | 99.99th=[ 289] 00:08:47.677 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:08:47.677 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:08:47.677 lat (usec) : 250=95.51%, 500=0.19% 00:08:47.677 lat (msec) : 50=4.30% 00:08:47.677 cpu : usr=0.00%, sys=1.28%, ctx=535, majf=0, minf=1 00:08:47.677 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:47.677 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:47.677 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:47.677 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:47.677 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:47.677 00:08:47.677 Run status group 0 (all jobs): 00:08:47.677 READ: bw=90.2KiB/s (92.4kB/s), 90.2KiB/s-90.2KiB/s (92.4kB/s-92.4kB/s), io=92.0KiB (94.2kB), 
run=1020-1020msec 00:08:47.677 WRITE: bw=2008KiB/s (2056kB/s), 2008KiB/s-2008KiB/s (2056kB/s-2056kB/s), io=2048KiB (2097kB), run=1020-1020msec 00:08:47.677 00:08:47.677 Disk stats (read/write): 00:08:47.677 nvme0n1: ios=70/512, merge=0/0, ticks=844/66, in_queue=910, util=91.18% 00:08:47.677 07:04:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:47.936 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:08:47.936 07:04:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:47.936 07:04:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1221 -- # local i=0 00:08:47.936 07:04:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:08:47.936 07:04:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:47.936 07:04:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:08:47.936 07:04:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:47.936 07:04:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1233 -- # return 0 00:08:47.936 07:04:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:08:47.936 07:04:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:08:47.936 07:04:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:47.936 07:04:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:08:47.936 07:04:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:47.936 07:04:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:08:47.936 07:04:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # 
for i in {1..20} 00:08:47.936 07:04:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:47.936 rmmod nvme_tcp 00:08:47.936 rmmod nvme_fabrics 00:08:47.936 rmmod nvme_keyring 00:08:47.936 07:04:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:47.936 07:04:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:08:47.936 07:04:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:08:47.936 07:04:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 1072365 ']' 00:08:47.936 07:04:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 1072365 00:08:47.936 07:04:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@952 -- # '[' -z 1072365 ']' 00:08:47.936 07:04:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # kill -0 1072365 00:08:47.937 07:04:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@957 -- # uname 00:08:47.937 07:04:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:47.937 07:04:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1072365 00:08:48.196 07:04:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:48.196 07:04:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:48.196 07:04:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1072365' 00:08:48.196 killing process with pid 1072365 00:08:48.196 07:04:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@971 -- # kill 1072365 00:08:48.196 07:04:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@976 -- # wait 1072365 00:08:48.196 07:04:52 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:48.196 07:04:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:48.196 07:04:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:48.196 07:04:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:08:48.196 07:04:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:08:48.196 07:04:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:08:48.196 07:04:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:48.196 07:04:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:48.196 07:04:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:48.196 07:04:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:48.196 07:04:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:48.196 07:04:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:50.732 07:04:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:50.732 00:08:50.732 real 0m14.848s 00:08:50.732 user 0m32.608s 00:08:50.732 sys 0m5.236s 00:08:50.732 07:04:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:50.732 07:04:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:50.732 ************************************ 00:08:50.732 END TEST nvmf_nmic 00:08:50.732 ************************************ 00:08:50.732 07:04:54 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh 
--transport=tcp 00:08:50.732 07:04:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:08:50.732 07:04:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:50.732 07:04:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:50.732 ************************************ 00:08:50.732 START TEST nvmf_fio_target 00:08:50.732 ************************************ 00:08:50.732 07:04:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:08:50.732 * Looking for test storage... 00:08:50.732 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:50.732 07:04:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:50.732 07:04:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lcov --version 00:08:50.732 07:04:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:50.732 07:04:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:50.732 07:04:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:50.732 07:04:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:50.732 07:04:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:50.732 07:04:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:08:50.732 07:04:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:08:50.732 07:04:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:08:50.732 07:04:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
scripts/common.sh@337 -- # read -ra ver2 00:08:50.732 07:04:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:08:50.732 07:04:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:08:50.732 07:04:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:08:50.732 07:04:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:50.732 07:04:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:08:50.732 07:04:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:08:50.732 07:04:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:50.732 07:04:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:50.732 07:04:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:08:50.732 07:04:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:08:50.732 07:04:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:50.732 07:04:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:08:50.732 07:04:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:08:50.732 07:04:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:08:50.732 07:04:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:08:50.732 07:04:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:50.732 07:04:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:08:50.732 07:04:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:08:50.732 07:04:55 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:50.732 07:04:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:50.732 07:04:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:08:50.732 07:04:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:50.732 07:04:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:50.732 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:50.732 --rc genhtml_branch_coverage=1 00:08:50.732 --rc genhtml_function_coverage=1 00:08:50.732 --rc genhtml_legend=1 00:08:50.732 --rc geninfo_all_blocks=1 00:08:50.732 --rc geninfo_unexecuted_blocks=1 00:08:50.732 00:08:50.732 ' 00:08:50.732 07:04:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:50.732 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:50.732 --rc genhtml_branch_coverage=1 00:08:50.732 --rc genhtml_function_coverage=1 00:08:50.732 --rc genhtml_legend=1 00:08:50.732 --rc geninfo_all_blocks=1 00:08:50.732 --rc geninfo_unexecuted_blocks=1 00:08:50.732 00:08:50.732 ' 00:08:50.732 07:04:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:50.732 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:50.732 --rc genhtml_branch_coverage=1 00:08:50.732 --rc genhtml_function_coverage=1 00:08:50.733 --rc genhtml_legend=1 00:08:50.733 --rc geninfo_all_blocks=1 00:08:50.733 --rc geninfo_unexecuted_blocks=1 00:08:50.733 00:08:50.733 ' 00:08:50.733 07:04:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:50.733 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:50.733 --rc 
genhtml_branch_coverage=1 00:08:50.733 --rc genhtml_function_coverage=1 00:08:50.733 --rc genhtml_legend=1 00:08:50.733 --rc geninfo_all_blocks=1 00:08:50.733 --rc geninfo_unexecuted_blocks=1 00:08:50.733 00:08:50.733 ' 00:08:50.733 07:04:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:50.733 07:04:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:08:50.733 07:04:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:50.733 07:04:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:50.733 07:04:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:50.733 07:04:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:50.733 07:04:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:50.733 07:04:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:50.733 07:04:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:50.733 07:04:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:50.733 07:04:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:50.733 07:04:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:50.733 07:04:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:08:50.733 07:04:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:08:50.733 07:04:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:50.733 07:04:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:50.733 07:04:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:50.733 07:04:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:50.733 07:04:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:50.733 07:04:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:08:50.733 07:04:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:50.733 07:04:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:50.733 07:04:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:50.733 07:04:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:50.733 07:04:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:50.733 07:04:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:50.733 07:04:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:08:50.733 07:04:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:50.733 07:04:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:08:50.733 07:04:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:50.733 07:04:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:50.733 07:04:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:50.733 07:04:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:50.733 07:04:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:50.733 07:04:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:50.733 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:50.733 07:04:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:50.733 07:04:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:50.733 07:04:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:50.733 07:04:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:50.733 07:04:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target 
-- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:50.733 07:04:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:50.733 07:04:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:08:50.733 07:04:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:50.733 07:04:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:50.733 07:04:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:50.733 07:04:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:50.733 07:04:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:50.733 07:04:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:50.733 07:04:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:50.733 07:04:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:50.733 07:04:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:50.733 07:04:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:50.733 07:04:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:08:50.733 07:04:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:08:57.305 07:05:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:57.305 07:05:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:08:57.305 07:05:00 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:57.305 07:05:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:57.305 07:05:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:57.305 07:05:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:57.305 07:05:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:57.305 07:05:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:08:57.305 07:05:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:57.305 07:05:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:08:57.305 07:05:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:08:57.305 07:05:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:08:57.305 07:05:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:08:57.305 07:05:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:08:57.305 07:05:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:08:57.305 07:05:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:57.305 07:05:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:57.305 07:05:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:57.305 07:05:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:57.305 07:05:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:57.305 07:05:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:57.305 07:05:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:57.305 07:05:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:57.305 07:05:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:57.305 07:05:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:57.305 07:05:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:57.305 07:05:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:57.305 07:05:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:57.305 07:05:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:57.305 07:05:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:57.305 07:05:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:57.305 07:05:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:57.305 07:05:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:57.305 07:05:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:57.305 07:05:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:57.305 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:57.305 07:05:00 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:57.305 07:05:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:57.305 07:05:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:57.305 07:05:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:57.305 07:05:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:57.305 07:05:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:57.305 07:05:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:57.305 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:57.305 07:05:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:57.305 07:05:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:57.305 07:05:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:57.305 07:05:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:57.305 07:05:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:57.305 07:05:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:57.305 07:05:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:57.305 07:05:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:57.305 07:05:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:57.305 07:05:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:57.305 07:05:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:57.305 07:05:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:57.305 07:05:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:57.305 07:05:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:57.305 07:05:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:57.305 07:05:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:57.305 Found net devices under 0000:86:00.0: cvl_0_0 00:08:57.305 07:05:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:57.305 07:05:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:57.305 07:05:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:57.305 07:05:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:57.305 07:05:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:57.305 07:05:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:57.305 07:05:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:57.305 07:05:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:57.305 07:05:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:57.305 Found net devices under 0000:86:00.1: cvl_0_1 
00:08:57.305 07:05:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:57.305 07:05:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:57.305 07:05:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:08:57.305 07:05:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:57.305 07:05:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:57.305 07:05:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:57.305 07:05:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:57.305 07:05:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:57.305 07:05:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:57.305 07:05:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:57.305 07:05:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:57.305 07:05:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:57.305 07:05:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:57.305 07:05:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:57.305 07:05:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:57.305 07:05:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:57.305 07:05:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:08:57.305 07:05:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:57.305 07:05:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:57.306 07:05:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:57.306 07:05:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:57.306 07:05:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:57.306 07:05:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:57.306 07:05:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:57.306 07:05:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:57.306 07:05:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:57.306 07:05:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:57.306 07:05:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:57.306 07:05:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:57.306 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:57.306 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.291 ms 00:08:57.306 00:08:57.306 --- 10.0.0.2 ping statistics --- 00:08:57.306 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:57.306 rtt min/avg/max/mdev = 0.291/0.291/0.291/0.000 ms 00:08:57.306 07:05:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:57.306 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:57.306 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.187 ms 00:08:57.306 00:08:57.306 --- 10.0.0.1 ping statistics --- 00:08:57.306 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:57.306 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:08:57.306 07:05:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:57.306 07:05:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:08:57.306 07:05:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:57.306 07:05:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:57.306 07:05:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:57.306 07:05:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:57.306 07:05:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:57.306 07:05:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:57.306 07:05:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:57.306 07:05:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:08:57.306 07:05:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 
00:08:57.306 07:05:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:57.306 07:05:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:08:57.306 07:05:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=1077208 00:08:57.306 07:05:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 1077208 00:08:57.306 07:05:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:57.306 07:05:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@833 -- # '[' -z 1077208 ']' 00:08:57.306 07:05:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:57.306 07:05:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:57.306 07:05:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:57.306 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:57.306 07:05:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:57.306 07:05:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:08:57.306 [2024-11-20 07:05:01.115472] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 
00:08:57.306 [2024-11-20 07:05:01.115516] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:57.306 [2024-11-20 07:05:01.197581] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:57.306 [2024-11-20 07:05:01.240843] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:57.306 [2024-11-20 07:05:01.240877] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:57.306 [2024-11-20 07:05:01.240884] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:57.306 [2024-11-20 07:05:01.240890] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:57.306 [2024-11-20 07:05:01.240896] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:57.306 [2024-11-20 07:05:01.242422] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:57.306 [2024-11-20 07:05:01.242529] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:57.306 [2024-11-20 07:05:01.242627] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:57.306 [2024-11-20 07:05:01.242627] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:57.565 07:05:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:57.565 07:05:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@866 -- # return 0 00:08:57.565 07:05:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:57.565 07:05:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:57.565 07:05:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:08:57.565 07:05:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:57.565 07:05:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:57.823 [2024-11-20 07:05:02.159930] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:57.823 07:05:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:58.082 07:05:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:08:58.082 07:05:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:58.341 07:05:02 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:08:58.341 07:05:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:58.341 07:05:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:08:58.341 07:05:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:58.599 07:05:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:08:58.599 07:05:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:08:58.857 07:05:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:59.116 07:05:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:08:59.116 07:05:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:59.375 07:05:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:08:59.375 07:05:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:59.634 07:05:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:08:59.634 07:05:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 
'Malloc4 Malloc5 Malloc6' 00:08:59.634 07:05:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:59.894 07:05:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:08:59.894 07:05:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:00.153 07:05:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:00.153 07:05:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:00.412 07:05:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:00.412 [2024-11-20 07:05:04.893973] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:00.412 07:05:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:09:00.671 07:05:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:09:00.932 07:05:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 
00:09:02.314 07:05:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:09:02.314 07:05:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1200 -- # local i=0 00:09:02.314 07:05:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:09:02.314 07:05:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # [[ -n 4 ]] 00:09:02.314 07:05:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # nvme_device_counter=4 00:09:02.314 07:05:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # sleep 2 00:09:04.218 07:05:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:09:04.218 07:05:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:09:04.218 07:05:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:09:04.218 07:05:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # nvme_devices=4 00:09:04.218 07:05:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:09:04.218 07:05:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # return 0 00:09:04.218 07:05:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:04.219 [global] 00:09:04.219 thread=1 00:09:04.219 invalidate=1 00:09:04.219 rw=write 00:09:04.219 time_based=1 00:09:04.219 runtime=1 00:09:04.219 ioengine=libaio 00:09:04.219 direct=1 00:09:04.219 bs=4096 00:09:04.219 iodepth=1 00:09:04.219 norandommap=0 00:09:04.219 numjobs=1 00:09:04.219 00:09:04.219 
verify_dump=1 00:09:04.219 verify_backlog=512 00:09:04.219 verify_state_save=0 00:09:04.219 do_verify=1 00:09:04.219 verify=crc32c-intel 00:09:04.219 [job0] 00:09:04.219 filename=/dev/nvme0n1 00:09:04.219 [job1] 00:09:04.219 filename=/dev/nvme0n2 00:09:04.219 [job2] 00:09:04.219 filename=/dev/nvme0n3 00:09:04.219 [job3] 00:09:04.219 filename=/dev/nvme0n4 00:09:04.219 Could not set queue depth (nvme0n1) 00:09:04.219 Could not set queue depth (nvme0n2) 00:09:04.219 Could not set queue depth (nvme0n3) 00:09:04.219 Could not set queue depth (nvme0n4) 00:09:04.478 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:04.478 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:04.478 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:04.478 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:04.478 fio-3.35 00:09:04.478 Starting 4 threads 00:09:05.857 00:09:05.857 job0: (groupid=0, jobs=1): err= 0: pid=1078566: Wed Nov 20 07:05:10 2024 00:09:05.857 read: IOPS=1691, BW=6765KiB/s (6928kB/s)(6772KiB/1001msec) 00:09:05.857 slat (nsec): min=7350, max=45753, avg=9650.40, stdev=2107.93 00:09:05.857 clat (usec): min=160, max=41309, avg=347.73, stdev=1978.66 00:09:05.857 lat (usec): min=168, max=41321, avg=357.38, stdev=1979.05 00:09:05.857 clat percentiles (usec): 00:09:05.857 | 1.00th=[ 178], 5.00th=[ 186], 10.00th=[ 192], 20.00th=[ 206], 00:09:05.857 | 30.00th=[ 229], 40.00th=[ 241], 50.00th=[ 249], 60.00th=[ 260], 00:09:05.857 | 70.00th=[ 281], 80.00th=[ 293], 90.00th=[ 306], 95.00th=[ 314], 00:09:05.857 | 99.00th=[ 379], 99.50th=[ 457], 99.90th=[41157], 99.95th=[41157], 00:09:05.857 | 99.99th=[41157] 00:09:05.857 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:09:05.857 slat (nsec): min=10702, max=39371, avg=13329.11, 
stdev=2128.61 00:09:05.857 clat (usec): min=117, max=3276, avg=173.83, stdev=75.40 00:09:05.857 lat (usec): min=129, max=3290, avg=187.16, stdev=75.68 00:09:05.857 clat percentiles (usec): 00:09:05.857 | 1.00th=[ 127], 5.00th=[ 135], 10.00th=[ 141], 20.00th=[ 153], 00:09:05.857 | 30.00th=[ 161], 40.00th=[ 167], 50.00th=[ 172], 60.00th=[ 178], 00:09:05.857 | 70.00th=[ 182], 80.00th=[ 188], 90.00th=[ 198], 95.00th=[ 215], 00:09:05.857 | 99.00th=[ 245], 99.50th=[ 260], 99.90th=[ 388], 99.95th=[ 1057], 00:09:05.857 | 99.99th=[ 3261] 00:09:05.857 bw ( KiB/s): min= 7256, max= 7256, per=36.39%, avg=7256.00, stdev= 0.00, samples=1 00:09:05.857 iops : min= 1814, max= 1814, avg=1814.00, stdev= 0.00, samples=1 00:09:05.857 lat (usec) : 250=77.04%, 500=22.77% 00:09:05.857 lat (msec) : 2=0.05%, 4=0.03%, 50=0.11% 00:09:05.857 cpu : usr=2.80%, sys=6.90%, ctx=3743, majf=0, minf=1 00:09:05.857 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:05.857 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:05.857 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:05.857 issued rwts: total=1693,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:05.857 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:05.857 job1: (groupid=0, jobs=1): err= 0: pid=1078567: Wed Nov 20 07:05:10 2024 00:09:05.857 read: IOPS=1862, BW=7451KiB/s (7630kB/s)(7652KiB/1027msec) 00:09:05.857 slat (nsec): min=6816, max=39676, avg=7782.00, stdev=1609.58 00:09:05.857 clat (usec): min=155, max=41127, avg=350.36, stdev=2277.85 00:09:05.857 lat (usec): min=162, max=41139, avg=358.14, stdev=2278.45 00:09:05.857 clat percentiles (usec): 00:09:05.857 | 1.00th=[ 172], 5.00th=[ 180], 10.00th=[ 186], 20.00th=[ 192], 00:09:05.857 | 30.00th=[ 198], 40.00th=[ 206], 50.00th=[ 217], 60.00th=[ 235], 00:09:05.857 | 70.00th=[ 243], 80.00th=[ 249], 90.00th=[ 258], 95.00th=[ 265], 00:09:05.857 | 99.00th=[ 379], 99.50th=[ 445], 99.90th=[41157], 
99.95th=[41157], 00:09:05.857 | 99.99th=[41157] 00:09:05.857 write: IOPS=1994, BW=7977KiB/s (8168kB/s)(8192KiB/1027msec); 0 zone resets 00:09:05.857 slat (nsec): min=10260, max=41125, avg=11659.29, stdev=2024.68 00:09:05.857 clat (usec): min=104, max=2416, avg=149.02, stdev=59.11 00:09:05.857 lat (usec): min=115, max=2427, avg=160.68, stdev=59.40 00:09:05.857 clat percentiles (usec): 00:09:05.857 | 1.00th=[ 110], 5.00th=[ 116], 10.00th=[ 121], 20.00th=[ 127], 00:09:05.857 | 30.00th=[ 131], 40.00th=[ 135], 50.00th=[ 141], 60.00th=[ 145], 00:09:05.857 | 70.00th=[ 155], 80.00th=[ 176], 90.00th=[ 190], 95.00th=[ 198], 00:09:05.857 | 99.00th=[ 219], 99.50th=[ 237], 99.90th=[ 314], 99.95th=[ 889], 00:09:05.857 | 99.99th=[ 2409] 00:09:05.857 bw ( KiB/s): min= 6240, max=10144, per=41.08%, avg=8192.00, stdev=2760.54, samples=2 00:09:05.857 iops : min= 1560, max= 2536, avg=2048.00, stdev=690.14, samples=2 00:09:05.857 lat (usec) : 250=90.73%, 500=8.99%, 750=0.03%, 1000=0.08% 00:09:05.857 lat (msec) : 4=0.03%, 50=0.15% 00:09:05.857 cpu : usr=2.14%, sys=3.90%, ctx=3962, majf=0, minf=1 00:09:05.857 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:05.857 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:05.857 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:05.857 issued rwts: total=1913,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:05.857 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:05.857 job2: (groupid=0, jobs=1): err= 0: pid=1078571: Wed Nov 20 07:05:10 2024 00:09:05.857 read: IOPS=21, BW=87.0KiB/s (89.1kB/s)(88.0KiB/1011msec) 00:09:05.857 slat (nsec): min=12423, max=27773, avg=25152.55, stdev=3261.35 00:09:05.857 clat (usec): min=40725, max=41083, avg=40954.48, stdev=77.73 00:09:05.857 lat (usec): min=40737, max=41109, avg=40979.64, stdev=79.65 00:09:05.857 clat percentiles (usec): 00:09:05.857 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 
00:09:05.857 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:09:05.857 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:09:05.857 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:09:05.857 | 99.99th=[41157] 00:09:05.857 write: IOPS=506, BW=2026KiB/s (2074kB/s)(2048KiB/1011msec); 0 zone resets 00:09:05.857 slat (nsec): min=10850, max=44788, avg=13278.56, stdev=2808.62 00:09:05.857 clat (usec): min=154, max=337, avg=196.25, stdev=21.06 00:09:05.857 lat (usec): min=166, max=378, avg=209.53, stdev=21.49 00:09:05.857 clat percentiles (usec): 00:09:05.857 | 1.00th=[ 161], 5.00th=[ 167], 10.00th=[ 174], 20.00th=[ 180], 00:09:05.857 | 30.00th=[ 186], 40.00th=[ 190], 50.00th=[ 194], 60.00th=[ 198], 00:09:05.857 | 70.00th=[ 204], 80.00th=[ 210], 90.00th=[ 225], 95.00th=[ 233], 00:09:05.857 | 99.00th=[ 255], 99.50th=[ 269], 99.90th=[ 338], 99.95th=[ 338], 00:09:05.857 | 99.99th=[ 338] 00:09:05.857 bw ( KiB/s): min= 4096, max= 4096, per=20.54%, avg=4096.00, stdev= 0.00, samples=1 00:09:05.857 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:05.857 lat (usec) : 250=94.01%, 500=1.87% 00:09:05.857 lat (msec) : 50=4.12% 00:09:05.857 cpu : usr=0.99%, sys=0.40%, ctx=535, majf=0, minf=1 00:09:05.857 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:05.857 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:05.857 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:05.857 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:05.857 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:05.857 job3: (groupid=0, jobs=1): err= 0: pid=1078573: Wed Nov 20 07:05:10 2024 00:09:05.857 read: IOPS=21, BW=87.1KiB/s (89.2kB/s)(88.0KiB/1010msec) 00:09:05.857 slat (nsec): min=13038, max=27717, avg=24760.05, stdev=3067.67 00:09:05.858 clat (usec): min=40759, max=41133, avg=40960.71, stdev=81.07 
00:09:05.858 lat (usec): min=40785, max=41161, avg=40985.47, stdev=81.97 00:09:05.858 clat percentiles (usec): 00:09:05.858 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:09:05.858 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:09:05.858 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:09:05.858 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:09:05.858 | 99.99th=[41157] 00:09:05.858 write: IOPS=506, BW=2028KiB/s (2076kB/s)(2048KiB/1010msec); 0 zone resets 00:09:05.858 slat (nsec): min=11168, max=39022, avg=12941.14, stdev=2449.30 00:09:05.858 clat (usec): min=141, max=1612, avg=194.19, stdev=115.52 00:09:05.858 lat (usec): min=153, max=1631, avg=207.13, stdev=115.92 00:09:05.858 clat percentiles (usec): 00:09:05.858 | 1.00th=[ 153], 5.00th=[ 159], 10.00th=[ 165], 20.00th=[ 169], 00:09:05.858 | 30.00th=[ 174], 40.00th=[ 178], 50.00th=[ 182], 60.00th=[ 186], 00:09:05.858 | 70.00th=[ 190], 80.00th=[ 196], 90.00th=[ 204], 95.00th=[ 217], 00:09:05.858 | 99.00th=[ 314], 99.50th=[ 1287], 99.90th=[ 1614], 99.95th=[ 1614], 00:09:05.858 | 99.99th=[ 1614] 00:09:05.858 bw ( KiB/s): min= 4096, max= 4096, per=20.54%, avg=4096.00, stdev= 0.00, samples=1 00:09:05.858 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:05.858 lat (usec) : 250=94.19%, 500=0.75% 00:09:05.858 lat (msec) : 2=0.94%, 50=4.12% 00:09:05.858 cpu : usr=0.69%, sys=0.69%, ctx=535, majf=0, minf=1 00:09:05.858 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:05.858 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:05.858 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:05.858 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:05.858 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:05.858 00:09:05.858 Run status group 0 (all jobs): 00:09:05.858 READ: bw=13.9MiB/s (14.6MB/s), 
87.0KiB/s-7451KiB/s (89.1kB/s-7630kB/s), io=14.3MiB (14.9MB), run=1001-1027msec 00:09:05.858 WRITE: bw=19.5MiB/s (20.4MB/s), 2026KiB/s-8184KiB/s (2074kB/s-8380kB/s), io=20.0MiB (21.0MB), run=1001-1027msec 00:09:05.858 00:09:05.858 Disk stats (read/write): 00:09:05.858 nvme0n1: ios=1376/1536, merge=0/0, ticks=888/261, in_queue=1149, util=96.29% 00:09:05.858 nvme0n2: ios=1933/2048, merge=0/0, ticks=1436/293, in_queue=1729, util=96.60% 00:09:05.858 nvme0n3: ios=74/512, merge=0/0, ticks=1598/98, in_queue=1696, util=96.60% 00:09:05.858 nvme0n4: ios=74/512, merge=0/0, ticks=1070/96, in_queue=1166, util=96.68% 00:09:05.858 07:05:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:09:05.858 [global] 00:09:05.858 thread=1 00:09:05.858 invalidate=1 00:09:05.858 rw=randwrite 00:09:05.858 time_based=1 00:09:05.858 runtime=1 00:09:05.858 ioengine=libaio 00:09:05.858 direct=1 00:09:05.858 bs=4096 00:09:05.858 iodepth=1 00:09:05.858 norandommap=0 00:09:05.858 numjobs=1 00:09:05.858 00:09:05.858 verify_dump=1 00:09:05.858 verify_backlog=512 00:09:05.858 verify_state_save=0 00:09:05.858 do_verify=1 00:09:05.858 verify=crc32c-intel 00:09:05.858 [job0] 00:09:05.858 filename=/dev/nvme0n1 00:09:05.858 [job1] 00:09:05.858 filename=/dev/nvme0n2 00:09:05.858 [job2] 00:09:05.858 filename=/dev/nvme0n3 00:09:05.858 [job3] 00:09:05.858 filename=/dev/nvme0n4 00:09:05.858 Could not set queue depth (nvme0n1) 00:09:05.858 Could not set queue depth (nvme0n2) 00:09:05.858 Could not set queue depth (nvme0n3) 00:09:05.858 Could not set queue depth (nvme0n4) 00:09:06.117 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:06.117 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:06.117 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, 
(T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:06.117 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:06.117 fio-3.35 00:09:06.117 Starting 4 threads 00:09:07.499 00:09:07.499 job0: (groupid=0, jobs=1): err= 0: pid=1078983: Wed Nov 20 07:05:11 2024 00:09:07.499 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:09:07.499 slat (nsec): min=7329, max=36987, avg=9171.02, stdev=3239.83 00:09:07.499 clat (usec): min=187, max=41488, avg=1648.74, stdev=7336.58 00:09:07.499 lat (usec): min=195, max=41511, avg=1657.91, stdev=7338.58 00:09:07.499 clat percentiles (usec): 00:09:07.499 | 1.00th=[ 196], 5.00th=[ 204], 10.00th=[ 210], 20.00th=[ 227], 00:09:07.499 | 30.00th=[ 245], 40.00th=[ 265], 50.00th=[ 277], 60.00th=[ 281], 00:09:07.499 | 70.00th=[ 289], 80.00th=[ 293], 90.00th=[ 306], 95.00th=[ 355], 00:09:07.499 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:09:07.499 | 99.99th=[41681] 00:09:07.499 write: IOPS=676, BW=2705KiB/s (2770kB/s)(2708KiB/1001msec); 0 zone resets 00:09:07.499 slat (nsec): min=10281, max=49459, avg=12731.71, stdev=3017.85 00:09:07.499 clat (usec): min=128, max=1747, avg=203.40, stdev=86.41 00:09:07.499 lat (usec): min=141, max=1759, avg=216.14, stdev=86.91 00:09:07.499 clat percentiles (usec): 00:09:07.499 | 1.00th=[ 133], 5.00th=[ 141], 10.00th=[ 147], 20.00th=[ 163], 00:09:07.499 | 30.00th=[ 180], 40.00th=[ 188], 50.00th=[ 194], 60.00th=[ 202], 00:09:07.499 | 70.00th=[ 217], 80.00th=[ 235], 90.00th=[ 255], 95.00th=[ 269], 00:09:07.499 | 99.00th=[ 302], 99.50th=[ 635], 99.90th=[ 1745], 99.95th=[ 1745], 00:09:07.499 | 99.99th=[ 1745] 00:09:07.499 bw ( KiB/s): min= 4096, max= 4096, per=24.66%, avg=4096.00, stdev= 0.00, samples=1 00:09:07.499 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:07.499 lat (usec) : 250=64.17%, 500=33.81%, 750=0.25% 00:09:07.499 lat (msec) : 2=0.25%, 20=0.08%, 50=1.43% 00:09:07.499 cpu : usr=0.70%, 
sys=2.30%, ctx=1193, majf=0, minf=1 00:09:07.499 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:07.499 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:07.499 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:07.499 issued rwts: total=512,677,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:07.499 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:07.499 job1: (groupid=0, jobs=1): err= 0: pid=1079006: Wed Nov 20 07:05:11 2024 00:09:07.499 read: IOPS=1409, BW=5637KiB/s (5773kB/s)(5784KiB/1026msec) 00:09:07.499 slat (nsec): min=6934, max=25312, avg=8139.40, stdev=1823.11 00:09:07.499 clat (usec): min=171, max=41257, avg=493.85, stdev=3380.09 00:09:07.499 lat (usec): min=178, max=41266, avg=501.99, stdev=3380.55 00:09:07.499 clat percentiles (usec): 00:09:07.499 | 1.00th=[ 176], 5.00th=[ 182], 10.00th=[ 186], 20.00th=[ 190], 00:09:07.499 | 30.00th=[ 196], 40.00th=[ 202], 50.00th=[ 206], 60.00th=[ 208], 00:09:07.499 | 70.00th=[ 215], 80.00th=[ 225], 90.00th=[ 255], 95.00th=[ 265], 00:09:07.499 | 99.00th=[ 375], 99.50th=[40633], 99.90th=[41157], 99.95th=[41157], 00:09:07.499 | 99.99th=[41157] 00:09:07.499 write: IOPS=1497, BW=5988KiB/s (6132kB/s)(6144KiB/1026msec); 0 zone resets 00:09:07.499 slat (nsec): min=9703, max=38057, avg=11061.84, stdev=1813.99 00:09:07.499 clat (usec): min=116, max=1900, avg=178.27, stdev=60.75 00:09:07.499 lat (usec): min=126, max=1916, avg=189.33, stdev=61.19 00:09:07.499 clat percentiles (usec): 00:09:07.499 | 1.00th=[ 123], 5.00th=[ 129], 10.00th=[ 133], 20.00th=[ 139], 00:09:07.499 | 30.00th=[ 145], 40.00th=[ 153], 50.00th=[ 163], 60.00th=[ 180], 00:09:07.499 | 70.00th=[ 196], 80.00th=[ 235], 90.00th=[ 243], 95.00th=[ 247], 00:09:07.499 | 99.00th=[ 253], 99.50th=[ 258], 99.90th=[ 326], 99.95th=[ 1893], 00:09:07.499 | 99.99th=[ 1893] 00:09:07.499 bw ( KiB/s): min= 4096, max= 8192, per=36.99%, avg=6144.00, stdev=2896.31, samples=2 00:09:07.499 
iops : min= 1024, max= 2048, avg=1536.00, stdev=724.08, samples=2 00:09:07.499 lat (usec) : 250=93.26%, 500=6.34% 00:09:07.499 lat (msec) : 2=0.07%, 50=0.34% 00:09:07.499 cpu : usr=1.85%, sys=4.98%, ctx=2983, majf=0, minf=1 00:09:07.499 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:07.499 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:07.499 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:07.499 issued rwts: total=1446,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:07.499 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:07.499 job2: (groupid=0, jobs=1): err= 0: pid=1079040: Wed Nov 20 07:05:11 2024 00:09:07.499 read: IOPS=21, BW=85.9KiB/s (88.0kB/s)(88.0KiB/1024msec) 00:09:07.499 slat (nsec): min=12338, max=27017, avg=23730.18, stdev=2940.53 00:09:07.499 clat (usec): min=40831, max=41146, avg=40964.21, stdev=74.01 00:09:07.499 lat (usec): min=40854, max=41168, avg=40987.94, stdev=74.31 00:09:07.499 clat percentiles (usec): 00:09:07.499 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:09:07.499 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:09:07.499 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:09:07.499 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:09:07.499 | 99.99th=[41157] 00:09:07.499 write: IOPS=500, BW=2000KiB/s (2048kB/s)(2048KiB/1024msec); 0 zone resets 00:09:07.499 slat (nsec): min=10773, max=42832, avg=12335.13, stdev=2173.81 00:09:07.499 clat (usec): min=134, max=320, avg=220.30, stdev=31.23 00:09:07.499 lat (usec): min=146, max=363, avg=232.64, stdev=31.35 00:09:07.499 clat percentiles (usec): 00:09:07.499 | 1.00th=[ 147], 5.00th=[ 172], 10.00th=[ 188], 20.00th=[ 196], 00:09:07.499 | 30.00th=[ 202], 40.00th=[ 208], 50.00th=[ 219], 60.00th=[ 231], 00:09:07.499 | 70.00th=[ 237], 80.00th=[ 245], 90.00th=[ 258], 95.00th=[ 273], 00:09:07.499 | 
99.00th=[ 306], 99.50th=[ 314], 99.90th=[ 322], 99.95th=[ 322], 00:09:07.499 | 99.99th=[ 322] 00:09:07.499 bw ( KiB/s): min= 4096, max= 4096, per=24.66%, avg=4096.00, stdev= 0.00, samples=1 00:09:07.499 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:07.499 lat (usec) : 250=80.90%, 500=14.98% 00:09:07.499 lat (msec) : 50=4.12% 00:09:07.499 cpu : usr=0.39%, sys=0.98%, ctx=536, majf=0, minf=1 00:09:07.499 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:07.499 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:07.499 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:07.499 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:07.499 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:07.499 job3: (groupid=0, jobs=1): err= 0: pid=1079051: Wed Nov 20 07:05:11 2024 00:09:07.499 read: IOPS=1155, BW=4623KiB/s (4734kB/s)(4628KiB/1001msec) 00:09:07.499 slat (nsec): min=6351, max=30907, avg=7814.82, stdev=1883.47 00:09:07.500 clat (usec): min=151, max=42430, avg=617.61, stdev=3974.97 00:09:07.500 lat (usec): min=161, max=42438, avg=625.43, stdev=3975.50 00:09:07.500 clat percentiles (usec): 00:09:07.500 | 1.00th=[ 161], 5.00th=[ 167], 10.00th=[ 174], 20.00th=[ 180], 00:09:07.500 | 30.00th=[ 184], 40.00th=[ 190], 50.00th=[ 198], 60.00th=[ 229], 00:09:07.500 | 70.00th=[ 243], 80.00th=[ 269], 90.00th=[ 367], 95.00th=[ 371], 00:09:07.500 | 99.00th=[ 433], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:09:07.500 | 99.99th=[42206] 00:09:07.500 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:09:07.500 slat (nsec): min=8754, max=40950, avg=9993.39, stdev=1450.78 00:09:07.500 clat (usec): min=113, max=366, avg=166.05, stdev=44.91 00:09:07.500 lat (usec): min=123, max=376, avg=176.04, stdev=45.13 00:09:07.500 clat percentiles (usec): 00:09:07.500 | 1.00th=[ 118], 5.00th=[ 122], 10.00th=[ 124], 20.00th=[ 129], 
00:09:07.500 | 30.00th=[ 133], 40.00th=[ 139], 50.00th=[ 147], 60.00th=[ 153], 00:09:07.500 | 70.00th=[ 196], 80.00th=[ 212], 90.00th=[ 241], 95.00th=[ 249], 00:09:07.500 | 99.00th=[ 273], 99.50th=[ 281], 99.90th=[ 326], 99.95th=[ 367], 00:09:07.500 | 99.99th=[ 367] 00:09:07.500 bw ( KiB/s): min= 4096, max= 4096, per=24.66%, avg=4096.00, stdev= 0.00, samples=1 00:09:07.500 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:07.500 lat (usec) : 250=86.63%, 500=12.96% 00:09:07.500 lat (msec) : 50=0.41% 00:09:07.500 cpu : usr=1.30%, sys=2.50%, ctx=2693, majf=0, minf=2 00:09:07.500 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:07.500 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:07.500 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:07.500 issued rwts: total=1157,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:07.500 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:07.500 00:09:07.500 Run status group 0 (all jobs): 00:09:07.500 READ: bw=11.9MiB/s (12.5MB/s), 85.9KiB/s-5637KiB/s (88.0kB/s-5773kB/s), io=12.3MiB (12.8MB), run=1001-1026msec 00:09:07.500 WRITE: bw=16.2MiB/s (17.0MB/s), 2000KiB/s-6138KiB/s (2048kB/s-6285kB/s), io=16.6MiB (17.5MB), run=1001-1026msec 00:09:07.500 00:09:07.500 Disk stats (read/write): 00:09:07.500 nvme0n1: ios=57/512, merge=0/0, ticks=1567/107, in_queue=1674, util=88.98% 00:09:07.500 nvme0n2: ios=1307/1536, merge=0/0, ticks=512/253, in_queue=765, util=85.80% 00:09:07.500 nvme0n3: ios=54/512, merge=0/0, ticks=1579/111, in_queue=1690, util=97.17% 00:09:07.500 nvme0n4: ios=856/1024, merge=0/0, ticks=610/175, in_queue=785, util=89.49% 00:09:07.500 07:05:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:09:07.500 [global] 00:09:07.500 thread=1 00:09:07.500 invalidate=1 00:09:07.500 rw=write 
00:09:07.500 time_based=1 00:09:07.500 runtime=1 00:09:07.500 ioengine=libaio 00:09:07.500 direct=1 00:09:07.500 bs=4096 00:09:07.500 iodepth=128 00:09:07.500 norandommap=0 00:09:07.500 numjobs=1 00:09:07.500 00:09:07.500 verify_dump=1 00:09:07.500 verify_backlog=512 00:09:07.500 verify_state_save=0 00:09:07.500 do_verify=1 00:09:07.500 verify=crc32c-intel 00:09:07.500 [job0] 00:09:07.500 filename=/dev/nvme0n1 00:09:07.500 [job1] 00:09:07.500 filename=/dev/nvme0n2 00:09:07.500 [job2] 00:09:07.500 filename=/dev/nvme0n3 00:09:07.500 [job3] 00:09:07.500 filename=/dev/nvme0n4 00:09:07.500 Could not set queue depth (nvme0n1) 00:09:07.500 Could not set queue depth (nvme0n2) 00:09:07.500 Could not set queue depth (nvme0n3) 00:09:07.500 Could not set queue depth (nvme0n4) 00:09:07.826 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:07.826 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:07.826 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:07.826 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:07.826 fio-3.35 00:09:07.826 Starting 4 threads 00:09:08.842 00:09:08.842 job0: (groupid=0, jobs=1): err= 0: pid=1079474: Wed Nov 20 07:05:13 2024 00:09:08.842 read: IOPS=4199, BW=16.4MiB/s (17.2MB/s)(16.5MiB/1004msec) 00:09:08.842 slat (nsec): min=1304, max=14326k, avg=100603.64, stdev=644761.02 00:09:08.842 clat (usec): min=1405, max=47586, avg=12608.56, stdev=5847.76 00:09:08.842 lat (usec): min=4380, max=47596, avg=12709.16, stdev=5895.00 00:09:08.842 clat percentiles (usec): 00:09:08.842 | 1.00th=[ 4621], 5.00th=[ 8029], 10.00th=[ 8979], 20.00th=[ 9896], 00:09:08.842 | 30.00th=[10028], 40.00th=[10290], 50.00th=[10683], 60.00th=[11338], 00:09:08.842 | 70.00th=[12911], 80.00th=[13435], 90.00th=[16188], 95.00th=[25560], 
00:09:08.842 | 99.00th=[41157], 99.50th=[41681], 99.90th=[45876], 99.95th=[45876], 00:09:08.842 | 99.99th=[47449] 00:09:08.842 write: IOPS=4589, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1004msec); 0 zone resets 00:09:08.842 slat (usec): min=2, max=42117, avg=119.03, stdev=1003.84 00:09:08.842 clat (usec): min=5123, max=51232, avg=14273.64, stdev=8193.88 00:09:08.842 lat (usec): min=5151, max=69770, avg=14392.67, stdev=8297.80 00:09:08.842 clat percentiles (usec): 00:09:08.842 | 1.00th=[ 6521], 5.00th=[ 8291], 10.00th=[ 8586], 20.00th=[ 9896], 00:09:08.842 | 30.00th=[10290], 40.00th=[10421], 50.00th=[10552], 60.00th=[10683], 00:09:08.842 | 70.00th=[13435], 80.00th=[16909], 90.00th=[25297], 95.00th=[35914], 00:09:08.842 | 99.00th=[44303], 99.50th=[44303], 99.90th=[44303], 99.95th=[46400], 00:09:08.842 | 99.99th=[51119] 00:09:08.842 bw ( KiB/s): min=14648, max=22152, per=27.32%, avg=18400.00, stdev=5306.13, samples=2 00:09:08.842 iops : min= 3662, max= 5538, avg=4600.00, stdev=1326.53, samples=2 00:09:08.842 lat (msec) : 2=0.01%, 10=23.59%, 20=64.18%, 50=12.21%, 100=0.01% 00:09:08.842 cpu : usr=3.29%, sys=6.48%, ctx=491, majf=0, minf=1 00:09:08.842 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:09:08.842 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:08.842 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:08.842 issued rwts: total=4216,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:08.842 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:08.842 job1: (groupid=0, jobs=1): err= 0: pid=1079489: Wed Nov 20 07:05:13 2024 00:09:08.842 read: IOPS=5029, BW=19.6MiB/s (20.6MB/s)(19.7MiB/1004msec) 00:09:08.842 slat (nsec): min=1117, max=13560k, avg=92319.81, stdev=685285.85 00:09:08.842 clat (usec): min=1528, max=31909, avg=12772.97, stdev=4278.23 00:09:08.842 lat (usec): min=1958, max=31932, avg=12865.29, stdev=4330.81 00:09:08.842 clat percentiles (usec): 00:09:08.842 | 1.00th=[ 
2573], 5.00th=[ 5800], 10.00th=[ 7832], 20.00th=[ 8979], 00:09:08.842 | 30.00th=[10552], 40.00th=[11994], 50.00th=[12911], 60.00th=[13698], 00:09:08.842 | 70.00th=[15270], 80.00th=[16319], 90.00th=[18220], 95.00th=[19268], 00:09:08.842 | 99.00th=[21890], 99.50th=[22938], 99.90th=[23725], 99.95th=[29754], 00:09:08.842 | 99.99th=[31851] 00:09:08.842 write: IOPS=5099, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1004msec); 0 zone resets 00:09:08.842 slat (nsec): min=1952, max=9057.7k, avg=80045.22, stdev=571552.12 00:09:08.842 clat (usec): min=327, max=41725, avg=12299.89, stdev=6763.51 00:09:08.842 lat (usec): min=453, max=41732, avg=12379.93, stdev=6824.50 00:09:08.842 clat percentiles (usec): 00:09:08.842 | 1.00th=[ 1975], 5.00th=[ 3687], 10.00th=[ 5276], 20.00th=[ 7701], 00:09:08.842 | 30.00th=[ 8717], 40.00th=[ 9896], 50.00th=[10421], 60.00th=[13304], 00:09:08.842 | 70.00th=[13829], 80.00th=[15401], 90.00th=[20055], 95.00th=[27919], 00:09:08.842 | 99.00th=[34866], 99.50th=[38536], 99.90th=[41681], 99.95th=[41681], 00:09:08.842 | 99.99th=[41681] 00:09:08.842 bw ( KiB/s): min=19680, max=21280, per=30.41%, avg=20480.00, stdev=1131.37, samples=2 00:09:08.842 iops : min= 4920, max= 5320, avg=5120.00, stdev=282.84, samples=2 00:09:08.842 lat (usec) : 500=0.03%, 750=0.04%, 1000=0.06% 00:09:08.842 lat (msec) : 2=0.56%, 4=4.22%, 10=29.80%, 20=58.30%, 50=6.99% 00:09:08.842 cpu : usr=4.28%, sys=5.08%, ctx=384, majf=0, minf=2 00:09:08.842 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:09:08.842 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:08.842 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:08.842 issued rwts: total=5050,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:08.842 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:08.842 job2: (groupid=0, jobs=1): err= 0: pid=1079509: Wed Nov 20 07:05:13 2024 00:09:08.842 read: IOPS=3056, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1005msec) 
00:09:08.842 slat (nsec): min=1103, max=24805k, avg=186591.42, stdev=1168101.89 00:09:08.842 clat (usec): min=5351, max=71995, avg=23262.57, stdev=13469.30 00:09:08.842 lat (usec): min=5359, max=72010, avg=23449.16, stdev=13588.48 00:09:08.842 clat percentiles (usec): 00:09:08.842 | 1.00th=[ 7570], 5.00th=[10028], 10.00th=[10552], 20.00th=[11600], 00:09:08.842 | 30.00th=[12911], 40.00th=[16188], 50.00th=[17695], 60.00th=[20055], 00:09:08.842 | 70.00th=[32113], 80.00th=[39060], 90.00th=[42730], 95.00th=[46924], 00:09:08.842 | 99.00th=[62653], 99.50th=[62653], 99.90th=[63177], 99.95th=[69731], 00:09:08.842 | 99.99th=[71828] 00:09:08.842 write: IOPS=3209, BW=12.5MiB/s (13.1MB/s)(12.6MiB/1005msec); 0 zone resets 00:09:08.842 slat (usec): min=2, max=12605, avg=124.37, stdev=775.15 00:09:08.842 clat (usec): min=2340, max=67308, avg=17239.96, stdev=10018.14 00:09:08.842 lat (usec): min=3246, max=67310, avg=17364.33, stdev=10091.12 00:09:08.842 clat percentiles (usec): 00:09:08.842 | 1.00th=[ 4948], 5.00th=[ 7898], 10.00th=[ 8160], 20.00th=[ 8586], 00:09:08.842 | 30.00th=[11600], 40.00th=[12256], 50.00th=[12780], 60.00th=[15664], 00:09:08.842 | 70.00th=[19006], 80.00th=[26084], 90.00th=[33817], 95.00th=[39060], 00:09:08.842 | 99.00th=[44303], 99.50th=[48497], 99.90th=[49546], 99.95th=[67634], 00:09:08.842 | 99.99th=[67634] 00:09:08.842 bw ( KiB/s): min= 8400, max=16384, per=18.40%, avg=12392.00, stdev=5645.54, samples=2 00:09:08.842 iops : min= 2100, max= 4096, avg=3098.00, stdev=1411.39, samples=2 00:09:08.842 lat (msec) : 4=0.11%, 10=16.28%, 20=49.43%, 50=32.68%, 100=1.51% 00:09:08.842 cpu : usr=2.59%, sys=4.78%, ctx=262, majf=0, minf=1 00:09:08.842 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:09:08.842 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:08.842 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:08.842 issued rwts: total=3072,3226,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:09:08.842 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:08.842 job3: (groupid=0, jobs=1): err= 0: pid=1079515: Wed Nov 20 07:05:13 2024 00:09:08.842 read: IOPS=3496, BW=13.7MiB/s (14.3MB/s)(14.2MiB/1043msec) 00:09:08.842 slat (nsec): min=1130, max=13149k, avg=105827.56, stdev=703108.61 00:09:08.842 clat (usec): min=4735, max=77075, avg=14599.29, stdev=10155.22 00:09:08.842 lat (usec): min=4742, max=77084, avg=14705.12, stdev=10222.52 00:09:08.842 clat percentiles (usec): 00:09:08.842 | 1.00th=[ 4948], 5.00th=[ 7504], 10.00th=[ 8979], 20.00th=[10683], 00:09:08.842 | 30.00th=[11207], 40.00th=[11600], 50.00th=[11994], 60.00th=[12780], 00:09:08.842 | 70.00th=[13829], 80.00th=[15270], 90.00th=[19006], 95.00th=[30016], 00:09:08.842 | 99.00th=[68682], 99.50th=[72877], 99.90th=[77071], 99.95th=[77071], 00:09:08.842 | 99.99th=[77071] 00:09:08.842 write: IOPS=4418, BW=17.3MiB/s (18.1MB/s)(18.0MiB/1043msec); 0 zone resets 00:09:08.842 slat (usec): min=2, max=61092, avg=115.55, stdev=1056.75 00:09:08.842 clat (usec): min=1613, max=73811, avg=14647.20, stdev=8993.84 00:09:08.843 lat (usec): min=1624, max=93356, avg=14762.75, stdev=9093.20 00:09:08.843 clat percentiles (usec): 00:09:08.843 | 1.00th=[ 2933], 5.00th=[ 6128], 10.00th=[ 8717], 20.00th=[11076], 00:09:08.843 | 30.00th=[11469], 40.00th=[11731], 50.00th=[11863], 60.00th=[11994], 00:09:08.843 | 70.00th=[12256], 80.00th=[19268], 90.00th=[23987], 95.00th=[26870], 00:09:08.843 | 99.00th=[56361], 99.50th=[66323], 99.90th=[73925], 99.95th=[73925], 00:09:08.843 | 99.99th=[73925] 00:09:08.843 bw ( KiB/s): min=13112, max=23240, per=26.99%, avg=18176.00, stdev=7161.58, samples=2 00:09:08.843 iops : min= 3278, max= 5810, avg=4544.00, stdev=1790.39, samples=2 00:09:08.843 lat (msec) : 2=0.23%, 4=0.86%, 10=12.66%, 20=71.96%, 50=11.80% 00:09:08.843 lat (msec) : 100=2.50% 00:09:08.843 cpu : usr=2.50%, sys=3.93%, ctx=647, majf=0, minf=1 00:09:08.843 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, 
>=64=99.2% 00:09:08.843 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:08.843 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:08.843 issued rwts: total=3647,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:08.843 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:08.843 00:09:08.843 Run status group 0 (all jobs): 00:09:08.843 READ: bw=59.9MiB/s (62.8MB/s), 11.9MiB/s-19.6MiB/s (12.5MB/s-20.6MB/s), io=62.4MiB (65.5MB), run=1004-1043msec 00:09:08.843 WRITE: bw=65.8MiB/s (69.0MB/s), 12.5MiB/s-19.9MiB/s (13.1MB/s-20.9MB/s), io=68.6MiB (71.9MB), run=1004-1043msec 00:09:08.843 00:09:08.843 Disk stats (read/write): 00:09:08.843 nvme0n1: ios=3419/3584, merge=0/0, ticks=22802/26263, in_queue=49065, util=91.68% 00:09:08.843 nvme0n2: ios=4195/4608, merge=0/0, ticks=44921/46709, in_queue=91630, util=96.45% 00:09:08.843 nvme0n3: ios=2755/3072, merge=0/0, ticks=28371/24675, in_queue=53046, util=95.73% 00:09:08.843 nvme0n4: ios=3092/3554, merge=0/0, ticks=36936/39205, in_queue=76141, util=99.48% 00:09:08.843 07:05:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:09:09.101 [global] 00:09:09.101 thread=1 00:09:09.101 invalidate=1 00:09:09.101 rw=randwrite 00:09:09.101 time_based=1 00:09:09.101 runtime=1 00:09:09.101 ioengine=libaio 00:09:09.101 direct=1 00:09:09.101 bs=4096 00:09:09.101 iodepth=128 00:09:09.101 norandommap=0 00:09:09.101 numjobs=1 00:09:09.101 00:09:09.101 verify_dump=1 00:09:09.101 verify_backlog=512 00:09:09.101 verify_state_save=0 00:09:09.101 do_verify=1 00:09:09.101 verify=crc32c-intel 00:09:09.101 [job0] 00:09:09.101 filename=/dev/nvme0n1 00:09:09.101 [job1] 00:09:09.101 filename=/dev/nvme0n2 00:09:09.101 [job2] 00:09:09.101 filename=/dev/nvme0n3 00:09:09.101 [job3] 00:09:09.101 filename=/dev/nvme0n4 00:09:09.101 Could not set queue depth (nvme0n1) 
00:09:09.101 Could not set queue depth (nvme0n2) 00:09:09.101 Could not set queue depth (nvme0n3) 00:09:09.101 Could not set queue depth (nvme0n4) 00:09:09.365 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:09.365 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:09.365 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:09.365 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:09.365 fio-3.35 00:09:09.365 Starting 4 threads 00:09:10.744 00:09:10.744 job0: (groupid=0, jobs=1): err= 0: pid=1079913: Wed Nov 20 07:05:14 2024 00:09:10.744 read: IOPS=5104, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1003msec) 00:09:10.744 slat (nsec): min=1561, max=13597k, avg=92467.51, stdev=596244.17 00:09:10.744 clat (usec): min=5848, max=39190, avg=11847.93, stdev=4002.29 00:09:10.744 lat (usec): min=5857, max=39201, avg=11940.40, stdev=4047.52 00:09:10.744 clat percentiles (usec): 00:09:10.744 | 1.00th=[ 8094], 5.00th=[ 8717], 10.00th=[ 9241], 20.00th=[10028], 00:09:10.744 | 30.00th=[10290], 40.00th=[10552], 50.00th=[10683], 60.00th=[10945], 00:09:10.744 | 70.00th=[11338], 80.00th=[11994], 90.00th=[14222], 95.00th=[22414], 00:09:10.744 | 99.00th=[27132], 99.50th=[27132], 99.90th=[27132], 99.95th=[34341], 00:09:10.744 | 99.99th=[39060] 00:09:10.744 write: IOPS=5274, BW=20.6MiB/s (21.6MB/s)(20.7MiB/1003msec); 0 zone resets 00:09:10.744 slat (usec): min=2, max=29048, avg=92.37, stdev=771.36 00:09:10.744 clat (usec): min=1849, max=53764, avg=12556.04, stdev=5765.98 00:09:10.744 lat (usec): min=1858, max=53786, avg=12648.41, stdev=5837.43 00:09:10.744 clat percentiles (usec): 00:09:10.744 | 1.00th=[ 3458], 5.00th=[ 9110], 10.00th=[10028], 20.00th=[10159], 00:09:10.744 | 30.00th=[10290], 40.00th=[10421], 50.00th=[10552], 60.00th=[10552], 
00:09:10.744 | 70.00th=[10683], 80.00th=[11600], 90.00th=[23200], 95.00th=[24773], 00:09:10.744 | 99.00th=[30540], 99.50th=[36963], 99.90th=[41681], 99.95th=[45876], 00:09:10.744 | 99.99th=[53740] 00:09:10.744 bw ( KiB/s): min=18024, max=23280, per=29.40%, avg=20652.00, stdev=3716.55, samples=2 00:09:10.744 iops : min= 4506, max= 5820, avg=5163.00, stdev=929.14, samples=2 00:09:10.744 lat (msec) : 2=0.11%, 4=0.64%, 10=13.99%, 20=74.33%, 50=10.92% 00:09:10.744 lat (msec) : 100=0.01% 00:09:10.744 cpu : usr=3.99%, sys=6.19%, ctx=454, majf=0, minf=1 00:09:10.744 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:09:10.744 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:10.744 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:10.744 issued rwts: total=5120,5290,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:10.744 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:10.744 job1: (groupid=0, jobs=1): err= 0: pid=1079914: Wed Nov 20 07:05:14 2024 00:09:10.744 read: IOPS=5535, BW=21.6MiB/s (22.7MB/s)(21.7MiB/1003msec) 00:09:10.744 slat (nsec): min=1395, max=14186k, avg=90206.65, stdev=593779.97 00:09:10.744 clat (usec): min=1589, max=40950, avg=11151.17, stdev=4766.83 00:09:10.744 lat (usec): min=4814, max=40956, avg=11241.37, stdev=4806.07 00:09:10.744 clat percentiles (usec): 00:09:10.744 | 1.00th=[ 6521], 5.00th=[ 7439], 10.00th=[ 7963], 20.00th=[ 9634], 00:09:10.744 | 30.00th=[ 9896], 40.00th=[10028], 50.00th=[10159], 60.00th=[10290], 00:09:10.744 | 70.00th=[10421], 80.00th=[11469], 90.00th=[13042], 95.00th=[18482], 00:09:10.744 | 99.00th=[40633], 99.50th=[40633], 99.90th=[41157], 99.95th=[41157], 00:09:10.744 | 99.99th=[41157] 00:09:10.744 write: IOPS=5615, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1003msec); 0 zone resets 00:09:10.744 slat (usec): min=2, max=27177, avg=83.50, stdev=640.32 00:09:10.744 clat (usec): min=5833, max=51747, avg=11563.74, stdev=4606.10 00:09:10.744 lat (usec): 
min=5845, max=51778, avg=11647.24, stdev=4667.16 00:09:10.744 clat percentiles (usec): 00:09:10.744 | 1.00th=[ 6456], 5.00th=[ 8455], 10.00th=[ 9372], 20.00th=[ 9765], 00:09:10.744 | 30.00th=[10028], 40.00th=[10159], 50.00th=[10290], 60.00th=[10290], 00:09:10.744 | 70.00th=[10421], 80.00th=[10552], 90.00th=[14091], 95.00th=[23987], 00:09:10.744 | 99.00th=[30540], 99.50th=[30540], 99.90th=[30540], 99.95th=[43779], 00:09:10.744 | 99.99th=[51643] 00:09:10.744 bw ( KiB/s): min=20480, max=24576, per=32.07%, avg=22528.00, stdev=2896.31, samples=2 00:09:10.744 iops : min= 5120, max= 6144, avg=5632.00, stdev=724.08, samples=2 00:09:10.744 lat (msec) : 2=0.01%, 10=33.16%, 20=59.98%, 50=6.84%, 100=0.01% 00:09:10.744 cpu : usr=3.69%, sys=5.89%, ctx=759, majf=0, minf=1 00:09:10.744 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:09:10.744 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:10.744 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:10.744 issued rwts: total=5552,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:10.744 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:10.744 job2: (groupid=0, jobs=1): err= 0: pid=1079915: Wed Nov 20 07:05:14 2024 00:09:10.744 read: IOPS=3038, BW=11.9MiB/s (12.4MB/s)(12.0MiB/1011msec) 00:09:10.744 slat (nsec): min=1344, max=17196k, avg=159740.35, stdev=1107899.54 00:09:10.744 clat (usec): min=6140, max=56767, avg=18655.85, stdev=7594.19 00:09:10.744 lat (usec): min=6340, max=56778, avg=18815.59, stdev=7679.07 00:09:10.744 clat percentiles (usec): 00:09:10.744 | 1.00th=[ 8586], 5.00th=[11338], 10.00th=[11994], 20.00th=[13042], 00:09:10.744 | 30.00th=[13304], 40.00th=[14877], 50.00th=[16712], 60.00th=[17695], 00:09:10.744 | 70.00th=[21103], 80.00th=[23462], 90.00th=[28443], 95.00th=[35390], 00:09:10.744 | 99.00th=[45876], 99.50th=[46924], 99.90th=[56886], 99.95th=[56886], 00:09:10.744 | 99.99th=[56886] 00:09:10.744 write: IOPS=3210, 
BW=12.5MiB/s (13.1MB/s)(12.7MiB/1011msec); 0 zone resets 00:09:10.744 slat (usec): min=2, max=18626, avg=150.71, stdev=823.74 00:09:10.744 clat (usec): min=1412, max=56776, avg=21858.78, stdev=9354.72 00:09:10.744 lat (usec): min=1426, max=56801, avg=22009.50, stdev=9434.66 00:09:10.744 clat percentiles (usec): 00:09:10.744 | 1.00th=[ 5932], 5.00th=[ 7373], 10.00th=[10683], 20.00th=[11994], 00:09:10.744 | 30.00th=[18220], 40.00th=[22414], 50.00th=[23200], 60.00th=[23462], 00:09:10.744 | 70.00th=[23987], 80.00th=[25035], 90.00th=[34341], 95.00th=[40109], 00:09:10.744 | 99.00th=[50070], 99.50th=[51119], 99.90th=[53740], 99.95th=[56886], 00:09:10.744 | 99.99th=[56886] 00:09:10.744 bw ( KiB/s): min=12288, max=12664, per=17.76%, avg=12476.00, stdev=265.87, samples=2 00:09:10.744 iops : min= 3072, max= 3166, avg=3119.00, stdev=66.47, samples=2 00:09:10.744 lat (msec) : 2=0.03%, 4=0.19%, 10=4.51%, 20=46.85%, 50=47.53% 00:09:10.744 lat (msec) : 100=0.89% 00:09:10.744 cpu : usr=1.98%, sys=4.85%, ctx=327, majf=0, minf=2 00:09:10.744 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:09:10.744 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:10.744 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:10.744 issued rwts: total=3072,3246,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:10.744 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:10.744 job3: (groupid=0, jobs=1): err= 0: pid=1079916: Wed Nov 20 07:05:14 2024 00:09:10.744 read: IOPS=3108, BW=12.1MiB/s (12.7MB/s)(12.2MiB/1006msec) 00:09:10.744 slat (nsec): min=1395, max=24318k, avg=162551.32, stdev=1226305.42 00:09:10.744 clat (usec): min=4223, max=92210, avg=17767.44, stdev=13196.78 00:09:10.744 lat (usec): min=4233, max=92218, avg=17929.99, stdev=13340.53 00:09:10.744 clat percentiles (usec): 00:09:10.744 | 1.00th=[ 6063], 5.00th=[ 7635], 10.00th=[ 8848], 20.00th=[ 9241], 00:09:10.744 | 30.00th=[10552], 40.00th=[11338], 
50.00th=[13698], 60.00th=[15533], 00:09:10.744 | 70.00th=[16188], 80.00th=[20317], 90.00th=[37487], 95.00th=[41157], 00:09:10.744 | 99.00th=[71828], 99.50th=[88605], 99.90th=[91751], 99.95th=[91751], 00:09:10.744 | 99.99th=[91751] 00:09:10.744 write: IOPS=3562, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1006msec); 0 zone resets 00:09:10.744 slat (usec): min=2, max=10564, avg=124.08, stdev=625.74 00:09:10.744 clat (usec): min=296, max=123315, avg=20064.49, stdev=18470.71 00:09:10.744 lat (usec): min=718, max=123323, avg=20188.57, stdev=18571.55 00:09:10.744 clat percentiles (msec): 00:09:10.744 | 1.00th=[ 4], 5.00th=[ 5], 10.00th=[ 7], 20.00th=[ 9], 00:09:10.744 | 30.00th=[ 10], 40.00th=[ 10], 50.00th=[ 14], 60.00th=[ 22], 00:09:10.744 | 70.00th=[ 24], 80.00th=[ 25], 90.00th=[ 41], 95.00th=[ 62], 00:09:10.744 | 99.00th=[ 102], 99.50th=[ 111], 99.90th=[ 124], 99.95th=[ 124], 00:09:10.744 | 99.99th=[ 124] 00:09:10.744 bw ( KiB/s): min= 8128, max=19960, per=20.00%, avg=14044.00, stdev=8366.49, samples=2 00:09:10.744 iops : min= 2032, max= 4990, avg=3511.00, stdev=2091.62, samples=2 00:09:10.744 lat (usec) : 500=0.01%, 750=0.06% 00:09:10.744 lat (msec) : 2=0.16%, 4=0.76%, 10=34.88%, 20=31.52%, 50=27.33% 00:09:10.744 lat (msec) : 100=4.59%, 250=0.69% 00:09:10.744 cpu : usr=2.49%, sys=4.58%, ctx=377, majf=0, minf=1 00:09:10.744 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:09:10.744 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:10.744 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:10.744 issued rwts: total=3127,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:10.744 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:10.744 00:09:10.744 Run status group 0 (all jobs): 00:09:10.745 READ: bw=65.2MiB/s (68.4MB/s), 11.9MiB/s-21.6MiB/s (12.4MB/s-22.7MB/s), io=65.9MiB (69.1MB), run=1003-1011msec 00:09:10.745 WRITE: bw=68.6MiB/s (71.9MB/s), 12.5MiB/s-21.9MiB/s (13.1MB/s-23.0MB/s), 
io=69.3MiB (72.7MB), run=1003-1011msec 00:09:10.745 00:09:10.745 Disk stats (read/write): 00:09:10.745 nvme0n1: ios=4148/4499, merge=0/0, ticks=23936/31517, in_queue=55453, util=98.10% 00:09:10.745 nvme0n2: ios=4657/4719, merge=0/0, ticks=29779/32581, in_queue=62360, util=98.17% 00:09:10.745 nvme0n3: ios=2560/2639, merge=0/0, ticks=47985/55649, in_queue=103634, util=88.97% 00:09:10.745 nvme0n4: ios=2995/3072, merge=0/0, ticks=35922/33720, in_queue=69642, util=98.11% 00:09:10.745 07:05:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:09:10.745 07:05:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=1080112 00:09:10.745 07:05:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:09:10.745 07:05:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:09:10.745 [global] 00:09:10.745 thread=1 00:09:10.745 invalidate=1 00:09:10.745 rw=read 00:09:10.745 time_based=1 00:09:10.745 runtime=10 00:09:10.745 ioengine=libaio 00:09:10.745 direct=1 00:09:10.745 bs=4096 00:09:10.745 iodepth=1 00:09:10.745 norandommap=1 00:09:10.745 numjobs=1 00:09:10.745 00:09:10.745 [job0] 00:09:10.745 filename=/dev/nvme0n1 00:09:10.745 [job1] 00:09:10.745 filename=/dev/nvme0n2 00:09:10.745 [job2] 00:09:10.745 filename=/dev/nvme0n3 00:09:10.745 [job3] 00:09:10.745 filename=/dev/nvme0n4 00:09:10.745 Could not set queue depth (nvme0n1) 00:09:10.745 Could not set queue depth (nvme0n2) 00:09:10.745 Could not set queue depth (nvme0n3) 00:09:10.745 Could not set queue depth (nvme0n4) 00:09:11.003 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:11.003 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:11.003 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, 
ioengine=libaio, iodepth=1 00:09:11.003 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:11.003 fio-3.35 00:09:11.003 Starting 4 threads 00:09:13.539 07:05:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:09:13.797 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=10530816, buflen=4096 00:09:13.797 fio: pid=1080295, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:13.797 07:05:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:09:14.055 07:05:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:14.055 07:05:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:09:14.055 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=331776, buflen=4096 00:09:14.055 fio: pid=1080294, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:14.055 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=1957888, buflen=4096 00:09:14.055 fio: pid=1080292, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:14.055 07:05:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:14.055 07:05:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:09:14.315 07:05:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs 
$raid_malloc_bdevs $concat_malloc_bdevs 00:09:14.315 07:05:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:09:14.315 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=1622016, buflen=4096 00:09:14.315 fio: pid=1080293, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:14.315 00:09:14.315 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1080292: Wed Nov 20 07:05:18 2024 00:09:14.315 read: IOPS=153, BW=611KiB/s (626kB/s)(1912KiB/3129msec) 00:09:14.315 slat (usec): min=6, max=843, avg=12.08, stdev=38.58 00:09:14.315 clat (usec): min=188, max=44355, avg=6485.24, stdev=14706.91 00:09:14.315 lat (usec): min=196, max=44377, avg=6497.30, stdev=14716.35 00:09:14.315 clat percentiles (usec): 00:09:14.315 | 1.00th=[ 202], 5.00th=[ 227], 10.00th=[ 231], 20.00th=[ 237], 00:09:14.315 | 30.00th=[ 243], 40.00th=[ 247], 50.00th=[ 249], 60.00th=[ 253], 00:09:14.315 | 70.00th=[ 260], 80.00th=[ 273], 90.00th=[41157], 95.00th=[41157], 00:09:14.315 | 99.00th=[41681], 99.50th=[42206], 99.90th=[44303], 99.95th=[44303], 00:09:14.315 | 99.99th=[44303] 00:09:14.315 bw ( KiB/s): min= 93, max= 3312, per=15.00%, avg=632.83, stdev=1312.52, samples=6 00:09:14.315 iops : min= 23, max= 828, avg=158.17, stdev=328.15, samples=6 00:09:14.315 lat (usec) : 250=51.98%, 500=32.36%, 750=0.21% 00:09:14.315 lat (msec) : 50=15.24% 00:09:14.315 cpu : usr=0.06%, sys=0.29%, ctx=481, majf=0, minf=1 00:09:14.315 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:14.315 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:14.315 complete : 0=0.2%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:14.315 issued rwts: total=479,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:14.315 latency : target=0, window=0, percentile=100.00%, 
depth=1 00:09:14.315 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1080293: Wed Nov 20 07:05:18 2024 00:09:14.315 read: IOPS=118, BW=473KiB/s (484kB/s)(1584KiB/3348msec) 00:09:14.315 slat (nsec): min=6875, max=62349, avg=11223.89, stdev=7024.78 00:09:14.315 clat (usec): min=194, max=41976, avg=8386.70, stdev=16316.87 00:09:14.315 lat (usec): min=201, max=42001, avg=8397.89, stdev=16323.10 00:09:14.315 clat percentiles (usec): 00:09:14.315 | 1.00th=[ 215], 5.00th=[ 229], 10.00th=[ 233], 20.00th=[ 241], 00:09:14.315 | 30.00th=[ 243], 40.00th=[ 247], 50.00th=[ 251], 60.00th=[ 255], 00:09:14.315 | 70.00th=[ 265], 80.00th=[ 570], 90.00th=[41157], 95.00th=[41157], 00:09:14.315 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:09:14.315 | 99.99th=[42206] 00:09:14.315 bw ( KiB/s): min= 93, max= 2616, per=12.25%, avg=516.83, stdev=1028.38, samples=6 00:09:14.315 iops : min= 23, max= 654, avg=129.17, stdev=257.12, samples=6 00:09:14.315 lat (usec) : 250=47.86%, 500=31.49%, 750=0.50% 00:09:14.315 lat (msec) : 50=19.90% 00:09:14.315 cpu : usr=0.12%, sys=0.21%, ctx=399, majf=0, minf=2 00:09:14.315 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:14.315 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:14.315 complete : 0=0.3%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:14.315 issued rwts: total=397,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:14.315 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:14.315 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1080294: Wed Nov 20 07:05:18 2024 00:09:14.315 read: IOPS=27, BW=110KiB/s (113kB/s)(324KiB/2940msec) 00:09:14.315 slat (nsec): min=7663, max=56978, avg=22106.59, stdev=5690.55 00:09:14.315 clat (usec): min=262, max=42071, avg=36004.02, stdev=13409.57 00:09:14.315 lat (usec): min=277, max=42095, 
avg=36026.12, stdev=13410.20 00:09:14.315 clat percentiles (usec): 00:09:14.315 | 1.00th=[ 265], 5.00th=[ 371], 10.00th=[ 660], 20.00th=[41157], 00:09:14.315 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:09:14.315 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:09:14.315 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:09:14.315 | 99.99th=[42206] 00:09:14.315 bw ( KiB/s): min= 96, max= 152, per=2.56%, avg=108.80, stdev=24.40, samples=5 00:09:14.315 iops : min= 24, max= 38, avg=27.20, stdev= 6.10, samples=5 00:09:14.315 lat (usec) : 500=8.54%, 750=2.44% 00:09:14.315 lat (msec) : 2=1.22%, 50=86.59% 00:09:14.315 cpu : usr=0.00%, sys=0.14%, ctx=83, majf=0, minf=1 00:09:14.315 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:14.315 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:14.315 complete : 0=1.2%, 4=98.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:14.315 issued rwts: total=82,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:14.315 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:14.315 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1080295: Wed Nov 20 07:05:18 2024 00:09:14.315 read: IOPS=945, BW=3779KiB/s (3870kB/s)(10.0MiB/2721msec) 00:09:14.315 slat (nsec): min=7007, max=38557, avg=8415.11, stdev=2607.46 00:09:14.315 clat (usec): min=169, max=42054, avg=1037.85, stdev=5735.92 00:09:14.315 lat (usec): min=177, max=42083, avg=1046.27, stdev=5737.68 00:09:14.315 clat percentiles (usec): 00:09:14.315 | 1.00th=[ 182], 5.00th=[ 190], 10.00th=[ 194], 20.00th=[ 200], 00:09:14.315 | 30.00th=[ 204], 40.00th=[ 208], 50.00th=[ 210], 60.00th=[ 215], 00:09:14.315 | 70.00th=[ 217], 80.00th=[ 223], 90.00th=[ 229], 95.00th=[ 245], 00:09:14.315 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:09:14.315 | 99.99th=[42206] 00:09:14.315 bw ( KiB/s): 
min= 104, max=13040, per=64.14%, avg=2702.40, stdev=5778.90, samples=5 00:09:14.315 iops : min= 26, max= 3260, avg=675.60, stdev=1444.73, samples=5 00:09:14.315 lat (usec) : 250=95.33%, 500=2.45%, 750=0.12% 00:09:14.315 lat (msec) : 4=0.04%, 50=2.02% 00:09:14.315 cpu : usr=0.48%, sys=1.58%, ctx=2572, majf=0, minf=2 00:09:14.315 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:14.315 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:14.315 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:14.316 issued rwts: total=2572,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:14.316 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:14.316 00:09:14.316 Run status group 0 (all jobs): 00:09:14.316 READ: bw=4213KiB/s (4314kB/s), 110KiB/s-3779KiB/s (113kB/s-3870kB/s), io=13.8MiB (14.4MB), run=2721-3348msec 00:09:14.316 00:09:14.316 Disk stats (read/write): 00:09:14.316 nvme0n1: ios=490/0, merge=0/0, ticks=3395/0, in_queue=3395, util=96.55% 00:09:14.316 nvme0n2: ios=390/0, merge=0/0, ticks=3069/0, in_queue=3069, util=96.04% 00:09:14.316 nvme0n3: ios=78/0, merge=0/0, ticks=2836/0, in_queue=2836, util=96.52% 00:09:14.316 nvme0n4: ios=2147/0, merge=0/0, ticks=2558/0, in_queue=2558, util=96.48% 00:09:14.574 07:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:14.574 07:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:09:14.832 07:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:14.832 07:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:09:15.090 07:05:19 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:15.090 07:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:09:15.090 07:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:15.090 07:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:09:15.350 07:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:09:15.350 07:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 1080112 00:09:15.350 07:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:09:15.350 07:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:15.610 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:15.610 07:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:15.610 07:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1221 -- # local i=0 00:09:15.610 07:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:09:15.610 07:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:15.610 07:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:09:15.610 07:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:15.610 07:05:19 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1233 -- # return 0 00:09:15.610 07:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:09:15.610 07:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:09:15.610 nvmf hotplug test: fio failed as expected 00:09:15.610 07:05:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:15.869 07:05:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:09:15.869 07:05:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:09:15.869 07:05:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:09:15.869 07:05:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:09:15.869 07:05:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:09:15.869 07:05:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:15.869 07:05:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:09:15.869 07:05:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:15.869 07:05:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:09:15.869 07:05:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:15.869 07:05:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:15.869 rmmod nvme_tcp 00:09:15.869 rmmod nvme_fabrics 00:09:15.869 rmmod nvme_keyring 00:09:15.869 07:05:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe 
-v -r nvme-fabrics 00:09:15.869 07:05:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:09:15.869 07:05:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:09:15.869 07:05:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 1077208 ']' 00:09:15.869 07:05:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 1077208 00:09:15.869 07:05:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@952 -- # '[' -z 1077208 ']' 00:09:15.870 07:05:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # kill -0 1077208 00:09:15.870 07:05:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@957 -- # uname 00:09:15.870 07:05:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:15.870 07:05:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1077208 00:09:15.870 07:05:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:15.870 07:05:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:15.870 07:05:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1077208' 00:09:15.870 killing process with pid 1077208 00:09:15.870 07:05:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@971 -- # kill 1077208 00:09:15.870 07:05:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@976 -- # wait 1077208 00:09:16.129 07:05:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:16.129 07:05:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:16.129 07:05:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target 
-- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:16.129 07:05:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:09:16.129 07:05:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:09:16.129 07:05:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:16.129 07:05:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:09:16.129 07:05:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:16.129 07:05:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:16.129 07:05:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:16.129 07:05:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:16.129 07:05:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:18.034 07:05:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:18.034 00:09:18.034 real 0m27.702s 00:09:18.034 user 1m49.428s 00:09:18.034 sys 0m8.193s 00:09:18.034 07:05:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:18.034 07:05:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:18.034 ************************************ 00:09:18.034 END TEST nvmf_fio_target 00:09:18.034 ************************************ 00:09:18.034 07:05:22 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:09:18.034 07:05:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:09:18.034 07:05:22 
nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:18.034 07:05:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:18.294 ************************************ 00:09:18.294 START TEST nvmf_bdevio 00:09:18.294 ************************************ 00:09:18.294 07:05:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:09:18.294 * Looking for test storage... 00:09:18.294 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:18.294 07:05:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:18.294 07:05:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lcov --version 00:09:18.294 07:05:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:18.294 07:05:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:18.294 07:05:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:18.294 07:05:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:18.294 07:05:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:18.294 07:05:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:09:18.294 07:05:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:09:18.294 07:05:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:09:18.294 07:05:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:09:18.294 07:05:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:09:18.294 07:05:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
scripts/common.sh@340 -- # ver1_l=2 00:09:18.294 07:05:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:09:18.294 07:05:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:18.294 07:05:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:09:18.294 07:05:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:09:18.294 07:05:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:18.294 07:05:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:18.294 07:05:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:09:18.294 07:05:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:09:18.294 07:05:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:18.294 07:05:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:09:18.294 07:05:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:09:18.294 07:05:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:09:18.294 07:05:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:09:18.294 07:05:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:18.294 07:05:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:09:18.294 07:05:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:09:18.294 07:05:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:18.294 07:05:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:18.294 07:05:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
scripts/common.sh@368 -- # return 0 00:09:18.294 07:05:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:18.294 07:05:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:18.294 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:18.294 --rc genhtml_branch_coverage=1 00:09:18.294 --rc genhtml_function_coverage=1 00:09:18.294 --rc genhtml_legend=1 00:09:18.294 --rc geninfo_all_blocks=1 00:09:18.294 --rc geninfo_unexecuted_blocks=1 00:09:18.294 00:09:18.294 ' 00:09:18.294 07:05:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:18.294 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:18.294 --rc genhtml_branch_coverage=1 00:09:18.294 --rc genhtml_function_coverage=1 00:09:18.294 --rc genhtml_legend=1 00:09:18.294 --rc geninfo_all_blocks=1 00:09:18.294 --rc geninfo_unexecuted_blocks=1 00:09:18.294 00:09:18.294 ' 00:09:18.294 07:05:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:18.294 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:18.294 --rc genhtml_branch_coverage=1 00:09:18.294 --rc genhtml_function_coverage=1 00:09:18.294 --rc genhtml_legend=1 00:09:18.294 --rc geninfo_all_blocks=1 00:09:18.294 --rc geninfo_unexecuted_blocks=1 00:09:18.294 00:09:18.294 ' 00:09:18.294 07:05:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:18.294 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:18.294 --rc genhtml_branch_coverage=1 00:09:18.294 --rc genhtml_function_coverage=1 00:09:18.294 --rc genhtml_legend=1 00:09:18.294 --rc geninfo_all_blocks=1 00:09:18.294 --rc geninfo_unexecuted_blocks=1 00:09:18.294 00:09:18.294 ' 00:09:18.294 07:05:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:18.294 07:05:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:09:18.294 07:05:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:18.294 07:05:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:18.294 07:05:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:18.294 07:05:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:18.294 07:05:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:18.294 07:05:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:18.294 07:05:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:18.294 07:05:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:18.294 07:05:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:18.294 07:05:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:18.294 07:05:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:09:18.294 07:05:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:09:18.294 07:05:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:18.294 07:05:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:18.294 07:05:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:18.294 07:05:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- 
# NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:18.294 07:05:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:18.294 07:05:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:09:18.294 07:05:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:18.294 07:05:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:18.294 07:05:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:18.294 07:05:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:18.294 07:05:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:18.294 07:05:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:18.294 07:05:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:09:18.294 07:05:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:18.294 07:05:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:09:18.295 07:05:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:18.295 07:05:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:18.295 07:05:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:18.295 07:05:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:18.295 07:05:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:18.295 07:05:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:18.295 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:18.295 07:05:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:18.295 07:05:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:18.295 07:05:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:18.295 07:05:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:18.295 07:05:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # 
MALLOC_BLOCK_SIZE=512 00:09:18.295 07:05:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:09:18.295 07:05:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:18.295 07:05:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:18.295 07:05:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:18.295 07:05:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:18.295 07:05:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:18.295 07:05:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:18.295 07:05:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:18.295 07:05:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:18.295 07:05:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:18.295 07:05:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:18.295 07:05:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:09:18.295 07:05:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:24.870 07:05:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:24.870 07:05:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:09:24.870 07:05:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:24.870 07:05:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:24.870 07:05:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a 
pci_net_devs 00:09:24.870 07:05:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:24.870 07:05:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:24.870 07:05:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:09:24.870 07:05:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:24.870 07:05:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:09:24.870 07:05:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:09:24.870 07:05:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:09:24.870 07:05:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:09:24.870 07:05:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:09:24.870 07:05:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:09:24.870 07:05:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:24.870 07:05:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:24.870 07:05:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:24.870 07:05:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:24.870 07:05:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:24.870 07:05:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:24.870 07:05:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:24.870 07:05:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:24.870 07:05:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:24.870 07:05:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:24.870 07:05:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:24.870 07:05:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:24.870 07:05:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:24.870 07:05:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:24.870 07:05:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:24.870 07:05:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:24.870 07:05:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:24.870 07:05:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:24.870 07:05:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:24.870 07:05:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:24.870 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:24.870 07:05:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:24.870 07:05:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:24.870 07:05:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:24.870 07:05:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:24.870 07:05:28 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:24.870 07:05:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:24.870 07:05:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:24.870 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:24.870 07:05:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:24.870 07:05:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:24.870 07:05:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:24.870 07:05:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:24.870 07:05:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:24.870 07:05:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:24.870 07:05:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:24.870 07:05:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:24.870 07:05:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:24.870 07:05:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:24.870 07:05:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:24.870 07:05:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:24.870 07:05:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:24.870 07:05:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:24.870 07:05:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:24.870 07:05:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:24.870 Found net devices under 0000:86:00.0: cvl_0_0 00:09:24.870 07:05:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:24.870 07:05:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:24.870 07:05:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:24.870 07:05:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:24.870 07:05:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:24.870 07:05:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:24.870 07:05:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:24.870 07:05:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:24.870 07:05:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:24.870 Found net devices under 0000:86:00.1: cvl_0_1 00:09:24.870 07:05:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:24.870 07:05:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:24.870 07:05:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:09:24.870 07:05:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:24.870 07:05:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:24.870 07:05:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- 
# nvmf_tcp_init 00:09:24.870 07:05:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:24.870 07:05:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:24.870 07:05:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:24.870 07:05:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:24.870 07:05:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:24.870 07:05:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:24.870 07:05:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:24.870 07:05:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:24.870 07:05:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:24.870 07:05:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:24.870 07:05:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:24.870 07:05:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:24.870 07:05:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:24.870 07:05:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:24.870 07:05:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:24.870 07:05:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:24.870 07:05:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- 
# ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:24.870 07:05:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:24.870 07:05:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:24.870 07:05:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:24.870 07:05:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:24.870 07:05:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:24.870 07:05:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:24.870 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:24.870 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.425 ms 00:09:24.870 00:09:24.870 --- 10.0.0.2 ping statistics --- 00:09:24.870 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:24.870 rtt min/avg/max/mdev = 0.425/0.425/0.425/0.000 ms 00:09:24.870 07:05:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:24.870 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:24.870 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.214 ms 00:09:24.870 00:09:24.870 --- 10.0.0.1 ping statistics --- 00:09:24.870 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:24.870 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:09:24.870 07:05:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:24.870 07:05:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:09:24.870 07:05:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:24.870 07:05:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:24.870 07:05:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:24.870 07:05:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:24.870 07:05:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:24.870 07:05:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:24.870 07:05:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:24.870 07:05:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:09:24.870 07:05:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:24.870 07:05:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:24.870 07:05:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:24.870 07:05:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=1084551 00:09:24.870 07:05:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt 
-i 0 -e 0xFFFF -m 0x78 00:09:24.870 07:05:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 1084551 00:09:24.870 07:05:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@833 -- # '[' -z 1084551 ']' 00:09:24.871 07:05:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:24.871 07:05:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:24.871 07:05:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:24.871 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:24.871 07:05:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:24.871 07:05:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:24.871 [2024-11-20 07:05:28.831982] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 00:09:24.871 [2024-11-20 07:05:28.832031] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:24.871 [2024-11-20 07:05:28.911405] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:24.871 [2024-11-20 07:05:28.953784] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:24.871 [2024-11-20 07:05:28.953825] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:09:24.871 [2024-11-20 07:05:28.953832] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:24.871 [2024-11-20 07:05:28.953839] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:24.871 [2024-11-20 07:05:28.953844] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:24.871 [2024-11-20 07:05:28.955453] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:09:24.871 [2024-11-20 07:05:28.955484] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:09:24.871 [2024-11-20 07:05:28.955591] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:24.871 [2024-11-20 07:05:28.955592] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:09:24.871 07:05:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:24.871 07:05:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@866 -- # return 0 00:09:24.871 07:05:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:24.871 07:05:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:24.871 07:05:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:24.871 07:05:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:24.871 07:05:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:24.871 07:05:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.871 07:05:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:24.871 [2024-11-20 07:05:29.101911] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init 
*** 00:09:24.871 07:05:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.871 07:05:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:24.871 07:05:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.871 07:05:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:24.871 Malloc0 00:09:24.871 07:05:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.871 07:05:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:24.871 07:05:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.871 07:05:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:24.871 07:05:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.871 07:05:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:24.871 07:05:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.871 07:05:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:24.871 07:05:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.871 07:05:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:24.871 07:05:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.871 07:05:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:24.871 [2024-11-20 
07:05:29.164923] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:24.871 07:05:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.871 07:05:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:09:24.871 07:05:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:09:24.871 07:05:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:09:24.871 07:05:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:09:24.871 07:05:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:24.871 07:05:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:24.871 { 00:09:24.871 "params": { 00:09:24.871 "name": "Nvme$subsystem", 00:09:24.871 "trtype": "$TEST_TRANSPORT", 00:09:24.871 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:24.871 "adrfam": "ipv4", 00:09:24.871 "trsvcid": "$NVMF_PORT", 00:09:24.871 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:24.871 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:24.871 "hdgst": ${hdgst:-false}, 00:09:24.871 "ddgst": ${ddgst:-false} 00:09:24.871 }, 00:09:24.871 "method": "bdev_nvme_attach_controller" 00:09:24.871 } 00:09:24.871 EOF 00:09:24.871 )") 00:09:24.871 07:05:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:09:24.871 07:05:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
00:09:24.871 07:05:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:09:24.871 07:05:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:24.871 "params": { 00:09:24.871 "name": "Nvme1", 00:09:24.871 "trtype": "tcp", 00:09:24.871 "traddr": "10.0.0.2", 00:09:24.871 "adrfam": "ipv4", 00:09:24.871 "trsvcid": "4420", 00:09:24.871 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:24.871 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:24.871 "hdgst": false, 00:09:24.871 "ddgst": false 00:09:24.871 }, 00:09:24.871 "method": "bdev_nvme_attach_controller" 00:09:24.871 }' 00:09:24.871 [2024-11-20 07:05:29.215224] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 00:09:24.871 [2024-11-20 07:05:29.215265] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1084784 ] 00:09:24.871 [2024-11-20 07:05:29.292246] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:24.871 [2024-11-20 07:05:29.336404] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:24.871 [2024-11-20 07:05:29.336508] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:24.871 [2024-11-20 07:05:29.336509] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:25.131 I/O targets: 00:09:25.131 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:09:25.131 00:09:25.131 00:09:25.131 CUnit - A unit testing framework for C - Version 2.1-3 00:09:25.131 http://cunit.sourceforge.net/ 00:09:25.131 00:09:25.131 00:09:25.131 Suite: bdevio tests on: Nvme1n1 00:09:25.390 Test: blockdev write read block ...passed 00:09:25.390 Test: blockdev write zeroes read block ...passed 00:09:25.390 Test: blockdev write zeroes read no split ...passed 00:09:25.390 Test: blockdev write zeroes read split 
...passed 00:09:25.390 Test: blockdev write zeroes read split partial ...passed 00:09:25.390 Test: blockdev reset ...[2024-11-20 07:05:29.770297] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:09:25.390 [2024-11-20 07:05:29.770366] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b95340 (9): Bad file descriptor 00:09:25.390 [2024-11-20 07:05:29.782875] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:09:25.390 passed 00:09:25.390 Test: blockdev write read 8 blocks ...passed 00:09:25.390 Test: blockdev write read size > 128k ...passed 00:09:25.390 Test: blockdev write read invalid size ...passed 00:09:25.390 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:25.390 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:25.390 Test: blockdev write read max offset ...passed 00:09:25.390 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:25.390 Test: blockdev writev readv 8 blocks ...passed 00:09:25.390 Test: blockdev writev readv 30 x 1block ...passed 00:09:25.649 Test: blockdev writev readv block ...passed 00:09:25.649 Test: blockdev writev readv size > 128k ...passed 00:09:25.649 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:25.649 Test: blockdev comparev and writev ...[2024-11-20 07:05:29.952496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:25.649 [2024-11-20 07:05:29.952524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:09:25.649 [2024-11-20 07:05:29.952539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:25.649 [2024-11-20 
07:05:29.952547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:09:25.649 [2024-11-20 07:05:29.952790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:25.649 [2024-11-20 07:05:29.952800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:09:25.649 [2024-11-20 07:05:29.952812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:25.649 [2024-11-20 07:05:29.952819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:09:25.649 [2024-11-20 07:05:29.953051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:25.649 [2024-11-20 07:05:29.953061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:09:25.649 [2024-11-20 07:05:29.953074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:25.649 [2024-11-20 07:05:29.953081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:09:25.649 [2024-11-20 07:05:29.953314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:25.649 [2024-11-20 07:05:29.953323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:09:25.649 [2024-11-20 07:05:29.953335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x200 00:09:25.649 [2024-11-20 07:05:29.953342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:09:25.649 passed 00:09:25.649 Test: blockdev nvme passthru rw ...passed 00:09:25.649 Test: blockdev nvme passthru vendor specific ...[2024-11-20 07:05:30.035280] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:25.649 [2024-11-20 07:05:30.035298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:09:25.649 [2024-11-20 07:05:30.035412] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:25.649 [2024-11-20 07:05:30.035422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:09:25.649 [2024-11-20 07:05:30.035523] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:25.649 [2024-11-20 07:05:30.035533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:09:25.649 [2024-11-20 07:05:30.035636] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:25.649 [2024-11-20 07:05:30.035646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:09:25.649 passed 00:09:25.649 Test: blockdev nvme admin passthru ...passed 00:09:25.649 Test: blockdev copy ...passed 00:09:25.649 00:09:25.649 Run Summary: Type Total Ran Passed Failed Inactive 00:09:25.649 suites 1 1 n/a 0 0 00:09:25.649 tests 23 23 23 0 0 00:09:25.649 asserts 152 152 152 0 n/a 00:09:25.649 00:09:25.649 Elapsed time = 0.880 seconds 
00:09:25.909 07:05:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:25.909 07:05:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.909 07:05:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:25.909 07:05:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.909 07:05:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:09:25.909 07:05:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:09:25.909 07:05:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:25.909 07:05:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:09:25.909 07:05:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:25.909 07:05:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:09:25.909 07:05:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:25.909 07:05:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:25.909 rmmod nvme_tcp 00:09:25.909 rmmod nvme_fabrics 00:09:25.909 rmmod nvme_keyring 00:09:25.909 07:05:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:25.909 07:05:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:09:25.909 07:05:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:09:25.909 07:05:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 1084551 ']' 00:09:25.909 07:05:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 1084551 00:09:25.909 07:05:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@952 
-- # '[' -z 1084551 ']' 00:09:25.909 07:05:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # kill -0 1084551 00:09:25.909 07:05:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@957 -- # uname 00:09:25.909 07:05:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:25.909 07:05:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1084551 00:09:25.909 07:05:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # process_name=reactor_3 00:09:25.909 07:05:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@962 -- # '[' reactor_3 = sudo ']' 00:09:25.909 07:05:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1084551' 00:09:25.909 killing process with pid 1084551 00:09:25.909 07:05:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@971 -- # kill 1084551 00:09:25.909 07:05:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@976 -- # wait 1084551 00:09:26.169 07:05:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:26.169 07:05:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:26.169 07:05:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:26.169 07:05:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:09:26.169 07:05:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:09:26.169 07:05:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:26.169 07:05:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:09:26.169 07:05:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k 
]] 00:09:26.169 07:05:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:26.169 07:05:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:26.169 07:05:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:26.169 07:05:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:28.075 07:05:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:28.075 00:09:28.075 real 0m10.025s 00:09:28.075 user 0m10.254s 00:09:28.075 sys 0m5.025s 00:09:28.075 07:05:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:28.075 07:05:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:28.075 ************************************ 00:09:28.075 END TEST nvmf_bdevio 00:09:28.075 ************************************ 00:09:28.334 07:05:32 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:09:28.334 00:09:28.334 real 4m38.146s 00:09:28.334 user 10m27.927s 00:09:28.334 sys 1m37.789s 00:09:28.334 07:05:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:28.334 07:05:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:28.334 ************************************ 00:09:28.334 END TEST nvmf_target_core 00:09:28.334 ************************************ 00:09:28.335 07:05:32 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:09:28.335 07:05:32 nvmf_tcp -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:09:28.335 07:05:32 nvmf_tcp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:28.335 07:05:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set 
+x 00:09:28.335 ************************************ 00:09:28.335 START TEST nvmf_target_extra 00:09:28.335 ************************************ 00:09:28.335 07:05:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:09:28.335 * Looking for test storage... 00:09:28.335 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:09:28.335 07:05:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:28.335 07:05:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1691 -- # lcov --version 00:09:28.335 07:05:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:28.595 07:05:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:28.595 07:05:32 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:28.595 07:05:32 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:28.595 07:05:32 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:28.595 07:05:32 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:09:28.595 07:05:32 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:09:28.595 07:05:32 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:09:28.595 07:05:32 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:09:28.595 07:05:32 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:09:28.595 07:05:32 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:09:28.595 07:05:32 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:09:28.595 07:05:32 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:28.595 07:05:32 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 
00:09:28.595 07:05:32 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:09:28.595 07:05:32 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:28.595 07:05:32 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:28.595 07:05:32 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:09:28.595 07:05:32 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:09:28.595 07:05:32 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:28.595 07:05:32 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:09:28.595 07:05:32 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:09:28.595 07:05:32 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:09:28.595 07:05:32 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:09:28.595 07:05:32 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:28.595 07:05:32 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:09:28.595 07:05:32 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:09:28.595 07:05:32 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:28.595 07:05:32 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:28.595 07:05:32 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:09:28.595 07:05:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:28.595 07:05:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:28.595 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:28.595 --rc genhtml_branch_coverage=1 00:09:28.595 --rc genhtml_function_coverage=1 00:09:28.595 --rc genhtml_legend=1 00:09:28.595 --rc geninfo_all_blocks=1 
00:09:28.595 --rc geninfo_unexecuted_blocks=1 00:09:28.595 00:09:28.595 ' 00:09:28.595 07:05:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:28.595 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:28.595 --rc genhtml_branch_coverage=1 00:09:28.595 --rc genhtml_function_coverage=1 00:09:28.595 --rc genhtml_legend=1 00:09:28.595 --rc geninfo_all_blocks=1 00:09:28.595 --rc geninfo_unexecuted_blocks=1 00:09:28.595 00:09:28.595 ' 00:09:28.595 07:05:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:28.595 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:28.595 --rc genhtml_branch_coverage=1 00:09:28.595 --rc genhtml_function_coverage=1 00:09:28.595 --rc genhtml_legend=1 00:09:28.595 --rc geninfo_all_blocks=1 00:09:28.595 --rc geninfo_unexecuted_blocks=1 00:09:28.595 00:09:28.595 ' 00:09:28.595 07:05:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:28.595 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:28.595 --rc genhtml_branch_coverage=1 00:09:28.595 --rc genhtml_function_coverage=1 00:09:28.595 --rc genhtml_legend=1 00:09:28.595 --rc geninfo_all_blocks=1 00:09:28.595 --rc geninfo_unexecuted_blocks=1 00:09:28.595 00:09:28.595 ' 00:09:28.595 07:05:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:28.595 07:05:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:09:28.595 07:05:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:28.595 07:05:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:28.595 07:05:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:28.595 07:05:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:28.595 07:05:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 
-- # NVMF_IP_PREFIX=192.168.100 00:09:28.595 07:05:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:28.595 07:05:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:28.595 07:05:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:28.595 07:05:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:28.595 07:05:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:28.595 07:05:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:09:28.595 07:05:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:09:28.595 07:05:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:28.595 07:05:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:28.595 07:05:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:28.595 07:05:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:28.595 07:05:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:28.595 07:05:32 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:09:28.595 07:05:32 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:28.595 07:05:32 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:28.595 07:05:32 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:28.595 07:05:32 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:28.595 07:05:32 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:28.595 07:05:32 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:28.595 07:05:32 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:09:28.596 07:05:32 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:28.596 07:05:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:09:28.596 07:05:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:28.596 07:05:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:28.596 07:05:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:28.596 07:05:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:28.596 07:05:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:28.596 07:05:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:28.596 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:28.596 07:05:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:28.596 07:05:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:28.596 07:05:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:28.596 07:05:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:09:28.596 07:05:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:09:28.596 07:05:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:09:28.596 07:05:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:09:28.596 07:05:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:09:28.596 07:05:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:28.596 07:05:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:09:28.596 ************************************ 00:09:28.596 START TEST nvmf_example 00:09:28.596 ************************************ 00:09:28.596 07:05:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:09:28.596 * Looking for test storage... 00:09:28.596 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:28.596 07:05:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:28.596 07:05:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1691 -- # lcov --version 00:09:28.596 07:05:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:28.596 07:05:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:28.596 07:05:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:28.596 07:05:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:28.596 07:05:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:28.596 07:05:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:09:28.596 07:05:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:09:28.596 07:05:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:09:28.596 
07:05:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:09:28.596 07:05:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:09:28.596 07:05:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:09:28.596 07:05:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:09:28.596 07:05:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:28.596 07:05:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:09:28.596 07:05:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:09:28.596 07:05:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:28.596 07:05:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:28.596 07:05:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:09:28.596 07:05:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:09:28.596 07:05:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:28.596 07:05:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:09:28.856 07:05:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:09:28.856 07:05:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:09:28.856 07:05:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:09:28.856 07:05:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:28.856 07:05:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:09:28.856 07:05:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 
00:09:28.856 07:05:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:28.856 07:05:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:28.856 07:05:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:09:28.856 07:05:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:28.856 07:05:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:28.856 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:28.856 --rc genhtml_branch_coverage=1 00:09:28.856 --rc genhtml_function_coverage=1 00:09:28.856 --rc genhtml_legend=1 00:09:28.856 --rc geninfo_all_blocks=1 00:09:28.856 --rc geninfo_unexecuted_blocks=1 00:09:28.856 00:09:28.856 ' 00:09:28.856 07:05:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:28.856 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:28.856 --rc genhtml_branch_coverage=1 00:09:28.856 --rc genhtml_function_coverage=1 00:09:28.856 --rc genhtml_legend=1 00:09:28.856 --rc geninfo_all_blocks=1 00:09:28.856 --rc geninfo_unexecuted_blocks=1 00:09:28.856 00:09:28.856 ' 00:09:28.856 07:05:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:28.856 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:28.856 --rc genhtml_branch_coverage=1 00:09:28.856 --rc genhtml_function_coverage=1 00:09:28.856 --rc genhtml_legend=1 00:09:28.856 --rc geninfo_all_blocks=1 00:09:28.856 --rc geninfo_unexecuted_blocks=1 00:09:28.856 00:09:28.856 ' 00:09:28.856 07:05:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:28.856 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:28.856 --rc 
genhtml_branch_coverage=1 00:09:28.856 --rc genhtml_function_coverage=1 00:09:28.856 --rc genhtml_legend=1 00:09:28.856 --rc geninfo_all_blocks=1 00:09:28.856 --rc geninfo_unexecuted_blocks=1 00:09:28.856 00:09:28.856 ' 00:09:28.856 07:05:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:28.856 07:05:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:09:28.856 07:05:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:28.856 07:05:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:28.856 07:05:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:28.856 07:05:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:28.856 07:05:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:28.856 07:05:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:28.856 07:05:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:28.856 07:05:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:28.856 07:05:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:28.856 07:05:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:28.856 07:05:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:09:28.856 07:05:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:09:28.856 07:05:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:28.856 07:05:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:28.856 07:05:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:28.856 07:05:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:28.856 07:05:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:28.856 07:05:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:09:28.856 07:05:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:28.856 07:05:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:28.856 07:05:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:28.856 07:05:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:28.856 07:05:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:28.856 07:05:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:28.856 07:05:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:09:28.856 07:05:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:28.856 07:05:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:09:28.856 07:05:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:28.856 07:05:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:28.856 07:05:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:28.856 07:05:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:28.856 07:05:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:28.856 07:05:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:28.856 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:28.856 07:05:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:28.856 07:05:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:28.856 07:05:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:28.856 07:05:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:09:28.856 07:05:33 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:09:28.856 07:05:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:09:28.856 07:05:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:09:28.856 07:05:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:09:28.856 07:05:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:09:28.856 07:05:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:09:28.856 07:05:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:09:28.856 07:05:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:28.856 07:05:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:28.856 07:05:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:09:28.856 07:05:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:28.856 07:05:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:28.856 07:05:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:28.856 07:05:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:28.856 07:05:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:28.856 07:05:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:28.856 07:05:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:28.856 
07:05:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:28.856 07:05:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:28.856 07:05:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:28.856 07:05:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:09:28.856 07:05:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:35.434 07:05:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:35.434 07:05:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:09:35.434 07:05:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:35.434 07:05:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:35.434 07:05:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:35.434 07:05:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:35.434 07:05:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:35.434 07:05:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:09:35.434 07:05:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:35.434 07:05:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:09:35.434 07:05:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:09:35.434 07:05:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:09:35.434 07:05:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:09:35.434 07:05:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@322 -- # mlx=() 00:09:35.434 07:05:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:09:35.434 07:05:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:35.434 07:05:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:35.434 07:05:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:35.434 07:05:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:35.434 07:05:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:35.434 07:05:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:35.434 07:05:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:35.434 07:05:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:35.434 07:05:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:35.434 07:05:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:35.434 07:05:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:35.434 07:05:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:35.434 07:05:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:35.434 07:05:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:35.434 07:05:38 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:35.434 07:05:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:35.434 07:05:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:35.434 07:05:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:35.434 07:05:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:35.434 07:05:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:35.434 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:35.434 07:05:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:35.434 07:05:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:35.434 07:05:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:35.434 07:05:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:35.434 07:05:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:35.435 07:05:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:35.435 07:05:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:35.435 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:35.435 07:05:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:35.435 07:05:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:35.435 07:05:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:35.435 07:05:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # 
[[ 0x159b == \0\x\1\0\1\9 ]] 00:09:35.435 07:05:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:35.435 07:05:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:35.435 07:05:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:35.435 07:05:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:35.435 07:05:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:35.435 07:05:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:35.435 07:05:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:35.435 07:05:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:35.435 07:05:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:35.435 07:05:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:35.435 07:05:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:35.435 07:05:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:35.435 Found net devices under 0000:86:00.0: cvl_0_0 00:09:35.435 07:05:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:35.435 07:05:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:35.435 07:05:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:35.435 07:05:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:35.435 07:05:38 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:35.435 07:05:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:35.435 07:05:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:35.435 07:05:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:35.435 07:05:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:35.435 Found net devices under 0000:86:00.1: cvl_0_1 00:09:35.435 07:05:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:35.435 07:05:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:35.435 07:05:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # is_hw=yes 00:09:35.435 07:05:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:35.435 07:05:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:35.435 07:05:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:35.435 07:05:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:35.435 07:05:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:35.435 07:05:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:35.435 07:05:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:35.435 07:05:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:35.435 07:05:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:35.435 
07:05:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:35.435 07:05:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:35.435 07:05:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:35.435 07:05:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:35.435 07:05:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:35.435 07:05:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:35.435 07:05:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:35.435 07:05:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:35.435 07:05:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:35.435 07:05:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:35.435 07:05:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:35.435 07:05:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:35.435 07:05:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:35.435 07:05:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:35.435 07:05:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:35.435 07:05:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # 
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:35.435 07:05:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:35.435 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:35.435 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.517 ms 00:09:35.435 00:09:35.435 --- 10.0.0.2 ping statistics --- 00:09:35.435 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:35.435 rtt min/avg/max/mdev = 0.517/0.517/0.517/0.000 ms 00:09:35.435 07:05:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:35.435 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:35.435 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.231 ms 00:09:35.435 00:09:35.435 --- 10.0.0.1 ping statistics --- 00:09:35.435 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:35.435 rtt min/avg/max/mdev = 0.231/0.231/0.231/0.000 ms 00:09:35.435 07:05:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:35.435 07:05:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # return 0 00:09:35.435 07:05:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:35.435 07:05:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:35.435 07:05:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:35.435 07:05:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:35.435 07:05:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:35.435 07:05:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:35.435 07:05:39 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:35.435 07:05:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:09:35.435 07:05:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:09:35.435 07:05:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:35.435 07:05:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:35.435 07:05:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:09:35.435 07:05:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:09:35.436 07:05:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=1088611 00:09:35.436 07:05:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:35.436 07:05:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:09:35.436 07:05:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 1088611 00:09:35.436 07:05:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@833 -- # '[' -z 1088611 ']' 00:09:35.436 07:05:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:35.436 07:05:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:35.436 07:05:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:09:35.436 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:35.436 07:05:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:35.436 07:05:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:35.695 07:05:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:35.695 07:05:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@866 -- # return 0 00:09:35.695 07:05:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:09:35.695 07:05:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:35.695 07:05:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:35.695 07:05:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:35.695 07:05:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.695 07:05:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:35.695 07:05:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.695 07:05:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:09:35.695 07:05:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.695 07:05:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:35.695 07:05:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.695 07:05:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:09:35.695 
07:05:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:35.695 07:05:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.695 07:05:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:35.695 07:05:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.695 07:05:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:09:35.695 07:05:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:35.695 07:05:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.695 07:05:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:35.695 07:05:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.695 07:05:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:35.695 07:05:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.695 07:05:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:35.695 07:05:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.695 07:05:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:09:35.695 07:05:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 
4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:09:47.903 Initializing NVMe Controllers 00:09:47.903 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:47.903 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:09:47.903 Initialization complete. Launching workers. 00:09:47.903 ======================================================== 00:09:47.903 Latency(us) 00:09:47.903 Device Information : IOPS MiB/s Average min max 00:09:47.903 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 17923.34 70.01 3571.45 510.92 15438.79 00:09:47.903 ======================================================== 00:09:47.903 Total : 17923.34 70.01 3571.45 510.92 15438.79 00:09:47.903 00:09:47.903 07:05:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:09:47.903 07:05:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:09:47.903 07:05:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:47.903 07:05:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:09:47.903 07:05:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:47.903 07:05:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:09:47.903 07:05:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:47.903 07:05:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:47.903 rmmod nvme_tcp 00:09:47.903 rmmod nvme_fabrics 00:09:47.903 rmmod nvme_keyring 00:09:47.903 07:05:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:47.903 07:05:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 
00:09:47.904 07:05:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:09:47.904 07:05:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 1088611 ']' 00:09:47.904 07:05:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 1088611 00:09:47.904 07:05:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@952 -- # '[' -z 1088611 ']' 00:09:47.904 07:05:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # kill -0 1088611 00:09:47.904 07:05:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@957 -- # uname 00:09:47.904 07:05:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:47.904 07:05:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1088611 00:09:47.904 07:05:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # process_name=nvmf 00:09:47.904 07:05:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@962 -- # '[' nvmf = sudo ']' 00:09:47.904 07:05:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1088611' 00:09:47.904 killing process with pid 1088611 00:09:47.904 07:05:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@971 -- # kill 1088611 00:09:47.904 07:05:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@976 -- # wait 1088611 00:09:47.904 nvmf threads initialize successfully 00:09:47.904 bdev subsystem init successfully 00:09:47.904 created a nvmf target service 00:09:47.904 create targets's poll groups done 00:09:47.904 all subsystems of target started 00:09:47.904 nvmf target is running 00:09:47.904 all subsystems of target stopped 00:09:47.904 destroy targets's poll groups done 00:09:47.904 destroyed the nvmf target service 00:09:47.904 bdev subsystem 
finish successfully 00:09:47.904 nvmf threads destroy successfully 00:09:47.904 07:05:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:47.904 07:05:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:47.904 07:05:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:47.904 07:05:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:09:47.904 07:05:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-save 00:09:47.904 07:05:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:47.904 07:05:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-restore 00:09:47.904 07:05:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:47.904 07:05:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:47.904 07:05:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:47.904 07:05:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:47.904 07:05:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:48.472 07:05:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:48.472 07:05:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:09:48.472 07:05:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:48.472 07:05:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:48.472 00:09:48.472 real 0m19.834s 00:09:48.472 user 0m45.983s 00:09:48.472 sys 0m6.168s 00:09:48.472 
07:05:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:48.472 07:05:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:48.472 ************************************ 00:09:48.472 END TEST nvmf_example 00:09:48.472 ************************************ 00:09:48.472 07:05:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:09:48.472 07:05:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:09:48.472 07:05:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:48.472 07:05:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:09:48.472 ************************************ 00:09:48.472 START TEST nvmf_filesystem 00:09:48.472 ************************************ 00:09:48.472 07:05:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:09:48.472 * Looking for test storage... 
00:09:48.472 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:48.472 07:05:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:48.472 07:05:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lcov --version 00:09:48.472 07:05:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:48.734 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:48.734 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:48.734 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:48.734 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:48.734 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:09:48.734 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:09:48.734 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:09:48.734 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:09:48.734 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:09:48.734 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:09:48.734 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:09:48.734 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:48.734 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:09:48.734 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:09:48.734 
07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:48.734 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:48.734 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:09:48.734 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:09:48.734 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:48.734 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:09:48.734 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:09:48.734 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:09:48.734 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:09:48.734 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:48.734 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:09:48.734 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:09:48.734 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:48.734 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:48.734 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:09:48.734 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:48.734 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:48.734 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:09:48.734 --rc genhtml_branch_coverage=1 00:09:48.734 --rc genhtml_function_coverage=1 00:09:48.734 --rc genhtml_legend=1 00:09:48.734 --rc geninfo_all_blocks=1 00:09:48.734 --rc geninfo_unexecuted_blocks=1 00:09:48.734 00:09:48.734 ' 00:09:48.734 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:48.734 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:48.734 --rc genhtml_branch_coverage=1 00:09:48.734 --rc genhtml_function_coverage=1 00:09:48.734 --rc genhtml_legend=1 00:09:48.734 --rc geninfo_all_blocks=1 00:09:48.734 --rc geninfo_unexecuted_blocks=1 00:09:48.734 00:09:48.734 ' 00:09:48.735 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:48.735 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:48.735 --rc genhtml_branch_coverage=1 00:09:48.735 --rc genhtml_function_coverage=1 00:09:48.735 --rc genhtml_legend=1 00:09:48.735 --rc geninfo_all_blocks=1 00:09:48.735 --rc geninfo_unexecuted_blocks=1 00:09:48.735 00:09:48.735 ' 00:09:48.735 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:48.735 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:48.735 --rc genhtml_branch_coverage=1 00:09:48.735 --rc genhtml_function_coverage=1 00:09:48.735 --rc genhtml_legend=1 00:09:48.735 --rc geninfo_all_blocks=1 00:09:48.735 --rc geninfo_unexecuted_blocks=1 00:09:48.735 00:09:48.735 ' 00:09:48.735 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:09:48.735 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:09:48.735 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:09:48.735 07:05:53 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:09:48.735 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:09:48.735 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:09:48.735 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:09:48.735 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:09:48.735 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:09:48.735 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:09:48.735 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:09:48.735 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:09:48.735 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:09:48.735 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:09:48.735 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:09:48.735 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:09:48.735 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:09:48.735 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:09:48.735 07:05:53 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y
00:09:48.735 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y
00:09:48.735 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n
00:09:48.735 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n
00:09:48.735 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n
00:09:48.735 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y
00:09:48.735 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR=
00:09:48.735 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1
00:09:48.735 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n
00:09:48.735 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y
00:09:48.735 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk
00:09:48.735 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n
00:09:48.735 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y
00:09:48.735 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n
00:09:48.735 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n
00:09:48.735 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH=
00:09:48.735 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y
00:09:48.735 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y
00:09:48.735 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y
00:09:48.735 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n
00:09:48.735 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y
00:09:48.735 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y
00:09:48.735 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH=
00:09:48.735 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n
00:09:48.735 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n
00:09:48.735 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR=
00:09:48.735 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB=
00:09:48.735 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n
00:09:48.735 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y
00:09:48.735 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:09:48.735 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n
00:09:48.735 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n
00:09:48.735 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_VHOST=y
00:09:48.735 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n
00:09:48.735 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR=
00:09:48.735 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR=
00:09:48.735 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n
00:09:48.735 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y
00:09:48.735 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y
00:09:48.735 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n
00:09:48.735 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y
00:09:48.735 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y
00:09:48.735 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y
00:09:48.735 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n
00:09:48.735 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio
00:09:48.735 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH=
00:09:48.735 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n
00:09:48.735 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y
00:09:48.735 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native
00:09:48.735 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y
00:09:48.735 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n
00:09:48.735 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y
00:09:48.736 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n
00:09:48.736 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y
00:09:48.736 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n
00:09:48.736 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR=
00:09:48.736 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n
00:09:48.736 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y
00:09:48.736 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y
00:09:48.736 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR=
00:09:48.736 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs
00:09:48.736 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y
00:09:48.736 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y
00:09:48.736 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y
00:09:48.736 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH=
00:09:48.736 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n
00:09:48.736 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n
00:09:48.736 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n
00:09:48.736 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y
00:09:48.736 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_RAID5F=n
00:09:48.736 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y
00:09:48.736 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y
00:09:48.736 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n
00:09:48.736 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128
00:09:48.736 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n
00:09:48.736 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR=
00:09:48.736 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y
00:09:48.736 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n
00:09:48.736 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX=
00:09:48.736 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y
00:09:48.736 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n
00:09:48.736 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh
00:09:48.736 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh
00:09:48.736 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common
00:09:48.736 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common
00:09:48.736 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:09:48.736 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin
00:09:48.736 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app
00:09:48.736 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples
00:09:48.736 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz")
00:09:48.736 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt")
00:09:48.736 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt")
00:09:48.736 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost")
00:09:48.736 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd")
00:09:48.736 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt")
00:09:48.736 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]]
00:09:48.736 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H
00:09:48.736 #define SPDK_CONFIG_H
00:09:48.736 #define SPDK_CONFIG_AIO_FSDEV 1
00:09:48.736 #define SPDK_CONFIG_APPS 1
00:09:48.736 #define SPDK_CONFIG_ARCH native
00:09:48.736 #undef SPDK_CONFIG_ASAN
00:09:48.736 #undef SPDK_CONFIG_AVAHI
00:09:48.736 #undef SPDK_CONFIG_CET
00:09:48.736 #define SPDK_CONFIG_COPY_FILE_RANGE 1
00:09:48.736 #define SPDK_CONFIG_COVERAGE 1
00:09:48.736 #define SPDK_CONFIG_CROSS_PREFIX
00:09:48.736 #undef SPDK_CONFIG_CRYPTO
00:09:48.736 #undef SPDK_CONFIG_CRYPTO_MLX5
00:09:48.736 #undef SPDK_CONFIG_CUSTOMOCF
00:09:48.736 #undef SPDK_CONFIG_DAOS
00:09:48.736 #define SPDK_CONFIG_DAOS_DIR
00:09:48.736 #define SPDK_CONFIG_DEBUG 1
00:09:48.736 #undef SPDK_CONFIG_DPDK_COMPRESSDEV
00:09:48.736 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:09:48.736 #define SPDK_CONFIG_DPDK_INC_DIR
00:09:48.736 #define SPDK_CONFIG_DPDK_LIB_DIR
00:09:48.736 #undef SPDK_CONFIG_DPDK_PKG_CONFIG
00:09:48.736 #undef SPDK_CONFIG_DPDK_UADK
00:09:48.736 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk
00:09:48.736 #define SPDK_CONFIG_EXAMPLES 1
00:09:48.736 #undef SPDK_CONFIG_FC
00:09:48.736 #define SPDK_CONFIG_FC_PATH
00:09:48.736 #define SPDK_CONFIG_FIO_PLUGIN 1
00:09:48.736 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio
00:09:48.736 #define SPDK_CONFIG_FSDEV 1
00:09:48.736 #undef SPDK_CONFIG_FUSE
00:09:48.736 #undef SPDK_CONFIG_FUZZER
00:09:48.736 #define SPDK_CONFIG_FUZZER_LIB
00:09:48.736 #undef SPDK_CONFIG_GOLANG
00:09:48.736 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1
00:09:48.736 #define SPDK_CONFIG_HAVE_EVP_MAC 1
00:09:48.736 #define SPDK_CONFIG_HAVE_EXECINFO_H 1
00:09:48.736 #define SPDK_CONFIG_HAVE_KEYUTILS 1
00:09:48.736 #undef SPDK_CONFIG_HAVE_LIBARCHIVE
00:09:48.736 #undef SPDK_CONFIG_HAVE_LIBBSD
00:09:48.736 #undef SPDK_CONFIG_HAVE_LZ4
00:09:48.736 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1
00:09:48.736 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC
00:09:48.736 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1
00:09:48.736 #define SPDK_CONFIG_IDXD 1
00:09:48.736 #define SPDK_CONFIG_IDXD_KERNEL 1
00:09:48.736 #undef SPDK_CONFIG_IPSEC_MB
00:09:48.736 #define SPDK_CONFIG_IPSEC_MB_DIR
00:09:48.736 #define SPDK_CONFIG_ISAL 1
00:09:48.736 #define SPDK_CONFIG_ISAL_CRYPTO 1
00:09:48.736 #define SPDK_CONFIG_ISCSI_INITIATOR 1
00:09:48.736 #define SPDK_CONFIG_LIBDIR
00:09:48.736 #undef SPDK_CONFIG_LTO
00:09:48.736 #define SPDK_CONFIG_MAX_LCORES 128
00:09:48.736 #define SPDK_CONFIG_MAX_NUMA_NODES 1
00:09:48.737 #define SPDK_CONFIG_NVME_CUSE 1
00:09:48.737 #undef SPDK_CONFIG_OCF
00:09:48.737 #define SPDK_CONFIG_OCF_PATH
00:09:48.737 #define SPDK_CONFIG_OPENSSL_PATH
00:09:48.737 #undef SPDK_CONFIG_PGO_CAPTURE
00:09:48.737 #define SPDK_CONFIG_PGO_DIR
00:09:48.737 #undef SPDK_CONFIG_PGO_USE
00:09:48.737 #define SPDK_CONFIG_PREFIX /usr/local
00:09:48.737 #undef SPDK_CONFIG_RAID5F
00:09:48.737 #undef SPDK_CONFIG_RBD
00:09:48.737 #define SPDK_CONFIG_RDMA 1
00:09:48.737 #define SPDK_CONFIG_RDMA_PROV verbs
00:09:48.737 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1
00:09:48.737 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1
00:09:48.737 #define SPDK_CONFIG_RDMA_SET_TOS 1
00:09:48.737 #define SPDK_CONFIG_SHARED 1
00:09:48.737 #undef SPDK_CONFIG_SMA
00:09:48.737 #define SPDK_CONFIG_TESTS 1
00:09:48.737 #undef SPDK_CONFIG_TSAN
00:09:48.737 #define SPDK_CONFIG_UBLK 1
00:09:48.737 #define SPDK_CONFIG_UBSAN 1
00:09:48.737 #undef SPDK_CONFIG_UNIT_TESTS
00:09:48.737 #undef SPDK_CONFIG_URING
00:09:48.737 #define SPDK_CONFIG_URING_PATH
00:09:48.737 #undef SPDK_CONFIG_URING_ZNS
00:09:48.737 #undef SPDK_CONFIG_USDT
00:09:48.737 #undef SPDK_CONFIG_VBDEV_COMPRESS
00:09:48.737 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5
00:09:48.737 #define SPDK_CONFIG_VFIO_USER 1
00:09:48.737 #define SPDK_CONFIG_VFIO_USER_DIR
00:09:48.737 #define SPDK_CONFIG_VHOST 1
00:09:48.737 #define SPDK_CONFIG_VIRTIO 1
00:09:48.737 #undef SPDK_CONFIG_VTUNE
00:09:48.737 #define SPDK_CONFIG_VTUNE_DIR
00:09:48.737 #define SPDK_CONFIG_WERROR 1
00:09:48.737 #define SPDK_CONFIG_WPDK_DIR
00:09:48.737 #undef SPDK_CONFIG_XNVME
00:09:48.737 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]]
00:09:48.737 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS ))
00:09:48.737 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:09:48.737 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob
00:09:48.737 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:09:48.737 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:09:48.737 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:09:48.737 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:09:48.737 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:09:48.737 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:09:48.737 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH
00:09:48.737 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:09:48.737 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common
00:09:48.737 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common
00:09:48.737 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm
00:09:48.737 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm
00:09:48.737 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../
00:09:48.737 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:09:48.737 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A
00:09:48.737 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name
00:09:48.737 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power
00:09:48.737 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s
00:09:48.737 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux
00:09:48.737 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=()
00:09:48.737 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO
00:09:48.737 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1
00:09:48.737 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0
00:09:48.737 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0
00:09:48.737 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0
00:09:48.737 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]=
00:09:48.737 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E'
00:09:48.737 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat)
00:09:48.737 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]]
00:09:48.737 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]]
00:09:48.737 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]]
00:09:48.737 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]]
00:09:48.737 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp)
00:09:48.737 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm)
00:09:48.737 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]]
00:09:48.737 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0
00:09:48.737 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY
00:09:48.737 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0
00:09:48.737 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS
00:09:48.737 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0
00:09:48.737 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND
00:09:48.737 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1
00:09:48.737 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST
00:09:48.737 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0
00:09:48.737 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST
00:09:48.737 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # :
00:09:48.737 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD
00:09:48.737 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0
00:09:48.737 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD
00:09:48.737 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0
00:09:48.738 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL
00:09:48.738 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0
00:09:48.738 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI
00:09:48.738 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0
00:09:48.738 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR
00:09:48.738 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0
00:09:48.738 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME
00:09:48.738 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0
00:09:48.738 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR
00:09:48.738 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0
00:09:48.738 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP
00:09:48.738 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1
00:09:48.738 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI
00:09:48.738 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0
00:09:48.738 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE
00:09:48.738 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0
00:09:48.738 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP
00:09:48.738 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1
00:09:48.738 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF
00:09:48.738 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1
00:09:48.738 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER
00:09:48.738 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0
00:09:48.738 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU
00:09:48.738 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0
00:09:48.738 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER
00:09:48.738 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0
00:09:48.738 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT
00:09:48.738 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp
00:09:48.738 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT
00:09:48.738 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0
00:09:48.738 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD
00:09:48.738 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0
00:09:48.738 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST
00:09:48.738 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0
00:09:48.738 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV
00:09:48.738 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0
00:09:48.738 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID
00:09:48.738 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0
00:09:48.738 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT
00:09:48.738 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0
00:09:48.738 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS
00:09:48.738 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0
00:09:48.738 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT
00:09:48.738 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0
00:09:48.738 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL
00:09:48.738 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0
00:09:48.738 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS
00:09:48.738 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0
00:09:48.738 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN
00:09:48.738 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1
00:09:48.738 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN
00:09:48.738 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # :
00:09:48.738 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK
00:09:48.738 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0
00:09:48.738 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT
00:09:48.738 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0
00:09:48.738 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO
00:09:48.738 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0
00:09:48.738 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL
00:09:48.738 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0
00:09:48.738 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF
00:09:48.738 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0
00:09:48.738 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD
00:09:48.738 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 0
00:09:48.738 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL
00:09:48.738 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # :
00:09:48.738 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK
00:09:48.738 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true
00:09:48.738 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X
00:09:48.738 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0
00:09:48.738 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING
00:09:48.738 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0
00:09:48.738 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT
00:09:48.738 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0
00:09:48.738 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO
00:09:48.738 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0
00:09:48.738 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER
00:09:48.738 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0
00:09:48.738 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD
00:09:48.738 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810
00:09:48.738 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS
00:09:48.738 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0
00:09:48.738 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA
00:09:48.738 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0
00:09:48.738 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS
00:09:48.738 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0
00:09:48.738 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME
00:09:48.738 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0
00:09:48.738 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL
00:09:48.738 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0
00:09:48.738 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA
00:09:48.738 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0
00:09:48.739 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA
00:09:48.739 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # :
00:09:48.739 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET
00:09:48.739 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0
00:09:48.739 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS
00:09:48.739 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0
00:09:48.739 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT
00:09:48.739 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0
00:09:48.739 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP
00:09:48.739 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib
00:09:48.739 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib
00:09:48.739 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib
00:09:48.739 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib
00:09:48.739 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib
00:09:48.739 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib
00:09:48.739 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib
00:09:48.739 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- #
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:48.739 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:09:48.739 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:09:48.739 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # export 
PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:09:48.739 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:09:48.739 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # export PYTHONDONTWRITEBYTECODE=1 00:09:48.739 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # PYTHONDONTWRITEBYTECODE=1 00:09:48.739 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@197 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:09:48.739 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@197 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:09:48.739 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@198 -- 
# export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:09:48.739 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@198 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:09:48.739 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@202 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:09:48.739 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@203 -- # rm -rf /var/tmp/asan_suppression_file 00:09:48.739 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # cat 00:09:48.739 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # echo leak:libfuse3.so 00:09:48.739 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:09:48.739 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:09:48.739 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:09:48.739 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:09:48.739 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # '[' -z /var/spdk/dependencies ']' 00:09:48.739 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@249 -- # export DEPENDENCY_DIR 00:09:48.739 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:09:48.739 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # 
SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:09:48.739 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:09:48.739 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:09:48.739 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@257 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:09:48.739 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@257 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:09:48.739 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:09:48.739 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:09:48.739 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:09:48.739 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:09:48.739 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:09:48.739 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:09:48.739 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # _LCOV_MAIN=0 00:09:48.739 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@266 -- # _LCOV_LLVM=1 00:09:48.739 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV= 00:09:48.739 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # [[ '' == *clang* ]] 00:09:48.739 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # [[ 0 -eq 1 ]] 00:09:48.739 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:09:48.739 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # _lcov_opt[_LCOV_MAIN]= 00:09:48.739 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # lcov_opt= 00:09:48.739 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@276 -- # '[' 0 -eq 0 ']' 00:09:48.740 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@277 -- # export valgrind= 00:09:48.740 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@277 -- # valgrind= 00:09:48.740 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@283 -- # uname -s 00:09:48.740 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@283 -- # '[' Linux = Linux ']' 00:09:48.740 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@284 -- # HUGEMEM=4096 00:09:48.740 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # export CLEAR_HUGE=yes 00:09:48.740 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # CLEAR_HUGE=yes 00:09:48.740 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # MAKE=make 00:09:48.740 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@288 -- # MAKEFLAGS=-j96 00:09:48.740 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@304 -- # export HUGEMEM=4096 00:09:48.740 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@304 -- # HUGEMEM=4096 00:09:48.740 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # NO_HUGE=() 00:09:48.740 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@307 -- # TEST_MODE= 00:09:48.740 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # for i in "$@" 00:09:48.740 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # case "$i" in 00:09:48.740 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@314 -- # TEST_TRANSPORT=tcp 00:09:48.740 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # [[ -z 1090913 ]] 00:09:48.740 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # kill -0 1090913 00:09:48.740 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1676 -- # set_test_storage 2147483648 00:09:48.740 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@339 -- # [[ -v testdir ]] 00:09:48.740 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # local requested_size=2147483648 00:09:48.740 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@342 -- # local mount target_dir 00:09:48.740 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local -A mounts fss sizes avails uses 00:09:48.740 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@345 -- # local source fs size avail mount use 00:09:48.740 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@347 -- # local storage_fallback storage_candidates 00:09:48.740 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # mktemp -udt spdk.XXXXXX 00:09:48.740 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # storage_fallback=/tmp/spdk.vDtZJb 00:09:48.740 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@354 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:09:48.740 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # [[ -n '' ]] 00:09:48.740 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # [[ -n '' ]] 00:09:48.740 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@366 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.vDtZJb/tests/target /tmp/spdk.vDtZJb 00:09:48.740 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@369 -- # requested_size=2214592512 00:09:48.740 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:09:48.740 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # df -T 00:09:48.740 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # grep -v Filesystem 00:09:48.740 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_devtmpfs 00:09:48.740 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=devtmpfs 00:09:48.740 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=67108864 00:09:48.740 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # 
sizes["$mount"]=67108864 00:09:48.740 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=0 00:09:48.740 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:09:48.740 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=/dev/pmem0 00:09:48.740 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=ext2 00:09:48.740 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=4096 00:09:48.740 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=5284429824 00:09:48.740 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=5284425728 00:09:48.740 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:09:48.740 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_root 00:09:48.740 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=overlay 00:09:48.740 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=189196386304 00:09:48.740 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=195963961344 00:09:48.740 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=6767575040 00:09:48.740 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:09:48.740 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 
00:09:48.740 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:09:48.740 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=97971949568 00:09:48.740 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=97981980672 00:09:48.740 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=10031104 00:09:48.740 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:09:48.740 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:09:48.740 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:09:48.740 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=39169748992 00:09:48.740 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=39192793088 00:09:48.740 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=23044096 00:09:48.740 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:09:48.740 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:09:48.740 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:09:48.740 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=97981505536 00:09:48.741 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=97981980672 00:09:48.741 07:05:53 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=475136 00:09:48.741 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:09:48.741 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:09:48.741 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:09:48.741 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=19596382208 00:09:48.741 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=19596394496 00:09:48.741 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=12288 00:09:48.741 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:09:48.741 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@377 -- # printf '* Looking for test storage...\n' 00:09:48.741 * Looking for test storage... 
00:09:48.741 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # local target_space new_size 00:09:48.741 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@380 -- # for target_dir in "${storage_candidates[@]}" 00:09:48.741 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:48.741 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # awk '$1 !~ /Filesystem/{print $6}' 00:09:48.741 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # mount=/ 00:09:48.741 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # target_space=189196386304 00:09:48.741 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@386 -- # (( target_space == 0 || target_space < requested_size )) 00:09:48.741 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@389 -- # (( target_space >= requested_size )) 00:09:48.741 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ overlay == tmpfs ]] 00:09:48.741 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ overlay == ramfs ]] 00:09:48.741 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ / == / ]] 00:09:48.741 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@392 -- # new_size=8982167552 00:09:48.741 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # (( new_size * 100 / sizes[/] > 95 )) 00:09:48.741 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@398 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:48.741 07:05:53 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@398 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:48.741 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@399 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:48.741 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:48.741 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # return 0 00:09:48.741 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set -o errtrace 00:09:48.741 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1679 -- # shopt -s extdebug 00:09:48.741 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:09:48.741 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:09:48.741 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1683 -- # true 00:09:48.741 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1685 -- # xtrace_fd 00:09:48.741 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:09:48.741 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:09:48.741 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:09:48.741 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:09:48.741 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:09:48.741 07:05:53 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:09:48.741 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:09:48.741 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:09:48.741 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:48.741 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lcov --version 00:09:48.741 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:49.000 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:49.001 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:49.001 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:49.001 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:49.001 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:09:49.001 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:09:49.001 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:09:49.001 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:09:49.001 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:09:49.001 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:09:49.001 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:09:49.001 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # 
local lt=0 gt=0 eq=0 v 00:09:49.001 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:09:49.001 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:09:49.001 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:49.001 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:49.001 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:09:49.001 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:09:49.001 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:49.001 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:09:49.001 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:09:49.001 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:09:49.001 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:09:49.001 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:49.001 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:09:49.001 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:09:49.001 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:49.001 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:49.001 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:09:49.001 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # 
lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:49.001 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:49.001 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:49.001 --rc genhtml_branch_coverage=1 00:09:49.001 --rc genhtml_function_coverage=1 00:09:49.001 --rc genhtml_legend=1 00:09:49.001 --rc geninfo_all_blocks=1 00:09:49.001 --rc geninfo_unexecuted_blocks=1 00:09:49.001 00:09:49.001 ' 00:09:49.001 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:49.001 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:49.001 --rc genhtml_branch_coverage=1 00:09:49.001 --rc genhtml_function_coverage=1 00:09:49.001 --rc genhtml_legend=1 00:09:49.001 --rc geninfo_all_blocks=1 00:09:49.001 --rc geninfo_unexecuted_blocks=1 00:09:49.001 00:09:49.001 ' 00:09:49.001 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:49.001 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:49.001 --rc genhtml_branch_coverage=1 00:09:49.001 --rc genhtml_function_coverage=1 00:09:49.001 --rc genhtml_legend=1 00:09:49.001 --rc geninfo_all_blocks=1 00:09:49.001 --rc geninfo_unexecuted_blocks=1 00:09:49.001 00:09:49.001 ' 00:09:49.001 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:49.001 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:49.001 --rc genhtml_branch_coverage=1 00:09:49.001 --rc genhtml_function_coverage=1 00:09:49.001 --rc genhtml_legend=1 00:09:49.001 --rc geninfo_all_blocks=1 00:09:49.001 --rc geninfo_unexecuted_blocks=1 00:09:49.001 00:09:49.001 ' 00:09:49.001 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:49.001 07:05:53 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:09:49.001 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:49.001 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:49.001 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:49.001 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:49.001 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:49.001 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:49.001 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:49.001 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:49.001 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:49.001 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:49.001 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:09:49.001 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:09:49.001 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:49.001 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:49.001 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:49.001 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:49.001 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:49.001 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:09:49.001 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:49.001 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:49.001 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:49.001 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:49.001 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:49.001 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:49.001 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:09:49.001 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:49.001 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:09:49.001 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:49.001 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:49.001 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:49.001 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:49.001 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:49.001 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:49.001 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:49.001 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:49.001 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:49.001 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:49.001 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # 
MALLOC_BDEV_SIZE=512 00:09:49.001 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:09:49.002 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:09:49.002 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:49.002 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:49.002 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:49.002 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:49.002 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:49.002 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:49.002 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:49.002 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:49.002 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:49.002 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:49.002 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:09:49.002 07:05:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:09:55.578 07:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:55.578 07:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:09:55.578 07:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a 
pci_devs 00:09:55.578 07:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:55.578 07:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:55.578 07:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:55.578 07:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:55.578 07:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:09:55.578 07:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:55.578 07:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:09:55.578 07:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:09:55.578 07:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:09:55.578 07:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:09:55.578 07:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:09:55.578 07:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:09:55.578 07:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:55.578 07:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:55.578 07:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:55.578 07:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:55.578 07:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:55.578 07:05:59 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:55.578 07:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:55.578 07:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:55.578 07:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:55.578 07:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:55.578 07:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:55.578 07:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:55.578 07:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:55.578 07:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:55.578 07:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:55.578 07:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:55.578 07:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:55.578 07:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:55.578 07:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:55.578 07:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:55.578 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:55.578 07:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # 
[[ ice == unknown ]] 00:09:55.578 07:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:55.578 07:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:55.578 07:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:55.578 07:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:55.578 07:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:55.578 07:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:55.578 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:55.578 07:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:55.578 07:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:55.578 07:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:55.578 07:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:55.578 07:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:55.578 07:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:55.578 07:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:55.578 07:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:55.578 07:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:55.578 07:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:55.578 07:05:59 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:55.578 07:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:55.578 07:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:55.578 07:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:55.578 07:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:55.578 07:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:55.578 Found net devices under 0000:86:00.0: cvl_0_0 00:09:55.578 07:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:55.578 07:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:55.578 07:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:55.578 07:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:55.578 07:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:55.578 07:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:55.578 07:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:55.578 07:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:55.578 07:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:55.578 Found net devices under 0000:86:00.1: cvl_0_1 00:09:55.578 07:05:59 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:55.578 07:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:55.578 07:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # is_hw=yes 00:09:55.578 07:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:55.579 07:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:55.579 07:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:55.579 07:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:55.579 07:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:55.579 07:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:55.579 07:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:55.579 07:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:55.579 07:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:55.579 07:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:55.579 07:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:55.579 07:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:55.579 07:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:55.579 07:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:09:55.579 07:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:55.579 07:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:55.579 07:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:55.579 07:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:55.579 07:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:55.579 07:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:55.579 07:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:55.579 07:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:55.579 07:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:55.579 07:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:55.579 07:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:55.579 07:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:55.579 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:55.579 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.361 ms 00:09:55.579 00:09:55.579 --- 10.0.0.2 ping statistics --- 00:09:55.579 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:55.579 rtt min/avg/max/mdev = 0.361/0.361/0.361/0.000 ms 00:09:55.579 07:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:55.579 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:55.579 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.213 ms 00:09:55.579 00:09:55.579 --- 10.0.0.1 ping statistics --- 00:09:55.579 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:55.579 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:09:55.579 07:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:55.579 07:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # return 0 00:09:55.579 07:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:55.579 07:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:55.579 07:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:55.579 07:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:55.579 07:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:55.579 07:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:55.579 07:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:55.579 07:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:09:55.579 07:05:59 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:09:55.579 07:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:55.579 07:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:09:55.579 ************************************ 00:09:55.579 START TEST nvmf_filesystem_no_in_capsule 00:09:55.579 ************************************ 00:09:55.579 07:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1127 -- # nvmf_filesystem_part 0 00:09:55.579 07:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:09:55.579 07:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:09:55.579 07:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:55.579 07:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:55.579 07:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:55.579 07:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=1094060 00:09:55.579 07:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 1094060 00:09:55.579 07:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:55.579 07:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@833 -- # '[' -z 1094060 ']' 00:09:55.579 07:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:55.579 07:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:55.579 07:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:55.579 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:55.579 07:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:55.579 07:05:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:55.579 [2024-11-20 07:05:59.453814] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 00:09:55.579 [2024-11-20 07:05:59.453856] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:55.579 [2024-11-20 07:05:59.533372] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:55.579 [2024-11-20 07:05:59.576246] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:55.579 [2024-11-20 07:05:59.576283] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:09:55.579 [2024-11-20 07:05:59.576290] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:55.579 [2024-11-20 07:05:59.576297] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:55.579 [2024-11-20 07:05:59.576302] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:55.579 [2024-11-20 07:05:59.577890] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:55.579 [2024-11-20 07:05:59.578003] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:55.579 [2024-11-20 07:05:59.578004] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:55.579 [2024-11-20 07:05:59.577911] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:55.838 07:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:55.838 07:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@866 -- # return 0 00:09:55.838 07:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:55.838 07:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:55.838 07:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:55.839 07:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:55.839 07:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:09:55.839 07:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:09:55.839 07:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.839 07:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:55.839 [2024-11-20 07:06:00.322993] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:55.839 07:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.839 07:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:09:55.839 07:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.839 07:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:56.098 Malloc1 00:09:56.098 07:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.098 07:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:56.098 07:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.098 07:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:56.098 07:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.098 07:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:56.098 07:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.098 07:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:56.098 07:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.098 07:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:56.098 07:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.098 07:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:56.098 [2024-11-20 07:06:00.492750] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:56.098 07:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.098 07:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:09:56.098 07:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bdev_name=Malloc1 00:09:56.098 07:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local bdev_info 00:09:56.098 07:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bs 00:09:56.098 07:06:00 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local nb 00:09:56.098 07:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:09:56.098 07:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.098 07:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:56.098 07:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.098 07:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:09:56.098 { 00:09:56.098 "name": "Malloc1", 00:09:56.098 "aliases": [ 00:09:56.098 "c8234cd8-7984-4e7d-bc1c-999c74131ebb" 00:09:56.098 ], 00:09:56.098 "product_name": "Malloc disk", 00:09:56.098 "block_size": 512, 00:09:56.098 "num_blocks": 1048576, 00:09:56.098 "uuid": "c8234cd8-7984-4e7d-bc1c-999c74131ebb", 00:09:56.098 "assigned_rate_limits": { 00:09:56.098 "rw_ios_per_sec": 0, 00:09:56.098 "rw_mbytes_per_sec": 0, 00:09:56.098 "r_mbytes_per_sec": 0, 00:09:56.098 "w_mbytes_per_sec": 0 00:09:56.098 }, 00:09:56.098 "claimed": true, 00:09:56.098 "claim_type": "exclusive_write", 00:09:56.098 "zoned": false, 00:09:56.098 "supported_io_types": { 00:09:56.098 "read": true, 00:09:56.098 "write": true, 00:09:56.098 "unmap": true, 00:09:56.098 "flush": true, 00:09:56.098 "reset": true, 00:09:56.098 "nvme_admin": false, 00:09:56.098 "nvme_io": false, 00:09:56.098 "nvme_io_md": false, 00:09:56.098 "write_zeroes": true, 00:09:56.098 "zcopy": true, 00:09:56.098 "get_zone_info": false, 00:09:56.098 "zone_management": false, 00:09:56.098 "zone_append": false, 00:09:56.098 "compare": false, 00:09:56.098 "compare_and_write": 
false, 00:09:56.098 "abort": true, 00:09:56.098 "seek_hole": false, 00:09:56.098 "seek_data": false, 00:09:56.098 "copy": true, 00:09:56.098 "nvme_iov_md": false 00:09:56.098 }, 00:09:56.098 "memory_domains": [ 00:09:56.098 { 00:09:56.098 "dma_device_id": "system", 00:09:56.098 "dma_device_type": 1 00:09:56.098 }, 00:09:56.098 { 00:09:56.098 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:56.098 "dma_device_type": 2 00:09:56.098 } 00:09:56.098 ], 00:09:56.098 "driver_specific": {} 00:09:56.098 } 00:09:56.098 ]' 00:09:56.098 07:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:09:56.098 07:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # bs=512 00:09:56.098 07:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:09:56.098 07:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # nb=1048576 00:09:56.098 07:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1389 -- # bdev_size=512 00:09:56.098 07:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1390 -- # echo 512 00:09:56.099 07:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:09:56.099 07:06:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:57.475 07:06:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- 
# waitforserial SPDKISFASTANDAWESOME 00:09:57.475 07:06:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # local i=0 00:09:57.475 07:06:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:09:57.475 07:06:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:09:57.475 07:06:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # sleep 2 00:09:59.380 07:06:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:09:59.380 07:06:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:09:59.380 07:06:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:09:59.380 07:06:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:09:59.380 07:06:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:09:59.380 07:06:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # return 0 00:09:59.380 07:06:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:09:59.380 07:06:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:09:59.380 07:06:03 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:09:59.380 07:06:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:09:59.380 07:06:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:09:59.380 07:06:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:09:59.380 07:06:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:09:59.380 07:06:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:09:59.380 07:06:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:09:59.380 07:06:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:09:59.380 07:06:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:09:59.380 07:06:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:09:59.640 07:06:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:10:00.577 07:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:10:00.577 07:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:10:00.577 07:06:05 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:10:00.577 07:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:00.577 07:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:00.836 ************************************ 00:10:00.836 START TEST filesystem_ext4 00:10:00.836 ************************************ 00:10:00.836 07:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create ext4 nvme0n1 00:10:00.836 07:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:10:00.836 07:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:00.836 07:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:10:00.836 07:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # local fstype=ext4 00:10:00.837 07:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:10:00.837 07:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local i=0 00:10:00.837 07:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local force 00:10:00.837 07:06:05 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # '[' ext4 = ext4 ']' 00:10:00.837 07:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@934 -- # force=-F 00:10:00.837 07:06:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@939 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:10:00.837 mke2fs 1.47.0 (5-Feb-2023) 00:10:00.837 Discarding device blocks: 0/522240 done 00:10:00.837 Creating filesystem with 522240 1k blocks and 130560 inodes 00:10:00.837 Filesystem UUID: fe3002ee-22fd-4091-8717-3d4b02c4309a 00:10:00.837 Superblock backups stored on blocks: 00:10:00.837 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:10:00.837 00:10:00.837 Allocating group tables: 0/64 done 00:10:00.837 Writing inode tables: 0/64 done 00:10:01.096 Creating journal (8192 blocks): done 00:10:03.300 Writing superblocks and filesystem accounting information: 0/64 2/64 done 00:10:03.300 00:10:03.300 07:06:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@947 -- # return 0 00:10:03.300 07:06:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:09.866 07:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:09.866 07:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:10:09.866 07:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:09.866 07:06:13 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:10:09.866 07:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:10:09.866 07:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:09.866 07:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 1094060 00:10:09.866 07:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:09.866 07:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:09.866 07:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:09.866 07:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:09.866 00:10:09.866 real 0m8.321s 00:10:09.866 user 0m0.029s 00:10:09.866 sys 0m0.072s 00:10:09.866 07:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:09.866 07:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:10:09.866 ************************************ 00:10:09.866 END TEST filesystem_ext4 00:10:09.866 ************************************ 00:10:09.866 07:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:10:09.866 
07:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:10:09.866 07:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:09.866 07:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:09.866 ************************************ 00:10:09.866 START TEST filesystem_btrfs 00:10:09.866 ************************************ 00:10:09.866 07:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create btrfs nvme0n1 00:10:09.866 07:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:10:09.867 07:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:09.867 07:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:10:09.867 07:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@928 -- # local fstype=btrfs 00:10:09.867 07:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:10:09.867 07:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local i=0 00:10:09.867 07:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local force 00:10:09.867 07:06:13 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # '[' btrfs = ext4 ']' 00:10:09.867 07:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@936 -- # force=-f 00:10:09.867 07:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@939 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:10:09.867 btrfs-progs v6.8.1 00:10:09.867 See https://btrfs.readthedocs.io for more information. 00:10:09.867 00:10:09.867 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:10:09.867 NOTE: several default settings have changed in version 5.15, please make sure 00:10:09.867 this does not affect your deployments: 00:10:09.867 - DUP for metadata (-m dup) 00:10:09.867 - enabled no-holes (-O no-holes) 00:10:09.867 - enabled free-space-tree (-R free-space-tree) 00:10:09.867 00:10:09.867 Label: (null) 00:10:09.867 UUID: 8c761b0f-4eb1-4752-af01-7058585c81a8 00:10:09.867 Node size: 16384 00:10:09.867 Sector size: 4096 (CPU page size: 4096) 00:10:09.867 Filesystem size: 510.00MiB 00:10:09.867 Block group profiles: 00:10:09.867 Data: single 8.00MiB 00:10:09.867 Metadata: DUP 32.00MiB 00:10:09.867 System: DUP 8.00MiB 00:10:09.867 SSD detected: yes 00:10:09.867 Zoned device: no 00:10:09.867 Features: extref, skinny-metadata, no-holes, free-space-tree 00:10:09.867 Checksum: crc32c 00:10:09.867 Number of devices: 1 00:10:09.867 Devices: 00:10:09.867 ID SIZE PATH 00:10:09.867 1 510.00MiB /dev/nvme0n1p1 00:10:09.867 00:10:09.867 07:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@947 -- # return 0 00:10:09.867 07:06:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:10.126 07:06:14 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:10.126 07:06:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:10:10.126 07:06:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:10.126 07:06:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:10:10.126 07:06:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:10:10.126 07:06:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:10.126 07:06:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 1094060 00:10:10.126 07:06:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:10.126 07:06:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:10.385 07:06:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:10.385 07:06:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:10.385 00:10:10.385 real 0m1.164s 00:10:10.385 user 0m0.028s 00:10:10.385 sys 0m0.114s 00:10:10.385 07:06:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:10.385 
07:06:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:10:10.385 ************************************ 00:10:10.385 END TEST filesystem_btrfs 00:10:10.385 ************************************ 00:10:10.385 07:06:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:10:10.385 07:06:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:10:10.385 07:06:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:10.385 07:06:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:10.385 ************************************ 00:10:10.385 START TEST filesystem_xfs 00:10:10.385 ************************************ 00:10:10.385 07:06:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create xfs nvme0n1 00:10:10.385 07:06:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:10:10.385 07:06:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:10.385 07:06:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:10:10.385 07:06:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@928 -- # local fstype=xfs 00:10:10.385 07:06:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- 
common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:10:10.385 07:06:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local i=0 00:10:10.385 07:06:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # local force 00:10:10.386 07:06:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # '[' xfs = ext4 ']' 00:10:10.386 07:06:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@936 -- # force=-f 00:10:10.386 07:06:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@939 -- # mkfs.xfs -f /dev/nvme0n1p1 00:10:10.953 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:10:10.953 = sectsz=512 attr=2, projid32bit=1 00:10:10.953 = crc=1 finobt=1, sparse=1, rmapbt=0 00:10:10.953 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:10:10.953 data = bsize=4096 blocks=130560, imaxpct=25 00:10:10.953 = sunit=0 swidth=0 blks 00:10:10.953 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:10:10.953 log =internal log bsize=4096 blocks=16384, version=2 00:10:10.953 = sectsz=512 sunit=0 blks, lazy-count=1 00:10:10.953 realtime =none extsz=4096 blocks=0, rtextents=0 00:10:12.328 Discarding blocks...Done. 
00:10:12.328 07:06:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@947 -- # return 0 00:10:12.328 07:06:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:14.259 07:06:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:14.259 07:06:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:10:14.259 07:06:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:14.259 07:06:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:10:14.259 07:06:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:10:14.259 07:06:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:14.259 07:06:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 1094060 00:10:14.259 07:06:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:14.259 07:06:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:14.259 07:06:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:14.259 07:06:18 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:14.259 00:10:14.259 real 0m4.027s 00:10:14.259 user 0m0.016s 00:10:14.259 sys 0m0.083s 00:10:14.259 07:06:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:14.259 07:06:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:10:14.259 ************************************ 00:10:14.259 END TEST filesystem_xfs 00:10:14.260 ************************************ 00:10:14.569 07:06:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:10:14.569 07:06:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:10:14.569 07:06:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:14.911 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:14.911 07:06:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:14.911 07:06:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1221 -- # local i=0 00:10:14.911 07:06:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:10:14.911 07:06:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:14.911 07:06:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:10:14.911 07:06:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:14.911 07:06:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1233 -- # return 0 00:10:14.911 07:06:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:14.911 07:06:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.911 07:06:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:14.911 07:06:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.911 07:06:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:10:14.911 07:06:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 1094060 00:10:14.911 07:06:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # '[' -z 1094060 ']' 00:10:14.911 07:06:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # kill -0 1094060 00:10:14.911 07:06:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@957 -- # uname 00:10:14.911 07:06:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:14.911 07:06:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1094060 00:10:14.911 07:06:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:14.911 07:06:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:14.911 07:06:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1094060' 00:10:14.911 killing process with pid 1094060 00:10:14.911 07:06:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@971 -- # kill 1094060 00:10:14.911 07:06:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@976 -- # wait 1094060 00:10:15.169 07:06:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:10:15.169 00:10:15.169 real 0m20.236s 00:10:15.169 user 1m19.843s 00:10:15.169 sys 0m1.480s 00:10:15.169 07:06:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:15.169 07:06:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:15.169 ************************************ 00:10:15.169 END TEST nvmf_filesystem_no_in_capsule 00:10:15.169 ************************************ 00:10:15.170 07:06:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:10:15.170 07:06:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:10:15.170 07:06:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:15.170 07:06:19 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:15.170 ************************************ 00:10:15.170 START TEST nvmf_filesystem_in_capsule 00:10:15.170 ************************************ 00:10:15.170 07:06:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1127 -- # nvmf_filesystem_part 4096 00:10:15.170 07:06:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:10:15.170 07:06:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:10:15.170 07:06:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:15.170 07:06:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:15.170 07:06:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:15.170 07:06:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=1098160 00:10:15.170 07:06:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 1098160 00:10:15.170 07:06:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:15.170 07:06:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@833 -- # '[' -z 1098160 ']' 00:10:15.170 07:06:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:15.170 07:06:19 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:15.170 07:06:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:15.170 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:15.170 07:06:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:15.170 07:06:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:15.428 [2024-11-20 07:06:19.768581] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 00:10:15.428 [2024-11-20 07:06:19.768625] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:15.428 [2024-11-20 07:06:19.849932] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:15.428 [2024-11-20 07:06:19.892630] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:15.428 [2024-11-20 07:06:19.892665] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:15.428 [2024-11-20 07:06:19.892672] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:15.428 [2024-11-20 07:06:19.892677] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:15.428 [2024-11-20 07:06:19.892682] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:15.428 [2024-11-20 07:06:19.894292] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:15.428 [2024-11-20 07:06:19.894398] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:15.428 [2024-11-20 07:06:19.894506] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:15.428 [2024-11-20 07:06:19.894507] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:16.364 07:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:16.364 07:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@866 -- # return 0 00:10:16.364 07:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:16.364 07:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:16.364 07:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:16.364 07:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:16.364 07:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:10:16.364 07:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:10:16.364 07:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.364 07:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:16.364 [2024-11-20 07:06:20.646255] tcp.c: 
738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:16.364 07:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.364 07:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:10:16.364 07:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.364 07:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:16.364 Malloc1 00:10:16.364 07:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.364 07:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:16.364 07:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.364 07:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:16.364 07:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.364 07:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:16.364 07:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.364 07:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:16.364 07:06:20 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.364 07:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:16.364 07:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.364 07:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:16.364 [2024-11-20 07:06:20.810126] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:16.364 07:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.364 07:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:10:16.364 07:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bdev_name=Malloc1 00:10:16.364 07:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local bdev_info 00:10:16.364 07:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bs 00:10:16.364 07:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local nb 00:10:16.364 07:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:10:16.364 07:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.364 07:06:20 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:10:16.364 07:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:16.364 07:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # bdev_info='[
00:10:16.364 {
00:10:16.364 "name": "Malloc1",
00:10:16.364 "aliases": [
00:10:16.364 "b4e46dd0-2efb-479a-b1ee-cd4ee330500a"
00:10:16.364 ],
00:10:16.364 "product_name": "Malloc disk",
00:10:16.364 "block_size": 512,
00:10:16.364 "num_blocks": 1048576,
00:10:16.364 "uuid": "b4e46dd0-2efb-479a-b1ee-cd4ee330500a",
00:10:16.364 "assigned_rate_limits": {
00:10:16.364 "rw_ios_per_sec": 0,
00:10:16.364 "rw_mbytes_per_sec": 0,
00:10:16.364 "r_mbytes_per_sec": 0,
00:10:16.364 "w_mbytes_per_sec": 0
00:10:16.364 },
00:10:16.364 "claimed": true,
00:10:16.364 "claim_type": "exclusive_write",
00:10:16.364 "zoned": false,
00:10:16.364 "supported_io_types": {
00:10:16.364 "read": true,
00:10:16.364 "write": true,
00:10:16.364 "unmap": true,
00:10:16.364 "flush": true,
00:10:16.364 "reset": true,
00:10:16.364 "nvme_admin": false,
00:10:16.364 "nvme_io": false,
00:10:16.364 "nvme_io_md": false,
00:10:16.364 "write_zeroes": true,
00:10:16.364 "zcopy": true,
00:10:16.364 "get_zone_info": false,
00:10:16.364 "zone_management": false,
00:10:16.364 "zone_append": false,
00:10:16.364 "compare": false,
00:10:16.364 "compare_and_write": false,
00:10:16.364 "abort": true,
00:10:16.364 "seek_hole": false,
00:10:16.364 "seek_data": false,
00:10:16.364 "copy": true,
00:10:16.364 "nvme_iov_md": false
00:10:16.364 },
00:10:16.364 "memory_domains": [
00:10:16.364 {
00:10:16.364 "dma_device_id": "system",
00:10:16.364 "dma_device_type": 1
00:10:16.364 },
00:10:16.364 {
00:10:16.364 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:16.364 "dma_device_type": 2
00:10:16.364 }
00:10:16.364 ],
00:10:16.364 "driver_specific": {}
00:10:16.364 }
00:10:16.364 ]'
00:10:16.364 07:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # jq '.[] .block_size'
00:10:16.364 07:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # bs=512
00:10:16.364 07:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks'
00:10:16.364 07:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # nb=1048576
00:10:16.364 07:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1389 -- # bdev_size=512
00:10:16.364 07:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1390 -- # echo 512
00:10:16.364 07:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912
00:10:16.364 07:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:10:17.741 07:06:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME
00:10:17.741 07:06:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # local i=0
00:10:17.741 07:06:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0
00:10:17.741 07:06:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # [[ -n
'' ]] 00:10:17.741 07:06:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # sleep 2 00:10:19.643 07:06:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:10:19.643 07:06:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:10:19.643 07:06:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:10:19.643 07:06:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:10:19.643 07:06:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:10:19.643 07:06:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # return 0 00:10:19.643 07:06:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:10:19.643 07:06:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:10:19.643 07:06:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:10:19.643 07:06:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:10:19.643 07:06:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:10:19.643 07:06:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:10:19.643 07:06:24 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:10:19.643 07:06:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:10:19.643 07:06:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:10:19.643 07:06:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:10:19.643 07:06:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:10:19.901 07:06:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:10:20.466 07:06:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:10:21.398 07:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:10:21.398 07:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:10:21.398 07:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:10:21.398 07:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:21.398 07:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:21.399 ************************************ 00:10:21.399 START TEST filesystem_in_capsule_ext4 00:10:21.399 ************************************ 00:10:21.399 07:06:25 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create ext4 nvme0n1 00:10:21.399 07:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:10:21.399 07:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:21.399 07:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:10:21.399 07:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # local fstype=ext4 00:10:21.399 07:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:10:21.399 07:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local i=0 00:10:21.399 07:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local force 00:10:21.399 07:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # '[' ext4 = ext4 ']' 00:10:21.399 07:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@934 -- # force=-F 00:10:21.399 07:06:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@939 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:10:21.399 mke2fs 1.47.0 (5-Feb-2023) 00:10:21.656 Discarding device blocks: 
0/522240 done
00:10:21.656 Creating filesystem with 522240 1k blocks and 130560 inodes
00:10:21.656 Filesystem UUID: 50d08069-abe7-4efe-b793-6c77dfe27fe9
00:10:21.656 Superblock backups stored on blocks:
00:10:21.656 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409
00:10:21.656
00:10:21.656 Allocating group tables: 0/64 done
00:10:21.656 Writing inode tables: 0/64 done
00:10:22.219 Creating journal (8192 blocks): done
00:10:24.235 Writing superblocks and filesystem accounting information: 0/64 4/64 done
00:10:24.235
00:10:24.235 07:06:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@947 -- # return 0
00:10:24.235 07:06:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device
00:10:30.798 07:06:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa
00:10:30.798 07:06:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync
00:10:30.798 07:06:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa
00:10:30.798 07:06:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync
00:10:30.798 07:06:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0
00:10:30.798 07:06:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device
00:10:30.798 07:06:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 --
target/filesystem.sh@37 -- # kill -0 1098160 00:10:30.798 07:06:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:30.798 07:06:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:30.798 07:06:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:30.798 07:06:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:30.798 00:10:30.798 real 0m8.400s 00:10:30.798 user 0m0.035s 00:10:30.798 sys 0m0.064s 00:10:30.798 07:06:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:30.798 07:06:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:10:30.798 ************************************ 00:10:30.798 END TEST filesystem_in_capsule_ext4 00:10:30.798 ************************************ 00:10:30.798 07:06:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:10:30.798 07:06:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:10:30.798 07:06:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:30.798 07:06:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:30.798 ************************************ 00:10:30.798 START 
TEST filesystem_in_capsule_btrfs 00:10:30.798 ************************************ 00:10:30.798 07:06:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create btrfs nvme0n1 00:10:30.798 07:06:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:10:30.798 07:06:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:30.798 07:06:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:10:30.798 07:06:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@928 -- # local fstype=btrfs 00:10:30.798 07:06:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:10:30.798 07:06:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local i=0 00:10:30.798 07:06:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local force 00:10:30.798 07:06:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # '[' btrfs = ext4 ']' 00:10:30.798 07:06:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@936 -- # force=-f 00:10:30.798 07:06:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- 
common/autotest_common.sh@939 -- # mkfs.btrfs -f /dev/nvme0n1p1
00:10:30.798 btrfs-progs v6.8.1
00:10:30.798 See https://btrfs.readthedocs.io for more information.
00:10:30.798
00:10:30.798 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ...
00:10:30.798 NOTE: several default settings have changed in version 5.15, please make sure
00:10:30.798 this does not affect your deployments:
00:10:30.798 - DUP for metadata (-m dup)
00:10:30.798 - enabled no-holes (-O no-holes)
00:10:30.798 - enabled free-space-tree (-R free-space-tree)
00:10:30.798
00:10:30.798 Label: (null)
00:10:30.798 UUID: b57c5c0e-b032-4bee-9c3a-39c9864ab807
00:10:30.798 Node size: 16384
00:10:30.798 Sector size: 4096 (CPU page size: 4096)
00:10:30.798 Filesystem size: 510.00MiB
00:10:30.798 Block group profiles:
00:10:30.798 Data: single 8.00MiB
00:10:30.798 Metadata: DUP 32.00MiB
00:10:30.798 System: DUP 8.00MiB
00:10:30.798 SSD detected: yes
00:10:30.798 Zoned device: no
00:10:30.798 Features: extref, skinny-metadata, no-holes, free-space-tree
00:10:30.798 Checksum: crc32c
00:10:30.798 Number of devices: 1
00:10:30.798 Devices:
00:10:30.798 ID SIZE PATH
00:10:30.798 1 510.00MiB /dev/nvme0n1p1
00:10:30.798
00:10:30.798 07:06:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@947 -- # return 0
00:10:30.798 07:06:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device
00:10:31.056 07:06:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa
00:10:31.056 07:06:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync
00:10:31.056 07:06:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs
-- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:31.056 07:06:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:10:31.056 07:06:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:10:31.056 07:06:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:31.056 07:06:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 1098160 00:10:31.056 07:06:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:31.056 07:06:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:31.056 07:06:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:31.056 07:06:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:31.056 00:10:31.056 real 0m1.191s 00:10:31.056 user 0m0.025s 00:10:31.056 sys 0m0.115s 00:10:31.056 07:06:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:31.056 07:06:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:10:31.056 ************************************ 00:10:31.056 END TEST filesystem_in_capsule_btrfs 00:10:31.056 ************************************ 00:10:31.056 07:06:35 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:10:31.056 07:06:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:10:31.057 07:06:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:31.057 07:06:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:31.316 ************************************ 00:10:31.316 START TEST filesystem_in_capsule_xfs 00:10:31.316 ************************************ 00:10:31.316 07:06:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create xfs nvme0n1 00:10:31.316 07:06:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:10:31.316 07:06:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:31.316 07:06:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:10:31.316 07:06:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@928 -- # local fstype=xfs 00:10:31.316 07:06:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:10:31.316 07:06:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local i=0 00:10:31.316 
07:06:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local force
00:10:31.316 07:06:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # '[' xfs = ext4 ']'
00:10:31.316 07:06:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@936 -- # force=-f
00:10:31.316 07:06:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@939 -- # mkfs.xfs -f /dev/nvme0n1p1
00:10:31.316 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks
00:10:31.316 = sectsz=512 attr=2, projid32bit=1
00:10:31.316 = crc=1 finobt=1, sparse=1, rmapbt=0
00:10:31.316 = reflink=1 bigtime=1 inobtcount=1 nrext64=0
00:10:31.316 data = bsize=4096 blocks=130560, imaxpct=25
00:10:31.316 = sunit=0 swidth=0 blks
00:10:31.316 naming =version 2 bsize=4096 ascii-ci=0, ftype=1
00:10:31.316 log =internal log bsize=4096 blocks=16384, version=2
00:10:31.316 = sectsz=512 sunit=0 blks, lazy-count=1
00:10:31.316 realtime =none extsz=4096 blocks=0, rtextents=0
00:10:32.251 Discarding blocks...Done.
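The make_filesystem helper exercised above picks `-F` for ext4 but `-f` for btrfs and xfs, and each subtest then mounts the partition and runs a touch/sync/rm/sync/umount cycle. A hedged sketch of that flow (`fs_smoke_test` and `fs_io_check` are illustrative names, not the script's own; the full cycle needs root and a real block device, paths mirror the log):

```shell
#!/usr/bin/env bash
# Sketch of the per-filesystem smoke test repeated for ext4, btrfs, and xfs.
# Helper names are illustrative; paths mirror the log (/mnt/device, *p1).

fs_io_check() {
    # The write/flush/delete cycle the log runs on each mounted filesystem.
    local mnt=$1
    touch "$mnt/aaa" && sync || return 1
    rm "$mnt/aaa" && sync
}

fs_smoke_test() {
    # mkfs.ext4 spells "force" as -F; btrfs and xfs use -f, as in the log.
    local fstype=$1 dev=$2 mnt=${3:-/mnt/device}
    local force=-f
    [[ $fstype == ext4 ]] && force=-F
    mkfs."$fstype" "$force" "$dev" || return 1
    mkdir -p "$mnt"
    mount "$dev" "$mnt" || return 1      # needs root
    fs_io_check "$mnt"
    local rc=$?
    umount "$mnt"
    return $rc
}

# e.g. fs_smoke_test xfs /dev/nvme0n1p1   # root + a real NVMe namespace
```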
00:10:32.251 07:06:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@947 -- # return 0 00:10:32.251 07:06:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:34.153 07:06:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:34.153 07:06:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:10:34.153 07:06:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:34.153 07:06:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:10:34.153 07:06:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:10:34.153 07:06:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:34.153 07:06:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 1098160 00:10:34.153 07:06:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:34.153 07:06:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:34.153 07:06:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 
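The waitforserial loop used after `nvme connect` earlier in the log (up to 16 tries with two-second sleeps, counting lsblk rows that carry the subsystem serial) can be sketched as follows; the retry bound and serial mirror the log, but the function itself is a simplified assumption, not SPDK's exact helper:

```shell
#!/usr/bin/env bash
# Sketch of waitforserial: poll lsblk until enough block devices with the
# given serial (e.g. SPDKISFASTANDAWESOME) appear, or give up.
waitforserial() {
    local serial=$1 want=${2:-1} retries=${3:-15} delay=${4:-2} i=0 n
    while (( i++ <= retries )); do
        n=$(lsblk -l -o NAME,SERIAL 2>/dev/null | grep -c -- "$serial")
        (( n >= want )) && return 0   # all expected namespaces are visible
        sleep "$delay"
    done
    return 1
}

# Usage as in the log, after `nvme connect ... -n nqn.2016-06.io.spdk:cnode1`:
#   waitforserial SPDKISFASTANDAWESOME && echo "namespace attached"
```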
00:10:34.153 07:06:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:34.153 00:10:34.153 real 0m2.926s 00:10:34.153 user 0m0.026s 00:10:34.153 sys 0m0.071s 00:10:34.153 07:06:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:34.153 07:06:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:10:34.153 ************************************ 00:10:34.153 END TEST filesystem_in_capsule_xfs 00:10:34.153 ************************************ 00:10:34.153 07:06:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:10:34.153 07:06:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:10:34.153 07:06:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:34.412 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:34.412 07:06:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:34.412 07:06:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1221 -- # local i=0 00:10:34.412 07:06:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:10:34.412 07:06:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:34.412 07:06:38 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:10:34.412 07:06:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:34.412 07:06:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1233 -- # return 0 00:10:34.412 07:06:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:34.412 07:06:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.412 07:06:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:34.412 07:06:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.412 07:06:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:10:34.412 07:06:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 1098160 00:10:34.412 07:06:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # '[' -z 1098160 ']' 00:10:34.412 07:06:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # kill -0 1098160 00:10:34.412 07:06:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@957 -- # uname 00:10:34.412 07:06:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:34.412 07:06:38 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1098160 00:10:34.412 07:06:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:34.412 07:06:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:34.412 07:06:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1098160' 00:10:34.412 killing process with pid 1098160 00:10:34.412 07:06:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@971 -- # kill 1098160 00:10:34.412 07:06:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@976 -- # wait 1098160 00:10:34.671 07:06:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:10:34.671 00:10:34.671 real 0m19.461s 00:10:34.671 user 1m16.713s 00:10:34.671 sys 0m1.526s 00:10:34.671 07:06:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:34.671 07:06:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:34.671 ************************************ 00:10:34.671 END TEST nvmf_filesystem_in_capsule 00:10:34.671 ************************************ 00:10:34.671 07:06:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:10:34.671 07:06:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:34.671 07:06:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:10:34.671 07:06:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:34.671 07:06:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:10:34.671 07:06:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:34.671 07:06:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:34.671 rmmod nvme_tcp 00:10:34.930 rmmod nvme_fabrics 00:10:34.930 rmmod nvme_keyring 00:10:34.930 07:06:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:34.930 07:06:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:10:34.930 07:06:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:10:34.930 07:06:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:10:34.930 07:06:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:34.930 07:06:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:34.930 07:06:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:34.930 07:06:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:10:34.930 07:06:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-save 00:10:34.930 07:06:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:34.930 07:06:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-restore 00:10:34.930 07:06:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:34.930 07:06:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:34.930 07:06:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:10:34.930 07:06:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:34.930 07:06:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:36.834 07:06:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:36.835 00:10:36.835 real 0m48.488s 00:10:36.835 user 2m38.639s 00:10:36.835 sys 0m7.707s 00:10:36.835 07:06:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:36.835 07:06:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:36.835 ************************************ 00:10:36.835 END TEST nvmf_filesystem 00:10:36.835 ************************************ 00:10:37.094 07:06:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:10:37.094 07:06:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:10:37.094 07:06:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:37.094 07:06:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:37.094 ************************************ 00:10:37.094 START TEST nvmf_target_discovery 00:10:37.094 ************************************ 00:10:37.094 07:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:10:37.094 * Looking for test storage... 
00:10:37.094 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:37.094 07:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:37.094 07:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1691 -- # lcov --version 00:10:37.094 07:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:37.094 07:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:37.094 07:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:37.094 07:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:37.094 07:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:37.094 07:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:10:37.094 07:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:10:37.094 07:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:10:37.094 07:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:10:37.094 07:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:10:37.094 07:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:10:37.094 07:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:10:37.094 07:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:37.094 07:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:10:37.094 
07:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:10:37.094 07:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:37.094 07:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:37.094 07:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:10:37.094 07:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:10:37.094 07:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:37.094 07:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:10:37.094 07:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:10:37.094 07:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:10:37.094 07:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:10:37.094 07:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:37.094 07:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:10:37.094 07:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:10:37.094 07:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:37.094 07:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:37.094 07:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:10:37.094 07:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1' 00:10:37.094 07:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:37.094 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:37.094 --rc genhtml_branch_coverage=1 00:10:37.094 --rc genhtml_function_coverage=1 00:10:37.094 --rc genhtml_legend=1 00:10:37.094 --rc geninfo_all_blocks=1 00:10:37.094 --rc geninfo_unexecuted_blocks=1 00:10:37.094 00:10:37.094 ' 00:10:37.094 07:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:37.094 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:37.094 --rc genhtml_branch_coverage=1 00:10:37.094 --rc genhtml_function_coverage=1 00:10:37.094 --rc genhtml_legend=1 00:10:37.094 --rc geninfo_all_blocks=1 00:10:37.094 --rc geninfo_unexecuted_blocks=1 00:10:37.094 00:10:37.094 ' 00:10:37.094 07:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:37.094 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:37.094 --rc genhtml_branch_coverage=1 00:10:37.094 --rc genhtml_function_coverage=1 00:10:37.094 --rc genhtml_legend=1 00:10:37.094 --rc geninfo_all_blocks=1 00:10:37.094 --rc geninfo_unexecuted_blocks=1 00:10:37.094 00:10:37.094 ' 00:10:37.094 07:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:37.094 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:37.094 --rc genhtml_branch_coverage=1 00:10:37.094 --rc genhtml_function_coverage=1 00:10:37.094 --rc genhtml_legend=1 00:10:37.094 --rc geninfo_all_blocks=1 00:10:37.094 --rc geninfo_unexecuted_blocks=1 00:10:37.094 00:10:37.094 ' 00:10:37.094 07:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:37.094 07:06:41 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:10:37.094 07:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:37.094 07:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:37.094 07:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:37.095 07:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:37.095 07:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:37.095 07:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:37.095 07:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:37.095 07:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:37.095 07:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:37.095 07:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:37.354 07:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:10:37.354 07:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:10:37.354 07:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:37.354 07:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:37.354 07:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:10:37.354 07:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:37.354 07:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:37.354 07:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:10:37.354 07:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:37.354 07:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:37.354 07:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:37.354 07:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:37.354 07:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:37.354 07:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:37.354 07:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:10:37.354 07:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:37.354 07:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:10:37.354 07:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:37.354 07:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:37.354 07:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:37.354 07:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:37.354 07:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:37.354 07:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:37.354 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:37.354 07:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:37.354 07:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:37.354 07:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:37.354 07:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # 
NULL_BDEV_SIZE=102400 00:10:37.354 07:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:10:37.354 07:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:10:37.354 07:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:10:37.354 07:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:10:37.354 07:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:37.354 07:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:37.354 07:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:37.354 07:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:37.354 07:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:37.354 07:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:37.354 07:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:37.354 07:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:37.354 07:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:37.354 07:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:37.354 07:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:10:37.354 07:06:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:43.926 07:06:47 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:43.926 07:06:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:10:43.926 07:06:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:43.926 07:06:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:43.926 07:06:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:43.926 07:06:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:43.926 07:06:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:43.926 07:06:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:10:43.926 07:06:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:43.926 07:06:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:10:43.926 07:06:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:10:43.926 07:06:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:10:43.926 07:06:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:10:43.926 07:06:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:10:43.926 07:06:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:10:43.926 07:06:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:43.926 07:06:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:43.926 07:06:47 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:43.926 07:06:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:43.926 07:06:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:43.926 07:06:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:43.926 07:06:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:43.926 07:06:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:43.926 07:06:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:43.926 07:06:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:43.926 07:06:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:43.926 07:06:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:43.926 07:06:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:43.926 07:06:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:43.926 07:06:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:43.926 07:06:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:43.926 07:06:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 
00:10:43.926 07:06:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:43.926 07:06:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:43.926 07:06:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:43.926 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:43.926 07:06:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:43.926 07:06:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:43.926 07:06:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:43.926 07:06:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:43.926 07:06:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:43.926 07:06:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:43.926 07:06:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:43.926 Found 0000:86:00.1 (0x8086 - 0x159b) 00:10:43.926 07:06:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:43.926 07:06:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:43.926 07:06:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:43.926 07:06:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:43.926 07:06:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:43.926 07:06:47 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:43.926 07:06:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:43.926 07:06:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:43.926 07:06:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:43.926 07:06:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:43.926 07:06:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:43.926 07:06:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:43.926 07:06:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:43.926 07:06:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:43.926 07:06:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:43.926 07:06:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:43.926 Found net devices under 0000:86:00.0: cvl_0_0 00:10:43.926 07:06:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:43.926 07:06:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:43.926 07:06:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:43.926 07:06:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:43.926 07:06:47 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:43.926 07:06:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:43.926 07:06:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:43.926 07:06:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:43.926 07:06:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:43.926 Found net devices under 0000:86:00.1: cvl_0_1 00:10:43.926 07:06:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:43.926 07:06:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:43.926 07:06:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:10:43.926 07:06:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:43.926 07:06:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:43.926 07:06:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:43.927 07:06:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:43.927 07:06:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:43.927 07:06:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:43.927 07:06:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:43.927 07:06:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 
> 1 )) 00:10:43.927 07:06:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:43.927 07:06:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:43.927 07:06:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:43.927 07:06:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:43.927 07:06:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:43.927 07:06:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:43.927 07:06:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:43.927 07:06:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:43.927 07:06:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:43.927 07:06:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:43.927 07:06:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:43.927 07:06:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:43.927 07:06:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:43.927 07:06:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:43.927 07:06:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:10:43.927 07:06:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:43.927 07:06:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:43.927 07:06:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:43.927 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:43.927 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.366 ms 00:10:43.927 00:10:43.927 --- 10.0.0.2 ping statistics --- 00:10:43.927 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:43.927 rtt min/avg/max/mdev = 0.366/0.366/0.366/0.000 ms 00:10:43.927 07:06:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:43.927 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:43.927 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.122 ms 00:10:43.927 00:10:43.927 --- 10.0.0.1 ping statistics --- 00:10:43.927 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:43.927 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 00:10:43.927 07:06:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:43.927 07:06:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # return 0 00:10:43.927 07:06:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:43.927 07:06:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:43.927 07:06:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:43.927 07:06:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:43.927 07:06:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:43.927 07:06:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:43.927 07:06:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:43.927 07:06:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:10:43.927 07:06:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:43.927 07:06:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:43.927 07:06:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:43.927 07:06:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=1105010 00:10:43.927 07:06:47 
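For readers reproducing the `nvmf_tcp_init` steps traced above outside the harness, here is a dry-run sketch of the same topology: the target NIC is moved into a private namespace, addresses are assigned on both sides, an iptables rule opens port 4420, and a ping verifies the path. Interface names and IPs are taken from this log; `run` is a hypothetical helper that only echoes, since the real commands need root.

```shell
# Dry-run sketch of nvmf_tcp_init as traced above. run() only echoes each
# command so this is safe without root; drop the echo to actually apply it.
run() { echo "$@"; }

TARGET_IF=cvl_0_0 INITIATOR_IF=cvl_0_1 NS=cvl_0_0_ns_spdk

run ip -4 addr flush "$TARGET_IF"
run ip -4 addr flush "$INITIATOR_IF"
run ip netns add "$NS"
run ip link set "$TARGET_IF" netns "$NS"            # target NIC into the netns
run ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"     # initiator side
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
run ip link set "$INITIATOR_IF" up
run ip netns exec "$NS" ip link set "$TARGET_IF" up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                              # initiator -> target
```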
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 1105010 00:10:43.927 07:06:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:43.927 07:06:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@833 -- # '[' -z 1105010 ']' 00:10:43.927 07:06:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:43.927 07:06:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:43.927 07:06:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:43.927 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:43.927 07:06:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:43.927 07:06:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:43.927 [2024-11-20 07:06:47.724985] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 00:10:43.927 [2024-11-20 07:06:47.725029] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:43.927 [2024-11-20 07:06:47.801275] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:43.927 [2024-11-20 07:06:47.844543] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:10:43.927 [2024-11-20 07:06:47.844581] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:43.927 [2024-11-20 07:06:47.844588] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:43.927 [2024-11-20 07:06:47.844594] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:43.927 [2024-11-20 07:06:47.844599] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:43.927 [2024-11-20 07:06:47.846101] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:43.927 [2024-11-20 07:06:47.846209] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:43.927 [2024-11-20 07:06:47.846312] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:43.927 [2024-11-20 07:06:47.846313] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:43.927 07:06:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:43.927 07:06:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@866 -- # return 0 00:10:43.927 07:06:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:43.927 07:06:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:43.927 07:06:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:43.927 07:06:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:43.927 07:06:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:43.927 07:06:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
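The app start and transport setup traced above boil down to two commands. The sketch below echoes them rather than executing; the binary path is the in-tree build location from this log, and the transport flags are copied verbatim from the trace.

```shell
# Dry-run of the target launch and TCP transport creation traced above.
run() { echo "$@"; }
NS=cvl_0_0_ns_spdk

# nvmf_tgt runs inside the target namespace; -m 0xF pins it to 4 cores and
# -e 0xFFFF enables the tracepoint group mask noted in the log. The harness
# then waits for the RPC socket (/var/tmp/spdk.sock) before issuing RPCs.
run ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
# Transport flags as traced; -u sets the I/O unit size in bytes.
run ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
```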
common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.927 07:06:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:43.927 [2024-11-20 07:06:47.993245] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:43.927 07:06:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.927 07:06:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:10:43.927 07:06:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:43.927 07:06:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:10:43.927 07:06:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.927 07:06:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:43.927 Null1 00:10:43.927 07:06:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.927 07:06:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:43.927 07:06:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.927 07:06:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:43.927 07:06:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.927 07:06:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:10:43.927 07:06:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.927 
07:06:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:43.927 07:06:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.927 07:06:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:43.927 07:06:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.927 07:06:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:43.927 [2024-11-20 07:06:48.054112] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:43.927 07:06:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.927 07:06:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:43.927 07:06:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:10:43.927 07:06:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.928 07:06:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:43.928 Null2 00:10:43.928 07:06:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.928 07:06:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:10:43.928 07:06:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.928 07:06:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:43.928 
07:06:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.928 07:06:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:10:43.928 07:06:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.928 07:06:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:43.928 07:06:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.928 07:06:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:10:43.928 07:06:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.928 07:06:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:43.928 07:06:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.928 07:06:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:43.928 07:06:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:10:43.928 07:06:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.928 07:06:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:43.928 Null3 00:10:43.928 07:06:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.928 07:06:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s 
SPDK00000000000003 00:10:43.928 07:06:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.928 07:06:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:43.928 07:06:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.928 07:06:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:10:43.928 07:06:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.928 07:06:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:43.928 07:06:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.928 07:06:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:10:43.928 07:06:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.928 07:06:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:43.928 07:06:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.928 07:06:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:43.928 07:06:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:10:43.928 07:06:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.928 07:06:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:43.928 Null4 00:10:43.928 
07:06:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.928 07:06:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:10:43.928 07:06:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.928 07:06:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:43.928 07:06:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.928 07:06:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:10:43.928 07:06:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.928 07:06:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:43.928 07:06:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.928 07:06:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:10:43.928 07:06:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.928 07:06:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:43.928 07:06:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.928 07:06:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:43.928 07:06:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
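The four Null targets above are created by a single loop in discovery.sh (lines 26-30 of that script, per the trace markers). A dry-run equivalent, with `rpc_cmd` assumed to wrap `scripts/rpc.py` and echoing here instead of executing:

```shell
# Dry-run of the discovery.sh setup loop: one null bdev per subsystem, each
# with a TCP listener on 10.0.0.2:4420, plus the discovery listener at the end.
rpc_cmd() { echo "rpc.py $*"; }

for i in 1 2 3 4; do
  rpc_cmd bdev_null_create "Null$i" 102400 512          # size/block-size as traced
  rpc_cmd nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" \
          -a -s "$(printf 'SPDK%014d' "$i")"            # -a: allow any host
  rpc_cmd nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Null$i"
  rpc_cmd nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
          -t tcp -a 10.0.0.2 -s 4420
done
rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
```

The six discovery log entries reported by `nvme discover` further down (one current discovery subsystem, four NVMe subsystems, one referral) follow directly from this setup plus the 4430 referral added afterwards.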
common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.928 07:06:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:43.928 07:06:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.928 07:06:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:10:43.928 07:06:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.928 07:06:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:43.928 07:06:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.928 07:06:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:10:43.928 00:10:43.928 Discovery Log Number of Records 6, Generation counter 6 00:10:43.928 =====Discovery Log Entry 0====== 00:10:43.928 trtype: tcp 00:10:43.928 adrfam: ipv4 00:10:43.928 subtype: current discovery subsystem 00:10:43.928 treq: not required 00:10:43.928 portid: 0 00:10:43.928 trsvcid: 4420 00:10:43.928 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:10:43.928 traddr: 10.0.0.2 00:10:43.928 eflags: explicit discovery connections, duplicate discovery information 00:10:43.928 sectype: none 00:10:43.928 =====Discovery Log Entry 1====== 00:10:43.928 trtype: tcp 00:10:43.928 adrfam: ipv4 00:10:43.928 subtype: nvme subsystem 00:10:43.928 treq: not required 00:10:43.928 portid: 0 00:10:43.928 trsvcid: 4420 00:10:43.928 subnqn: nqn.2016-06.io.spdk:cnode1 00:10:43.928 traddr: 10.0.0.2 00:10:43.928 eflags: none 00:10:43.928 sectype: none 00:10:43.928 =====Discovery Log Entry 2====== 00:10:43.928 
trtype: tcp 00:10:43.928 adrfam: ipv4 00:10:43.928 subtype: nvme subsystem 00:10:43.928 treq: not required 00:10:43.928 portid: 0 00:10:43.928 trsvcid: 4420 00:10:43.928 subnqn: nqn.2016-06.io.spdk:cnode2 00:10:43.928 traddr: 10.0.0.2 00:10:43.928 eflags: none 00:10:43.928 sectype: none 00:10:43.928 =====Discovery Log Entry 3====== 00:10:43.928 trtype: tcp 00:10:43.928 adrfam: ipv4 00:10:43.928 subtype: nvme subsystem 00:10:43.928 treq: not required 00:10:43.928 portid: 0 00:10:43.928 trsvcid: 4420 00:10:43.928 subnqn: nqn.2016-06.io.spdk:cnode3 00:10:43.928 traddr: 10.0.0.2 00:10:43.928 eflags: none 00:10:43.928 sectype: none 00:10:43.928 =====Discovery Log Entry 4====== 00:10:43.928 trtype: tcp 00:10:43.928 adrfam: ipv4 00:10:43.928 subtype: nvme subsystem 00:10:43.928 treq: not required 00:10:43.928 portid: 0 00:10:43.928 trsvcid: 4420 00:10:43.928 subnqn: nqn.2016-06.io.spdk:cnode4 00:10:43.928 traddr: 10.0.0.2 00:10:43.928 eflags: none 00:10:43.928 sectype: none 00:10:43.928 =====Discovery Log Entry 5====== 00:10:43.928 trtype: tcp 00:10:43.928 adrfam: ipv4 00:10:43.928 subtype: discovery subsystem referral 00:10:43.928 treq: not required 00:10:43.928 portid: 0 00:10:43.928 trsvcid: 4430 00:10:43.928 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:10:43.928 traddr: 10.0.0.2 00:10:43.928 eflags: none 00:10:43.928 sectype: none 00:10:43.928 07:06:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:10:43.928 Perform nvmf subsystem discovery via RPC 00:10:43.928 07:06:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:10:43.928 07:06:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.928 07:06:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:43.928 [ 00:10:43.928 { 00:10:43.928 "nqn": 
"nqn.2014-08.org.nvmexpress.discovery", 00:10:43.928 "subtype": "Discovery", 00:10:43.928 "listen_addresses": [ 00:10:43.928 { 00:10:43.928 "trtype": "TCP", 00:10:43.928 "adrfam": "IPv4", 00:10:43.928 "traddr": "10.0.0.2", 00:10:43.928 "trsvcid": "4420" 00:10:43.928 } 00:10:43.928 ], 00:10:43.928 "allow_any_host": true, 00:10:43.928 "hosts": [] 00:10:43.928 }, 00:10:43.928 { 00:10:43.928 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:10:43.928 "subtype": "NVMe", 00:10:43.928 "listen_addresses": [ 00:10:43.928 { 00:10:43.928 "trtype": "TCP", 00:10:43.928 "adrfam": "IPv4", 00:10:43.928 "traddr": "10.0.0.2", 00:10:43.928 "trsvcid": "4420" 00:10:43.928 } 00:10:43.928 ], 00:10:43.928 "allow_any_host": true, 00:10:43.928 "hosts": [], 00:10:43.928 "serial_number": "SPDK00000000000001", 00:10:43.928 "model_number": "SPDK bdev Controller", 00:10:43.928 "max_namespaces": 32, 00:10:43.928 "min_cntlid": 1, 00:10:43.929 "max_cntlid": 65519, 00:10:43.929 "namespaces": [ 00:10:43.929 { 00:10:43.929 "nsid": 1, 00:10:43.929 "bdev_name": "Null1", 00:10:43.929 "name": "Null1", 00:10:43.929 "nguid": "41FF67416FB04296B6955D04CE87C42D", 00:10:43.929 "uuid": "41ff6741-6fb0-4296-b695-5d04ce87c42d" 00:10:43.929 } 00:10:43.929 ] 00:10:43.929 }, 00:10:43.929 { 00:10:43.929 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:10:43.929 "subtype": "NVMe", 00:10:43.929 "listen_addresses": [ 00:10:43.929 { 00:10:43.929 "trtype": "TCP", 00:10:43.929 "adrfam": "IPv4", 00:10:43.929 "traddr": "10.0.0.2", 00:10:43.929 "trsvcid": "4420" 00:10:43.929 } 00:10:43.929 ], 00:10:43.929 "allow_any_host": true, 00:10:43.929 "hosts": [], 00:10:43.929 "serial_number": "SPDK00000000000002", 00:10:43.929 "model_number": "SPDK bdev Controller", 00:10:43.929 "max_namespaces": 32, 00:10:43.929 "min_cntlid": 1, 00:10:43.929 "max_cntlid": 65519, 00:10:43.929 "namespaces": [ 00:10:43.929 { 00:10:43.929 "nsid": 1, 00:10:43.929 "bdev_name": "Null2", 00:10:43.929 "name": "Null2", 00:10:43.929 "nguid": "4B7B2F73C2FC4EFFA1882FE0371C9E97", 
00:10:43.929 "uuid": "4b7b2f73-c2fc-4eff-a188-2fe0371c9e97" 00:10:43.929 } 00:10:43.929 ] 00:10:43.929 }, 00:10:43.929 { 00:10:43.929 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:10:43.929 "subtype": "NVMe", 00:10:43.929 "listen_addresses": [ 00:10:43.929 { 00:10:43.929 "trtype": "TCP", 00:10:43.929 "adrfam": "IPv4", 00:10:43.929 "traddr": "10.0.0.2", 00:10:43.929 "trsvcid": "4420" 00:10:43.929 } 00:10:43.929 ], 00:10:43.929 "allow_any_host": true, 00:10:43.929 "hosts": [], 00:10:43.929 "serial_number": "SPDK00000000000003", 00:10:43.929 "model_number": "SPDK bdev Controller", 00:10:43.929 "max_namespaces": 32, 00:10:43.929 "min_cntlid": 1, 00:10:43.929 "max_cntlid": 65519, 00:10:43.929 "namespaces": [ 00:10:43.929 { 00:10:43.929 "nsid": 1, 00:10:43.929 "bdev_name": "Null3", 00:10:43.929 "name": "Null3", 00:10:43.929 "nguid": "AFF9653483A24308B27C3B3A9D9946DB", 00:10:43.929 "uuid": "aff96534-83a2-4308-b27c-3b3a9d9946db" 00:10:43.929 } 00:10:43.929 ] 00:10:43.929 }, 00:10:43.929 { 00:10:43.929 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:10:43.929 "subtype": "NVMe", 00:10:43.929 "listen_addresses": [ 00:10:43.929 { 00:10:43.929 "trtype": "TCP", 00:10:43.929 "adrfam": "IPv4", 00:10:43.929 "traddr": "10.0.0.2", 00:10:43.929 "trsvcid": "4420" 00:10:43.929 } 00:10:43.929 ], 00:10:43.929 "allow_any_host": true, 00:10:43.929 "hosts": [], 00:10:43.929 "serial_number": "SPDK00000000000004", 00:10:43.929 "model_number": "SPDK bdev Controller", 00:10:43.929 "max_namespaces": 32, 00:10:43.929 "min_cntlid": 1, 00:10:43.929 "max_cntlid": 65519, 00:10:43.929 "namespaces": [ 00:10:43.929 { 00:10:43.929 "nsid": 1, 00:10:43.929 "bdev_name": "Null4", 00:10:43.929 "name": "Null4", 00:10:43.929 "nguid": "FB6597E9D5474044ADDAE51B29929291", 00:10:43.929 "uuid": "fb6597e9-d547-4044-adda-e51b29929291" 00:10:43.929 } 00:10:43.929 ] 00:10:43.929 } 00:10:43.929 ] 00:10:43.929 07:06:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.929 
07:06:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:10:43.929 07:06:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:10:43.929 07:06:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:43.929 07:06:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.929 07:06:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:43.929 07:06:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.929 07:06:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:10:43.929 07:06:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.929 07:06:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:43.929 07:06:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.929 07:06:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:10:43.929 07:06:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:10:43.929 07:06:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.929 07:06:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:43.929 07:06:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.929 07:06:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd 
bdev_null_delete Null2 00:10:43.929 07:06:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.929 07:06:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:43.929 07:06:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.929 07:06:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:10:43.929 07:06:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:10:43.929 07:06:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.929 07:06:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:43.929 07:06:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.929 07:06:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:10:43.929 07:06:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.929 07:06:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:43.929 07:06:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.929 07:06:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:10:43.929 07:06:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:10:43.929 07:06:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.929 07:06:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:10:43.929 07:06:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.929 07:06:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:10:43.929 07:06:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.929 07:06:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:43.929 07:06:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.929 07:06:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:10:43.929 07:06:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.929 07:06:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:43.929 07:06:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.929 07:06:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:10:43.929 07:06:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:10:43.929 07:06:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.929 07:06:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:43.929 07:06:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.189 07:06:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:10:44.189 07:06:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
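Teardown mirrors setup: the trace above walks discovery.sh lines 42-47, deleting each subsystem and its backing bdev, then removing the referral. As a dry-run, with `rpc_cmd` again an echoing stand-in for `scripts/rpc.py`:

```shell
# Dry-run of the discovery.sh teardown traced above (@42-@47): delete each
# subsystem and its null bdev, then drop the port-4430 discovery referral.
rpc_cmd() { echo "rpc.py $*"; }

for i in 1 2 3 4; do
  rpc_cmd nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode$i"
  rpc_cmd bdev_null_delete "Null$i"
done
rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430
```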
target/discovery.sh@50 -- # '[' -n '' ']' 00:10:44.189 07:06:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:10:44.189 07:06:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:10:44.189 07:06:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:44.189 07:06:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:10:44.189 07:06:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:44.189 07:06:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:10:44.189 07:06:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:44.189 07:06:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:44.189 rmmod nvme_tcp 00:10:44.189 rmmod nvme_fabrics 00:10:44.189 rmmod nvme_keyring 00:10:44.189 07:06:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:44.189 07:06:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:10:44.189 07:06:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:10:44.189 07:06:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 1105010 ']' 00:10:44.189 07:06:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 1105010 00:10:44.189 07:06:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@952 -- # '[' -z 1105010 ']' 00:10:44.189 07:06:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # kill -0 1105010 00:10:44.189 07:06:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@957 -- # uname 
00:10:44.189 07:06:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:44.189 07:06:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1105010 00:10:44.189 07:06:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:44.189 07:06:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:44.189 07:06:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1105010' 00:10:44.189 killing process with pid 1105010 00:10:44.189 07:06:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@971 -- # kill 1105010 00:10:44.189 07:06:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@976 -- # wait 1105010 00:10:44.449 07:06:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:44.449 07:06:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:44.449 07:06:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:44.449 07:06:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:10:44.449 07:06:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-save 00:10:44.449 07:06:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:44.449 07:06:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:10:44.449 07:06:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:44.449 07:06:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@302 -- # remove_spdk_ns 00:10:44.449 07:06:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:44.449 07:06:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:44.449 07:06:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:46.355 07:06:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:46.355 00:10:46.355 real 0m9.417s 00:10:46.355 user 0m5.736s 00:10:46.355 sys 0m4.814s 00:10:46.355 07:06:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:46.355 07:06:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:46.355 ************************************ 00:10:46.355 END TEST nvmf_target_discovery 00:10:46.355 ************************************ 00:10:46.615 07:06:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:10:46.615 07:06:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:10:46.615 07:06:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:46.615 07:06:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:46.615 ************************************ 00:10:46.615 START TEST nvmf_referrals 00:10:46.615 ************************************ 00:10:46.615 07:06:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:10:46.615 * Looking for test storage... 
00:10:46.615 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:10:46.615 07:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:10:46.615 07:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1691 -- # lcov --version
00:10:46.615 07:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:10:46.615 07:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:10:46.615 07:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:10:46.615 07:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l
00:10:46.615 07:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l
00:10:46.615 07:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-:
00:10:46.615 07:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1
00:10:46.615 07:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-:
00:10:46.615 07:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2
00:10:46.615 07:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<'
00:10:46.615 07:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2
00:10:46.615 07:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1
00:10:46.615 07:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:10:46.615 07:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in
00:10:46.615 07:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1
00:10:46.615 07:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 ))
00:10:46.615 07:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:10:46.615 07:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1
00:10:46.615 07:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1
00:10:46.615 07:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:10:46.615 07:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1
00:10:46.615 07:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1
00:10:46.615 07:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2
00:10:46.615 07:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2
00:10:46.615 07:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:10:46.615 07:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2
00:10:46.615 07:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2
00:10:46.615 07:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:10:46.615 07:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:10:46.615 07:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0
00:10:46.615 07:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:10:46.615 07:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS=
00:10:46.615 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:46.615 --rc genhtml_branch_coverage=1
00:10:46.615 --rc genhtml_function_coverage=1
00:10:46.615 --rc genhtml_legend=1
00:10:46.615 --rc geninfo_all_blocks=1
00:10:46.615 --rc geninfo_unexecuted_blocks=1
00:10:46.615
00:10:46.615 '
00:10:46.615 07:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1704 -- # LCOV_OPTS='
00:10:46.615 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:46.615 --rc genhtml_branch_coverage=1
00:10:46.615 --rc genhtml_function_coverage=1
00:10:46.615 --rc genhtml_legend=1
00:10:46.615 --rc geninfo_all_blocks=1
00:10:46.615 --rc geninfo_unexecuted_blocks=1
00:10:46.615
00:10:46.615 '
00:10:46.615 07:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov
00:10:46.615 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:46.615 --rc genhtml_branch_coverage=1
00:10:46.616 --rc genhtml_function_coverage=1
00:10:46.616 --rc genhtml_legend=1
00:10:46.616 --rc geninfo_all_blocks=1
00:10:46.616 --rc geninfo_unexecuted_blocks=1
00:10:46.616
00:10:46.616 '
00:10:46.616 07:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1705 -- # LCOV='lcov
00:10:46.616 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:46.616 --rc genhtml_branch_coverage=1
00:10:46.616 --rc genhtml_function_coverage=1
00:10:46.616 --rc genhtml_legend=1
00:10:46.616 --rc geninfo_all_blocks=1
00:10:46.616 --rc geninfo_unexecuted_blocks=1
00:10:46.616
00:10:46.616 '
00:10:46.616 07:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:10:46.616 07:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s
00:10:46.616 07:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:10:46.616 07:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:10:46.616 07:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:10:46.616 07:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:10:46.616 07:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:10:46.616 07:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:10:46.616 07:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:10:46.616 07:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:10:46.616 07:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:10:46.616 07:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:10:46.616 07:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:10:46.616 07:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562
00:10:46.616 07:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:10:46.616 07:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:10:46.616 07:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:10:46.616 07:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:10:46.616 07:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:10:46.616 07:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob
00:10:46.616 07:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:10:46.616 07:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:10:46.616 07:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:10:46.616 07:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:10:46.616 07:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:10:46.616 07:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:10:46.616 07:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH
00:10:46.616 07:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:10:46.616 07:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0
00:10:46.616 07:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:10:46.616 07:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:10:46.616 07:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:10:46.616 07:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:10:46.616 07:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:10:46.616 07:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:10:46.616 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:10:46.616 07:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:10:46.616 07:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:10:46.616 07:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0
00:10:46.616 07:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2
00:10:46.616 07:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3
00:10:46.616 07:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4
00:10:46.616 07:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430
00:10:46.616 07:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery
00:10:46.616 07:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1
00:10:46.616 07:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit
00:10:46.616 07:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:10:46.616 07:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:10:46.616 07:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs
00:10:46.616 07:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no
00:10:46.616 07:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns
00:10:46.616 07:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:10:46.616 07:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:10:46.616 07:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:10:46.616 07:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:10:46.616 07:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:10:46.616 07:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable
00:10:46.617 07:06:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:10:53.186 07:06:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:10:53.186 07:06:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=()
00:10:53.186 07:06:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs
00:10:53.186 07:06:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=()
00:10:53.186 07:06:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:10:53.186 07:06:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=()
00:10:53.186 07:06:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers
00:10:53.186 07:06:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=()
00:10:53.186 07:06:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs
00:10:53.186 07:06:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # e810=()
00:10:53.186 07:06:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810
00:10:53.186 07:06:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=()
00:10:53.186 07:06:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722
00:10:53.186 07:06:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=()
00:10:53.186 07:06:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx
00:10:53.186 07:06:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:10:53.186 07:06:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:10:53.186 07:06:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:10:53.186 07:06:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:10:53.186 07:06:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:10:53.186 07:06:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:10:53.186 07:06:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:10:53.186 07:06:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:10:53.186 07:06:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:10:53.186 07:06:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:10:53.186 07:06:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:10:53.186 07:06:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:10:53.186 07:06:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:10:53.186 07:06:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:10:53.186 07:06:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:10:53.186 07:06:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:10:53.186 07:06:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:10:53.186 07:06:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:10:53.186 07:06:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:10:53.186 07:06:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)'
00:10:53.186 Found 0000:86:00.0 (0x8086 - 0x159b)
00:10:53.186 07:06:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:10:53.186 07:06:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:10:53.186 07:06:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:10:53.186 07:06:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:10:53.186 07:06:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:10:53.186 07:06:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:10:53.186 07:06:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)'
00:10:53.186 Found 0000:86:00.1 (0x8086 - 0x159b)
00:10:53.187 07:06:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:10:53.187 07:06:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:10:53.187 07:06:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:10:53.187 07:06:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:10:53.187 07:06:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:10:53.187 07:06:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:10:53.187 07:06:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:10:53.187 07:06:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:10:53.187 07:06:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:10:53.187 07:06:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:10:53.187 07:06:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:10:53.187 07:06:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:10:53.187 07:06:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]]
00:10:53.187 07:06:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:10:53.187 07:06:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:10:53.187 07:06:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0'
00:10:53.187 Found net devices under 0000:86:00.0: cvl_0_0
00:10:53.187 07:06:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:10:53.187 07:06:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:10:53.187 07:06:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:10:53.187 07:06:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:10:53.187 07:06:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:10:53.187 07:06:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]]
00:10:53.187 07:06:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:10:53.187 07:06:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:10:53.187 07:06:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1'
00:10:53.187 Found net devices under 0000:86:00.1: cvl_0_1
00:10:53.187 07:06:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:10:53.187 07:06:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:10:53.187 07:06:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # is_hw=yes
00:10:53.187 07:06:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:10:53.187 07:06:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:10:53.187 07:06:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:10:53.187 07:06:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:10:53.187 07:06:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:10:53.187 07:06:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:10:53.187 07:06:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:10:53.187 07:06:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:10:53.187 07:06:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:10:53.187 07:06:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:10:53.187 07:06:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:10:53.187 07:06:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:10:53.187 07:06:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:10:53.187 07:06:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:10:53.187 07:06:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:10:53.187 07:06:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:10:53.187 07:06:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:10:53.187 07:06:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:10:53.187 07:06:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:10:53.187 07:06:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:10:53.187 07:06:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:10:53.187 07:06:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:10:53.187 07:06:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:10:53.187 07:06:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:10:53.187 07:06:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:10:53.187 07:06:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:10:53.187 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:10:53.187 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.476 ms
00:10:53.187
00:10:53.187 --- 10.0.0.2 ping statistics ---
00:10:53.187 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:10:53.187 rtt min/avg/max/mdev = 0.476/0.476/0.476/0.000 ms
00:10:53.187 07:06:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:10:53.187 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:10:53.187 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.173 ms
00:10:53.187
00:10:53.187 --- 10.0.0.1 ping statistics ---
00:10:53.187 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:10:53.187 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms
00:10:53.187 07:06:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:10:53.187 07:06:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # return 0
00:10:53.187 07:06:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:10:53.187 07:06:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:10:53.187 07:06:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:10:53.187 07:06:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:10:53.187 07:06:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:10:53.187 07:06:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:10:53.187 07:06:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:10:53.187 07:06:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF
00:10:53.187 07:06:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:10:53.187 07:06:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@724 -- # xtrace_disable
00:10:53.187 07:06:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:10:53.187 07:06:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=1108803
00:10:53.187 07:06:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 1108803
00:10:53.187 07:06:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:10:53.187 07:06:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@833 -- # '[' -z 1108803 ']'
00:10:53.187 07:06:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:10:53.187 07:06:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@838 -- # local max_retries=100
00:10:53.187 07:06:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:10:53.187 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:10:53.187 07:06:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # xtrace_disable
00:10:53.187 07:06:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:10:53.187 [2024-11-20 07:06:57.163092] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization...
00:10:53.187 [2024-11-20 07:06:57.163142] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:10:53.187 [2024-11-20 07:06:57.242753] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:10:53.187 [2024-11-20 07:06:57.285474] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:10:53.187 [2024-11-20 07:06:57.285511] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:10:53.187 [2024-11-20 07:06:57.285518] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:53.187 [2024-11-20 07:06:57.285524] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:53.187 [2024-11-20 07:06:57.285530] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:53.187 [2024-11-20 07:06:57.287023] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:53.187 [2024-11-20 07:06:57.287059] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:53.187 [2024-11-20 07:06:57.287168] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:53.187 [2024-11-20 07:06:57.287169] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:53.187 07:06:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:53.187 07:06:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@866 -- # return 0 00:10:53.187 07:06:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:53.187 07:06:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:53.187 07:06:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:53.187 07:06:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:53.187 07:06:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:53.187 07:06:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.187 07:06:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:53.187 [2024-11-20 07:06:57.426355] tcp.c: 
738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:53.187 07:06:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.187 07:06:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:10:53.187 07:06:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.187 07:06:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:53.187 [2024-11-20 07:06:57.454091] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:10:53.187 07:06:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.187 07:06:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:10:53.187 07:06:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.187 07:06:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:53.187 07:06:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.187 07:06:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:10:53.187 07:06:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.187 07:06:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:53.187 07:06:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.187 07:06:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:10:53.187 07:06:57 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.187 07:06:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:53.187 07:06:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.187 07:06:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:53.187 07:06:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:10:53.187 07:06:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.187 07:06:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:53.187 07:06:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.187 07:06:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:10:53.187 07:06:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:10:53.187 07:06:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:10:53.187 07:06:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:53.187 07:06:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:10:53.187 07:06:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.187 07:06:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:10:53.187 07:06:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:53.187 07:06:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.187 07:06:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:10:53.187 07:06:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:10:53.187 07:06:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:10:53.187 07:06:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:53.187 07:06:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:53.187 07:06:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:53.187 07:06:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:53.187 07:06:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:53.445 07:06:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:10:53.445 07:06:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:10:53.445 07:06:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:10:53.445 07:06:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.445 07:06:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:53.445 07:06:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.445 07:06:57 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:10:53.445 07:06:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.445 07:06:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:53.445 07:06:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.445 07:06:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:10:53.445 07:06:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.446 07:06:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:53.446 07:06:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.446 07:06:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:53.446 07:06:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:10:53.446 07:06:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.446 07:06:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:53.446 07:06:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.446 07:06:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:10:53.446 07:06:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:10:53.446 07:06:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:53.446 07:06:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ 
nvme == \n\v\m\e ]] 00:10:53.446 07:06:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:53.446 07:06:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:53.446 07:06:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:53.704 07:06:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:10:53.704 07:06:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:10:53.704 07:06:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:10:53.704 07:06:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.704 07:06:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:53.704 07:06:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.704 07:06:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:10:53.704 07:06:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.704 07:06:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:53.704 07:06:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.704 07:06:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:10:53.704 07:06:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:10:53.704 07:06:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:53.704 07:06:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:10:53.704 07:06:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.704 07:06:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:10:53.704 07:06:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:53.704 07:06:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.704 07:06:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:10:53.704 07:06:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:10:53.704 07:06:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:10:53.704 07:06:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:53.704 07:06:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:53.704 07:06:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:53.704 07:06:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:53.704 07:06:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:54.003 07:06:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:10:54.003 07:06:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:10:54.003 07:06:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:10:54.003 07:06:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:10:54.003 07:06:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:10:54.003 07:06:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:54.003 07:06:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:10:54.003 07:06:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:10:54.003 07:06:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:10:54.003 07:06:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:10:54.003 07:06:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:10:54.003 07:06:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:54.003 07:06:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery 
subsystem referral")' 00:10:54.261 07:06:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:10:54.261 07:06:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:10:54.261 07:06:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.261 07:06:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:54.261 07:06:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.261 07:06:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:10:54.261 07:06:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:10:54.261 07:06:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:10:54.261 07:06:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:54.261 07:06:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.261 07:06:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:10:54.261 07:06:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:54.261 07:06:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.261 07:06:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:10:54.261 07:06:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:10:54.261 07:06:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals 
-- target/referrals.sh@74 -- # get_referral_ips nvme 00:10:54.261 07:06:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:54.261 07:06:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:54.261 07:06:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:54.261 07:06:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:54.261 07:06:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:54.519 07:06:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:10:54.519 07:06:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:10:54.519 07:06:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:10:54.519 07:06:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:10:54.519 07:06:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:10:54.519 07:06:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:54.519 07:06:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:10:54.777 07:06:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:10:54.777 07:06:59 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:10:54.777 07:06:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:10:54.777 07:06:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:10:54.777 07:06:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:54.777 07:06:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:10:54.777 07:06:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:10:54.777 07:06:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:10:54.777 07:06:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.777 07:06:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:54.777 07:06:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.034 07:06:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:55.034 07:06:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:10:55.034 07:06:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.034 07:06:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@10 -- # set +x 00:10:55.034 07:06:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.034 07:06:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:10:55.034 07:06:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:10:55.034 07:06:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:55.034 07:06:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:55.034 07:06:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:55.034 07:06:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:55.034 07:06:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:55.292 07:06:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:10:55.292 07:06:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:10:55.292 07:06:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:10:55.292 07:06:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:10:55.292 07:06:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:55.292 07:06:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:10:55.292 07:06:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:55.292 07:06:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # 
set +e 00:10:55.292 07:06:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:55.292 07:06:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:55.292 rmmod nvme_tcp 00:10:55.292 rmmod nvme_fabrics 00:10:55.292 rmmod nvme_keyring 00:10:55.292 07:06:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:55.292 07:06:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:10:55.292 07:06:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:10:55.292 07:06:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 1108803 ']' 00:10:55.292 07:06:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 1108803 00:10:55.292 07:06:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@952 -- # '[' -z 1108803 ']' 00:10:55.292 07:06:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # kill -0 1108803 00:10:55.292 07:06:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@957 -- # uname 00:10:55.292 07:06:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:55.292 07:06:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1108803 00:10:55.292 07:06:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:55.292 07:06:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:55.292 07:06:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1108803' 00:10:55.292 killing process with pid 1108803 00:10:55.292 07:06:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@971 -- # kill 1108803 00:10:55.292 07:06:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@976 -- # wait 1108803 00:10:55.551 07:06:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:55.551 07:06:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:55.551 07:06:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:55.551 07:06:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:10:55.551 07:06:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-save 00:10:55.551 07:06:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:55.551 07:06:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-restore 00:10:55.551 07:06:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:55.551 07:06:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:55.551 07:06:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:55.551 07:06:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:55.551 07:06:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:57.453 07:07:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:57.453 00:10:57.453 real 0m10.992s 00:10:57.453 user 0m12.906s 00:10:57.453 sys 0m5.155s 00:10:57.454 07:07:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:57.454 07:07:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:57.454 
************************************ 00:10:57.454 END TEST nvmf_referrals 00:10:57.454 ************************************ 00:10:57.454 07:07:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:10:57.454 07:07:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:10:57.454 07:07:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:57.454 07:07:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:57.713 ************************************ 00:10:57.713 START TEST nvmf_connect_disconnect 00:10:57.713 ************************************ 00:10:57.713 07:07:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:10:57.713 * Looking for test storage... 
00:10:57.713 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:57.713 07:07:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:57.713 07:07:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1691 -- # lcov --version 00:10:57.713 07:07:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:57.713 07:07:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:57.713 07:07:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:57.713 07:07:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:57.713 07:07:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:57.713 07:07:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:10:57.713 07:07:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:10:57.713 07:07:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:10:57.713 07:07:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:10:57.713 07:07:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:10:57.713 07:07:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:10:57.713 07:07:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:10:57.713 07:07:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:57.713 07:07:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- 
# case "$op" in 00:10:57.713 07:07:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:10:57.713 07:07:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:57.713 07:07:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:57.713 07:07:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:10:57.713 07:07:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:10:57.713 07:07:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:57.713 07:07:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:10:57.713 07:07:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:10:57.713 07:07:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:10:57.713 07:07:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:10:57.713 07:07:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:57.713 07:07:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:10:57.713 07:07:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:10:57.713 07:07:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:57.713 07:07:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:57.713 07:07:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:10:57.714 07:07:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:57.714 07:07:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:57.714 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:57.714 --rc genhtml_branch_coverage=1 00:10:57.714 --rc genhtml_function_coverage=1 00:10:57.714 --rc genhtml_legend=1 00:10:57.714 --rc geninfo_all_blocks=1 00:10:57.714 --rc geninfo_unexecuted_blocks=1 00:10:57.714 00:10:57.714 ' 00:10:57.714 07:07:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:57.714 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:57.714 --rc genhtml_branch_coverage=1 00:10:57.714 --rc genhtml_function_coverage=1 00:10:57.714 --rc genhtml_legend=1 00:10:57.714 --rc geninfo_all_blocks=1 00:10:57.714 --rc geninfo_unexecuted_blocks=1 00:10:57.714 00:10:57.714 ' 00:10:57.714 07:07:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:57.714 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:57.714 --rc genhtml_branch_coverage=1 00:10:57.714 --rc genhtml_function_coverage=1 00:10:57.714 --rc genhtml_legend=1 00:10:57.714 --rc geninfo_all_blocks=1 00:10:57.714 --rc geninfo_unexecuted_blocks=1 00:10:57.714 00:10:57.714 ' 00:10:57.714 07:07:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:57.714 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:57.714 --rc genhtml_branch_coverage=1 00:10:57.714 --rc genhtml_function_coverage=1 00:10:57.714 --rc genhtml_legend=1 00:10:57.714 --rc geninfo_all_blocks=1 00:10:57.714 --rc geninfo_unexecuted_blocks=1 00:10:57.714 00:10:57.714 ' 00:10:57.714 07:07:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:57.714 07:07:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:10:57.714 07:07:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:57.714 07:07:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:57.714 07:07:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:57.714 07:07:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:57.714 07:07:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:57.714 07:07:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:57.714 07:07:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:57.714 07:07:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:57.714 07:07:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:57.714 07:07:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:57.714 07:07:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:10:57.714 07:07:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:10:57.714 07:07:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:57.714 07:07:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:10:57.714 07:07:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:57.714 07:07:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:57.714 07:07:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:57.714 07:07:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:10:57.714 07:07:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:57.714 07:07:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:57.714 07:07:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:57.714 07:07:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:57.714 07:07:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:57.714 07:07:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:57.714 07:07:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:10:57.714 07:07:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:57.714 07:07:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:10:57.714 07:07:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:57.714 07:07:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:57.714 07:07:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:57.714 07:07:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:57.714 07:07:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:57.714 07:07:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:57.714 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:57.714 07:07:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:57.714 07:07:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:57.714 07:07:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:57.714 07:07:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:57.714 07:07:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:57.714 07:07:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:10:57.714 07:07:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:57.714 07:07:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:57.714 07:07:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:57.714 07:07:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:57.714 07:07:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:57.714 07:07:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:57.714 07:07:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:57.714 07:07:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:57.714 07:07:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:57.714 07:07:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:57.714 07:07:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:10:57.714 07:07:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:04.284 07:07:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:04.284 07:07:07 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:11:04.284 07:07:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:04.284 07:07:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:04.284 07:07:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:04.284 07:07:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:04.284 07:07:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:04.284 07:07:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:11:04.284 07:07:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:04.284 07:07:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:11:04.284 07:07:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:11:04.284 07:07:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:11:04.284 07:07:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:11:04.284 07:07:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:11:04.284 07:07:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:11:04.284 07:07:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:04.284 07:07:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:04.284 07:07:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:04.284 07:07:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:04.284 07:07:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:04.284 07:07:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:04.284 07:07:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:04.284 07:07:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:04.284 07:07:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:04.284 07:07:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:04.284 07:07:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:04.284 07:07:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:04.284 07:07:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:04.284 07:07:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:04.284 07:07:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:04.284 07:07:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:04.284 07:07:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:04.284 07:07:07 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:04.284 07:07:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:04.284 07:07:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:11:04.284 Found 0000:86:00.0 (0x8086 - 0x159b) 00:11:04.284 07:07:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:04.284 07:07:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:04.284 07:07:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:04.284 07:07:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:04.284 07:07:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:04.284 07:07:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:04.284 07:07:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:11:04.284 Found 0000:86:00.1 (0x8086 - 0x159b) 00:11:04.284 07:07:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:04.284 07:07:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:04.285 07:07:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:04.285 07:07:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:04.285 07:07:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:04.285 07:07:07 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:04.285 07:07:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:04.285 07:07:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:04.285 07:07:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:04.285 07:07:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:04.285 07:07:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:04.285 07:07:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:04.285 07:07:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:04.285 07:07:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:04.285 07:07:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:04.285 07:07:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:11:04.285 Found net devices under 0000:86:00.0: cvl_0_0 00:11:04.285 07:07:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:04.285 07:07:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:04.285 07:07:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:04.285 07:07:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:04.285 07:07:07 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:04.285 07:07:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:04.285 07:07:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:04.285 07:07:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:04.285 07:07:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:11:04.285 Found net devices under 0000:86:00.1: cvl_0_1 00:11:04.285 07:07:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:04.285 07:07:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:04.285 07:07:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:11:04.285 07:07:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:04.285 07:07:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:04.285 07:07:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:04.285 07:07:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:04.285 07:07:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:04.285 07:07:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:04.285 07:07:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:04.285 07:07:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect 
-- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:04.285 07:07:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:04.285 07:07:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:04.285 07:07:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:04.285 07:07:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:04.285 07:07:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:04.285 07:07:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:04.285 07:07:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:04.285 07:07:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:04.285 07:07:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:04.285 07:07:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:04.285 07:07:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:04.285 07:07:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:04.285 07:07:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:04.285 07:07:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:04.285 07:07:08 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:04.285 07:07:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:04.285 07:07:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:04.285 07:07:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:04.285 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:04.285 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.464 ms 00:11:04.285 00:11:04.285 --- 10.0.0.2 ping statistics --- 00:11:04.285 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:04.285 rtt min/avg/max/mdev = 0.464/0.464/0.464/0.000 ms 00:11:04.285 07:07:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:04.285 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:04.285 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.208 ms 00:11:04.285 00:11:04.285 --- 10.0.0.1 ping statistics --- 00:11:04.285 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:04.285 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:11:04.285 07:07:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:04.285 07:07:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # return 0 00:11:04.285 07:07:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:04.285 07:07:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:04.285 07:07:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:04.285 07:07:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:04.285 07:07:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:04.285 07:07:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:04.285 07:07:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:04.285 07:07:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:11:04.285 07:07:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:04.285 07:07:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:04.285 07:07:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:04.285 07:07:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # 
nvmfpid=1112879 00:11:04.285 07:07:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:04.285 07:07:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 1112879 00:11:04.285 07:07:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@833 -- # '[' -z 1112879 ']' 00:11:04.285 07:07:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:04.285 07:07:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:04.285 07:07:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:04.285 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:04.285 07:07:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:04.285 07:07:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:04.285 [2024-11-20 07:07:08.266338] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 00:11:04.285 [2024-11-20 07:07:08.266388] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:04.285 [2024-11-20 07:07:08.348329] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:04.285 [2024-11-20 07:07:08.390862] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:11:04.285 [2024-11-20 07:07:08.390898] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:04.285 [2024-11-20 07:07:08.390907] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:04.285 [2024-11-20 07:07:08.390913] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:04.285 [2024-11-20 07:07:08.390918] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:04.285 [2024-11-20 07:07:08.392351] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:04.285 [2024-11-20 07:07:08.392461] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:04.285 [2024-11-20 07:07:08.392564] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:04.285 [2024-11-20 07:07:08.392565] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:04.285 07:07:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:04.285 07:07:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@866 -- # return 0 00:11:04.285 07:07:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:04.285 07:07:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:04.286 07:07:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:04.286 07:07:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:04.286 07:07:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:11:04.286 07:07:08 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.286 07:07:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:04.286 [2024-11-20 07:07:08.530557] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:04.286 07:07:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.286 07:07:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:11:04.286 07:07:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.286 07:07:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:04.286 07:07:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.286 07:07:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:11:04.286 07:07:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:04.286 07:07:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.286 07:07:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:04.286 07:07:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.286 07:07:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:04.286 07:07:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.286 07:07:08 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:04.286 07:07:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.286 07:07:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:04.286 07:07:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.286 07:07:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:04.286 [2024-11-20 07:07:08.589348] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:04.286 07:07:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.286 07:07:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:11:04.286 07:07:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:11:04.286 07:07:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:11:07.571 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:10.853 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:14.137 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:17.538 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:20.823 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:20.823 07:07:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:11:20.823 07:07:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:11:20.823 07:07:24 
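The target-side setup that connect_disconnect.sh drives through `rpc_cmd` in the log above reduces to a short RPC sequence. A minimal sketch for reference — the `scripts/rpc.py` location and the `$RPC` variable are assumptions for illustration, not taken from this log; the RPC names, NQN, serial, address, and port are exactly those shown above:

```shell
# Sketch of the subsystem setup logged above. Requires a running nvmf_tgt;
# adjust RPC to point at rpc.py inside the SPDK checkout in use.
RPC="scripts/rpc.py"

$RPC nvmf_create_transport -t tcp -o -u 8192 -c 0   # TCP transport init (see "TCP Transport Init" notice)
$RPC bdev_malloc_create 64 512                      # creates the "Malloc0" bdev used as the namespace
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
```

After the listener is added, the test loops `num_iterations=5` times, connecting and disconnecting an initiator against that NQN, matching the five "disconnected 1 controller(s)" lines that follow.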
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:20.823 07:07:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:11:20.823 07:07:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:20.823 07:07:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:11:20.823 07:07:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:20.823 07:07:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:20.823 rmmod nvme_tcp 00:11:20.823 rmmod nvme_fabrics 00:11:20.823 rmmod nvme_keyring 00:11:20.823 07:07:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:20.823 07:07:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:11:20.823 07:07:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:11:20.823 07:07:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 1112879 ']' 00:11:20.823 07:07:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 1112879 00:11:20.823 07:07:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # '[' -z 1112879 ']' 00:11:20.823 07:07:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # kill -0 1112879 00:11:20.823 07:07:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@957 -- # uname 00:11:20.823 07:07:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:20.823 07:07:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1112879 
00:11:20.823 07:07:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:20.823 07:07:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:20.823 07:07:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1112879' 00:11:20.823 killing process with pid 1112879 00:11:20.823 07:07:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@971 -- # kill 1112879 00:11:20.823 07:07:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@976 -- # wait 1112879 00:11:20.823 07:07:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:20.823 07:07:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:20.823 07:07:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:20.823 07:07:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:11:20.823 07:07:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:11:20.823 07:07:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:11:20.823 07:07:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:20.823 07:07:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:20.823 07:07:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:20.823 07:07:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:20.823 07:07:25 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:20.823 07:07:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:22.728 07:07:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:22.728 00:11:22.728 real 0m25.188s 00:11:22.728 user 1m8.095s 00:11:22.728 sys 0m5.867s 00:11:22.728 07:07:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:22.728 07:07:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:22.728 ************************************ 00:11:22.728 END TEST nvmf_connect_disconnect 00:11:22.728 ************************************ 00:11:22.728 07:07:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:11:22.728 07:07:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:11:22.728 07:07:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:22.728 07:07:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:22.728 ************************************ 00:11:22.728 START TEST nvmf_multitarget 00:11:22.728 ************************************ 00:11:22.728 07:07:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:11:22.987 * Looking for test storage... 
00:11:22.987 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:22.987 07:07:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:22.987 07:07:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1691 -- # lcov --version 00:11:22.987 07:07:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:22.987 07:07:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:22.987 07:07:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:22.987 07:07:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:22.987 07:07:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:22.987 07:07:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:11:22.987 07:07:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:11:22.987 07:07:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:11:22.987 07:07:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:11:22.988 07:07:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:11:22.988 07:07:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:11:22.988 07:07:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:11:22.988 07:07:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:22.988 07:07:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:11:22.988 07:07:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # 
: 1 00:11:22.988 07:07:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:22.988 07:07:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:22.988 07:07:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:11:22.988 07:07:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:11:22.988 07:07:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:22.988 07:07:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:11:22.988 07:07:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:11:22.988 07:07:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:11:22.988 07:07:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:11:22.988 07:07:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:22.988 07:07:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:11:22.988 07:07:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:11:22.988 07:07:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:22.988 07:07:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:22.988 07:07:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:11:22.988 07:07:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:22.988 07:07:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:22.988 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:22.988 --rc genhtml_branch_coverage=1 00:11:22.988 --rc genhtml_function_coverage=1 00:11:22.988 --rc genhtml_legend=1 00:11:22.988 --rc geninfo_all_blocks=1 00:11:22.988 --rc geninfo_unexecuted_blocks=1 00:11:22.988 00:11:22.988 ' 00:11:22.988 07:07:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:22.988 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:22.988 --rc genhtml_branch_coverage=1 00:11:22.988 --rc genhtml_function_coverage=1 00:11:22.988 --rc genhtml_legend=1 00:11:22.988 --rc geninfo_all_blocks=1 00:11:22.988 --rc geninfo_unexecuted_blocks=1 00:11:22.988 00:11:22.988 ' 00:11:22.988 07:07:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:22.988 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:22.988 --rc genhtml_branch_coverage=1 00:11:22.988 --rc genhtml_function_coverage=1 00:11:22.988 --rc genhtml_legend=1 00:11:22.988 --rc geninfo_all_blocks=1 00:11:22.988 --rc geninfo_unexecuted_blocks=1 00:11:22.988 00:11:22.988 ' 00:11:22.988 07:07:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:22.988 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:22.988 --rc genhtml_branch_coverage=1 00:11:22.988 --rc genhtml_function_coverage=1 00:11:22.988 --rc genhtml_legend=1 00:11:22.988 --rc geninfo_all_blocks=1 00:11:22.988 --rc geninfo_unexecuted_blocks=1 00:11:22.988 00:11:22.988 ' 00:11:22.988 07:07:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:22.988 07:07:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:11:22.988 07:07:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:22.988 07:07:27 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:22.988 07:07:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:22.988 07:07:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:22.988 07:07:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:22.988 07:07:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:22.988 07:07:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:22.988 07:07:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:22.988 07:07:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:22.988 07:07:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:22.988 07:07:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:11:22.988 07:07:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:11:22.988 07:07:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:22.988 07:07:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:22.988 07:07:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:22.988 07:07:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:22.988 07:07:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:22.988 07:07:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:11:22.988 07:07:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:22.988 07:07:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:22.988 07:07:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:22.988 07:07:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:22.988 07:07:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:22.988 07:07:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:22.988 07:07:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:11:22.988 07:07:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:22.988 07:07:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:11:22.988 07:07:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:22.988 07:07:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:22.988 07:07:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:22.988 07:07:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:11:22.988 07:07:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:22.988 07:07:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:22.988 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:22.988 07:07:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:22.988 07:07:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:22.988 07:07:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:22.988 07:07:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:11:22.988 07:07:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:11:22.988 07:07:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:22.988 07:07:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:22.988 07:07:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:22.988 07:07:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:22.989 07:07:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:22.989 07:07:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:22.989 07:07:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:22.989 07:07:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:22.989 07:07:27 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:22.989 07:07:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:22.989 07:07:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:11:22.989 07:07:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:29.558 07:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:29.558 07:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:11:29.558 07:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:29.558 07:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:29.558 07:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:29.558 07:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:29.558 07:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:29.558 07:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:11:29.558 07:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:29.558 07:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:11:29.558 07:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:11:29.558 07:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:11:29.558 07:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:11:29.558 07:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:11:29.558 07:07:33 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:11:29.558 07:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:29.558 07:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:29.558 07:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:29.558 07:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:29.558 07:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:29.558 07:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:29.558 07:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:29.558 07:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:29.558 07:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:29.558 07:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:29.558 07:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:29.558 07:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:29.558 07:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:29.558 07:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:29.558 07:07:33 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:29.558 07:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:29.558 07:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:29.559 07:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:29.559 07:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:29.559 07:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:11:29.559 Found 0000:86:00.0 (0x8086 - 0x159b) 00:11:29.559 07:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:29.559 07:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:29.559 07:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:29.559 07:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:29.559 07:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:29.559 07:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:29.559 07:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:11:29.559 Found 0000:86:00.1 (0x8086 - 0x159b) 00:11:29.559 07:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:29.559 07:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:29.559 07:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:29.559 07:07:33 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:29.559 07:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:29.559 07:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:29.559 07:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:29.559 07:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:29.559 07:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:29.559 07:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:29.559 07:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:29.559 07:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:29.559 07:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:29.559 07:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:29.559 07:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:29.559 07:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:11:29.559 Found net devices under 0000:86:00.0: cvl_0_0 00:11:29.559 07:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:29.559 07:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:29.559 07:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:29.559 
07:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:29.559 07:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:29.559 07:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:29.559 07:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:29.559 07:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:29.559 07:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:11:29.559 Found net devices under 0000:86:00.1: cvl_0_1 00:11:29.559 07:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:29.559 07:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:29.559 07:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # is_hw=yes 00:11:29.559 07:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:29.559 07:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:29.559 07:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:29.559 07:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:29.559 07:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:29.559 07:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:29.559 07:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:29.559 07:07:33 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:29.559 07:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:29.559 07:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:29.559 07:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:29.559 07:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:29.559 07:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:29.559 07:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:29.559 07:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:29.559 07:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:29.559 07:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:29.559 07:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:29.559 07:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:29.559 07:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:29.559 07:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:29.559 07:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:29.559 07:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns 
exec cvl_0_0_ns_spdk ip link set lo up 00:11:29.559 07:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:29.559 07:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:29.559 07:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:29.559 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:29.559 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.183 ms 00:11:29.559 00:11:29.559 --- 10.0.0.2 ping statistics --- 00:11:29.559 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:29.559 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:11:29.559 07:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:29.559 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:29.559 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.193 ms 00:11:29.559 00:11:29.559 --- 10.0.0.1 ping statistics --- 00:11:29.559 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:29.559 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:11:29.559 07:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:29.559 07:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # return 0 00:11:29.559 07:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:29.559 07:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:29.559 07:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:29.559 07:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:29.559 07:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:29.559 07:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:29.559 07:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:29.559 07:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:11:29.559 07:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:29.559 07:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:29.560 07:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:29.560 07:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=1119241 00:11:29.560 07:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # 
waitforlisten 1119241 00:11:29.560 07:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:29.560 07:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@833 -- # '[' -z 1119241 ']' 00:11:29.560 07:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:29.560 07:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:29.560 07:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:29.560 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:29.560 07:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:29.560 07:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:29.560 [2024-11-20 07:07:33.520485] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 00:11:29.560 [2024-11-20 07:07:33.520528] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:29.560 [2024-11-20 07:07:33.598371] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:29.560 [2024-11-20 07:07:33.639156] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:29.560 [2024-11-20 07:07:33.639197] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:11:29.560 [2024-11-20 07:07:33.639204] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:29.560 [2024-11-20 07:07:33.639209] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:29.560 [2024-11-20 07:07:33.639214] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:29.560 [2024-11-20 07:07:33.640772] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:29.560 [2024-11-20 07:07:33.640881] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:29.560 [2024-11-20 07:07:33.640996] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:29.560 [2024-11-20 07:07:33.640997] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:29.560 07:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:29.560 07:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@866 -- # return 0 00:11:29.560 07:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:29.560 07:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:29.560 07:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:29.560 07:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:29.560 07:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:11:29.560 07:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:11:29.560 07:07:33 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:11:29.560 07:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:11:29.560 07:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:11:29.560 "nvmf_tgt_1" 00:11:29.560 07:07:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:11:29.560 "nvmf_tgt_2" 00:11:29.560 07:07:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:11:29.560 07:07:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:11:29.821 07:07:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:11:29.821 07:07:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:11:29.821 true 00:11:29.821 07:07:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:11:30.081 true 00:11:30.081 07:07:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:11:30.081 07:07:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:11:30.081 07:07:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:11:30.081 07:07:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:11:30.081 07:07:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:11:30.081 07:07:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:30.081 07:07:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:11:30.081 07:07:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:30.081 07:07:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:11:30.081 07:07:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:30.081 07:07:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:30.081 rmmod nvme_tcp 00:11:30.081 rmmod nvme_fabrics 00:11:30.081 rmmod nvme_keyring 00:11:30.081 07:07:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:30.081 07:07:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:11:30.081 07:07:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:11:30.081 07:07:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 1119241 ']' 00:11:30.081 07:07:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 1119241 00:11:30.081 07:07:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@952 -- # '[' -z 1119241 ']' 00:11:30.081 07:07:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # kill -0 1119241 00:11:30.081 07:07:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@957 -- # uname 00:11:30.081 07:07:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:30.081 07:07:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1119241 00:11:30.340 07:07:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:30.340 07:07:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:30.340 07:07:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1119241' 00:11:30.340 killing process with pid 1119241 00:11:30.340 07:07:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@971 -- # kill 1119241 00:11:30.340 07:07:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@976 -- # wait 1119241 00:11:30.340 07:07:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:30.341 07:07:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:30.341 07:07:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:30.341 07:07:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:11:30.341 07:07:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:30.341 07:07:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-save 00:11:30.341 07:07:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-restore 00:11:30.341 07:07:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:30.341 07:07:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:30.341 07:07:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:11:30.341 07:07:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:30.341 07:07:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:32.878 07:07:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:32.878 00:11:32.878 real 0m9.606s 00:11:32.878 user 0m7.222s 00:11:32.878 sys 0m4.860s 00:11:32.878 07:07:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:32.878 07:07:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:32.878 ************************************ 00:11:32.878 END TEST nvmf_multitarget 00:11:32.878 ************************************ 00:11:32.878 07:07:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:11:32.878 07:07:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:11:32.878 07:07:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:32.879 07:07:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:32.879 ************************************ 00:11:32.879 START TEST nvmf_rpc 00:11:32.879 ************************************ 00:11:32.879 07:07:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:11:32.879 * Looking for test storage... 
00:11:32.879 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:32.879 07:07:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:32.879 07:07:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:11:32.879 07:07:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:32.879 07:07:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:32.879 07:07:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:32.879 07:07:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:32.879 07:07:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:32.879 07:07:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:11:32.879 07:07:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:11:32.879 07:07:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:11:32.879 07:07:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:11:32.879 07:07:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:11:32.879 07:07:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:11:32.879 07:07:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:11:32.879 07:07:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:32.879 07:07:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:11:32.879 07:07:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:11:32.879 07:07:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:32.879 07:07:37 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:32.879 07:07:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:11:32.879 07:07:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:11:32.879 07:07:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:32.879 07:07:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:11:32.879 07:07:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:11:32.879 07:07:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:11:32.879 07:07:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:11:32.879 07:07:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:32.879 07:07:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:11:32.879 07:07:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:11:32.879 07:07:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:32.879 07:07:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:32.879 07:07:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:11:32.879 07:07:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:32.879 07:07:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:32.879 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:32.879 --rc genhtml_branch_coverage=1 00:11:32.879 --rc genhtml_function_coverage=1 00:11:32.879 --rc genhtml_legend=1 00:11:32.879 --rc geninfo_all_blocks=1 00:11:32.879 --rc geninfo_unexecuted_blocks=1 
00:11:32.879 00:11:32.879 ' 00:11:32.879 07:07:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:32.879 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:32.879 --rc genhtml_branch_coverage=1 00:11:32.879 --rc genhtml_function_coverage=1 00:11:32.879 --rc genhtml_legend=1 00:11:32.879 --rc geninfo_all_blocks=1 00:11:32.879 --rc geninfo_unexecuted_blocks=1 00:11:32.879 00:11:32.879 ' 00:11:32.879 07:07:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:32.879 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:32.879 --rc genhtml_branch_coverage=1 00:11:32.879 --rc genhtml_function_coverage=1 00:11:32.879 --rc genhtml_legend=1 00:11:32.879 --rc geninfo_all_blocks=1 00:11:32.879 --rc geninfo_unexecuted_blocks=1 00:11:32.879 00:11:32.879 ' 00:11:32.879 07:07:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:32.879 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:32.879 --rc genhtml_branch_coverage=1 00:11:32.879 --rc genhtml_function_coverage=1 00:11:32.879 --rc genhtml_legend=1 00:11:32.879 --rc geninfo_all_blocks=1 00:11:32.879 --rc geninfo_unexecuted_blocks=1 00:11:32.879 00:11:32.879 ' 00:11:32.879 07:07:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:32.879 07:07:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:11:32.879 07:07:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:32.879 07:07:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:32.879 07:07:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:32.879 07:07:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:32.879 07:07:37 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:32.879 07:07:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:32.879 07:07:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:32.879 07:07:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:32.879 07:07:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:32.879 07:07:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:32.879 07:07:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:11:32.879 07:07:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:11:32.879 07:07:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:32.879 07:07:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:32.879 07:07:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:32.879 07:07:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:32.879 07:07:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:32.879 07:07:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:11:32.879 07:07:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:32.879 07:07:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:32.879 07:07:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:32.879 07:07:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:32.879 07:07:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:32.879 07:07:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:32.879 07:07:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:11:32.879 07:07:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:32.879 07:07:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:11:32.879 07:07:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:32.879 07:07:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:32.879 07:07:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:32.879 07:07:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:32.879 07:07:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:32.879 07:07:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:32.879 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:32.879 07:07:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:32.879 07:07:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:32.880 07:07:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:32.880 07:07:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:11:32.880 07:07:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:11:32.880 07:07:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:32.880 07:07:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:32.880 07:07:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:32.880 07:07:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:32.880 07:07:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:32.880 07:07:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:32.880 07:07:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:32.880 07:07:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:32.880 07:07:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:32.880 07:07:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:32.880 07:07:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:11:32.880 07:07:37 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:39.460 07:07:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:39.460 07:07:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:11:39.460 07:07:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:39.460 07:07:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:39.460 07:07:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:39.460 07:07:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:39.460 07:07:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:39.460 07:07:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:11:39.460 07:07:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:39.460 07:07:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:11:39.460 07:07:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:11:39.460 07:07:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:11:39.460 07:07:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:11:39.460 07:07:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:11:39.460 07:07:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:11:39.460 07:07:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:39.460 07:07:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:39.460 07:07:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:39.460 
07:07:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:39.460 07:07:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:39.460 07:07:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:39.460 07:07:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:39.460 07:07:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:39.460 07:07:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:39.460 07:07:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:39.460 07:07:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:39.460 07:07:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:39.460 07:07:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:39.460 07:07:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:39.460 07:07:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:39.460 07:07:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:39.461 07:07:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:39.461 07:07:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:39.461 07:07:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:39.461 07:07:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 
(0x8086 - 0x159b)' 00:11:39.461 Found 0000:86:00.0 (0x8086 - 0x159b) 00:11:39.461 07:07:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:39.461 07:07:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:39.461 07:07:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:39.461 07:07:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:39.461 07:07:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:39.461 07:07:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:39.461 07:07:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:11:39.461 Found 0000:86:00.1 (0x8086 - 0x159b) 00:11:39.461 07:07:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:39.461 07:07:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:39.461 07:07:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:39.461 07:07:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:39.461 07:07:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:39.461 07:07:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:39.461 07:07:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:39.461 07:07:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:39.461 07:07:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:39.461 07:07:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:11:39.461 07:07:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:39.461 07:07:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:39.461 07:07:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:39.461 07:07:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:39.461 07:07:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:39.461 07:07:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:11:39.461 Found net devices under 0000:86:00.0: cvl_0_0 00:11:39.461 07:07:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:39.461 07:07:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:39.461 07:07:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:39.461 07:07:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:39.461 07:07:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:39.461 07:07:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:39.461 07:07:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:39.461 07:07:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:39.461 07:07:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:11:39.461 Found net devices under 0000:86:00.1: cvl_0_1 00:11:39.461 07:07:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:39.461 07:07:42 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:39.461 07:07:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # is_hw=yes 00:11:39.461 07:07:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:39.461 07:07:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:39.461 07:07:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:39.461 07:07:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:39.461 07:07:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:39.461 07:07:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:39.461 07:07:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:39.461 07:07:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:39.461 07:07:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:39.461 07:07:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:39.461 07:07:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:39.461 07:07:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:39.461 07:07:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:39.461 07:07:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:39.461 07:07:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:39.461 07:07:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:39.461 
07:07:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:39.461 07:07:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:39.461 07:07:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:39.461 07:07:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:39.461 07:07:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:39.461 07:07:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:39.461 07:07:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:39.461 07:07:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:39.461 07:07:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:39.461 07:07:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:39.461 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:39.461 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.381 ms 00:11:39.461 00:11:39.461 --- 10.0.0.2 ping statistics --- 00:11:39.461 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:39.461 rtt min/avg/max/mdev = 0.381/0.381/0.381/0.000 ms 00:11:39.461 07:07:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:39.461 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:39.461 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.206 ms 00:11:39.461 00:11:39.461 --- 10.0.0.1 ping statistics --- 00:11:39.461 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:39.461 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:11:39.461 07:07:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:39.461 07:07:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # return 0 00:11:39.461 07:07:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:39.461 07:07:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:39.461 07:07:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:39.461 07:07:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:39.461 07:07:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:39.461 07:07:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:39.461 07:07:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:39.461 07:07:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:11:39.461 07:07:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:39.461 07:07:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:39.461 07:07:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:39.461 07:07:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=1122893 00:11:39.461 07:07:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 1122893 00:11:39.461 07:07:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:39.461 07:07:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@833 -- # '[' -z 1122893 ']' 00:11:39.461 07:07:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:39.461 07:07:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:39.461 07:07:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:39.461 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:39.461 07:07:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:39.461 07:07:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:39.461 [2024-11-20 07:07:43.147450] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 00:11:39.461 [2024-11-20 07:07:43.147499] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:39.461 [2024-11-20 07:07:43.229321] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:39.461 [2024-11-20 07:07:43.273983] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:39.462 [2024-11-20 07:07:43.274022] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
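The network bring-up recorded above (namespace creation, interface move, addressing, firewall rule, cross-namespace pings) can be condensed into a short sketch. This is a hypothetical dry-run reconstruction of the sequence visible in the trace, not the harness's actual `nvmftestinit` code: the commands are echoed rather than executed, since `ip` and `iptables` need root and real NICs. The interface names `cvl_0_0`/`cvl_0_1`, namespace name, and 10.0.0.0/24 addressing are taken from the trace itself.

```shell
#!/bin/sh
# Dry-run sketch (assumed reconstruction) of the target-namespace setup
# seen in the trace. "run" prints each command instead of executing it.
run() { echo "+ $*"; }

TARGET_IF=cvl_0_0        # interface moved into the SPDK target namespace
INITIATOR_IF=cvl_0_1     # interface left in the default namespace
NS=cvl_0_0_ns_spdk

run ip netns add "$NS"
run ip link set "$TARGET_IF" netns "$NS"
run ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
run ip link set "$INITIATOR_IF" up
run ip netns exec "$NS" ip link set "$TARGET_IF" up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                        # initiator -> target
run ip netns exec "$NS" ping -c 1 10.0.0.1    # target -> initiator
```

Keeping the target in its own namespace while the initiator stays in the default one is what lets a single host exercise real TCP traffic between two physical ports, as the two successful pings in the log confirm.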
00:11:39.462 [2024-11-20 07:07:43.274029] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:39.462 [2024-11-20 07:07:43.274035] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:39.462 [2024-11-20 07:07:43.274040] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:39.462 [2024-11-20 07:07:43.275522] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:39.462 [2024-11-20 07:07:43.275637] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:39.462 [2024-11-20 07:07:43.275744] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:39.462 [2024-11-20 07:07:43.275744] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:39.462 07:07:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:39.462 07:07:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@866 -- # return 0 00:11:39.462 07:07:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:39.462 07:07:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:39.462 07:07:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:39.462 07:07:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:39.462 07:07:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:11:39.462 07:07:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.462 07:07:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:39.462 07:07:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.462 07:07:43 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:11:39.462 "tick_rate": 2300000000, 00:11:39.462 "poll_groups": [ 00:11:39.462 { 00:11:39.462 "name": "nvmf_tgt_poll_group_000", 00:11:39.462 "admin_qpairs": 0, 00:11:39.462 "io_qpairs": 0, 00:11:39.462 "current_admin_qpairs": 0, 00:11:39.462 "current_io_qpairs": 0, 00:11:39.462 "pending_bdev_io": 0, 00:11:39.462 "completed_nvme_io": 0, 00:11:39.462 "transports": [] 00:11:39.462 }, 00:11:39.462 { 00:11:39.462 "name": "nvmf_tgt_poll_group_001", 00:11:39.462 "admin_qpairs": 0, 00:11:39.462 "io_qpairs": 0, 00:11:39.462 "current_admin_qpairs": 0, 00:11:39.462 "current_io_qpairs": 0, 00:11:39.462 "pending_bdev_io": 0, 00:11:39.462 "completed_nvme_io": 0, 00:11:39.462 "transports": [] 00:11:39.462 }, 00:11:39.462 { 00:11:39.462 "name": "nvmf_tgt_poll_group_002", 00:11:39.462 "admin_qpairs": 0, 00:11:39.462 "io_qpairs": 0, 00:11:39.462 "current_admin_qpairs": 0, 00:11:39.462 "current_io_qpairs": 0, 00:11:39.462 "pending_bdev_io": 0, 00:11:39.462 "completed_nvme_io": 0, 00:11:39.462 "transports": [] 00:11:39.462 }, 00:11:39.462 { 00:11:39.462 "name": "nvmf_tgt_poll_group_003", 00:11:39.462 "admin_qpairs": 0, 00:11:39.462 "io_qpairs": 0, 00:11:39.462 "current_admin_qpairs": 0, 00:11:39.462 "current_io_qpairs": 0, 00:11:39.462 "pending_bdev_io": 0, 00:11:39.462 "completed_nvme_io": 0, 00:11:39.462 "transports": [] 00:11:39.462 } 00:11:39.462 ] 00:11:39.462 }' 00:11:39.462 07:07:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:11:39.462 07:07:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:11:39.462 07:07:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:11:39.462 07:07:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:11:39.462 07:07:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:11:39.462 07:07:43 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:11:39.462 07:07:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:11:39.462 07:07:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:39.462 07:07:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.462 07:07:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:39.462 [2024-11-20 07:07:43.531238] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:39.462 07:07:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.462 07:07:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:11:39.462 07:07:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.462 07:07:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:39.462 07:07:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.462 07:07:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:11:39.462 "tick_rate": 2300000000, 00:11:39.462 "poll_groups": [ 00:11:39.462 { 00:11:39.462 "name": "nvmf_tgt_poll_group_000", 00:11:39.462 "admin_qpairs": 0, 00:11:39.462 "io_qpairs": 0, 00:11:39.462 "current_admin_qpairs": 0, 00:11:39.462 "current_io_qpairs": 0, 00:11:39.462 "pending_bdev_io": 0, 00:11:39.462 "completed_nvme_io": 0, 00:11:39.462 "transports": [ 00:11:39.462 { 00:11:39.462 "trtype": "TCP" 00:11:39.462 } 00:11:39.462 ] 00:11:39.462 }, 00:11:39.462 { 00:11:39.462 "name": "nvmf_tgt_poll_group_001", 00:11:39.462 "admin_qpairs": 0, 00:11:39.462 "io_qpairs": 0, 00:11:39.462 "current_admin_qpairs": 0, 00:11:39.462 "current_io_qpairs": 0, 00:11:39.462 "pending_bdev_io": 0, 00:11:39.462 
"completed_nvme_io": 0, 00:11:39.462 "transports": [ 00:11:39.462 { 00:11:39.462 "trtype": "TCP" 00:11:39.462 } 00:11:39.462 ] 00:11:39.462 }, 00:11:39.462 { 00:11:39.462 "name": "nvmf_tgt_poll_group_002", 00:11:39.462 "admin_qpairs": 0, 00:11:39.462 "io_qpairs": 0, 00:11:39.462 "current_admin_qpairs": 0, 00:11:39.462 "current_io_qpairs": 0, 00:11:39.462 "pending_bdev_io": 0, 00:11:39.462 "completed_nvme_io": 0, 00:11:39.462 "transports": [ 00:11:39.462 { 00:11:39.462 "trtype": "TCP" 00:11:39.462 } 00:11:39.462 ] 00:11:39.462 }, 00:11:39.462 { 00:11:39.462 "name": "nvmf_tgt_poll_group_003", 00:11:39.462 "admin_qpairs": 0, 00:11:39.462 "io_qpairs": 0, 00:11:39.462 "current_admin_qpairs": 0, 00:11:39.462 "current_io_qpairs": 0, 00:11:39.462 "pending_bdev_io": 0, 00:11:39.462 "completed_nvme_io": 0, 00:11:39.462 "transports": [ 00:11:39.462 { 00:11:39.462 "trtype": "TCP" 00:11:39.462 } 00:11:39.462 ] 00:11:39.462 } 00:11:39.462 ] 00:11:39.462 }' 00:11:39.462 07:07:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:11:39.462 07:07:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:11:39.462 07:07:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:39.462 07:07:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:11:39.462 07:07:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:11:39.462 07:07:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:11:39.462 07:07:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:11:39.462 07:07:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:11:39.462 07:07:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:39.462 
07:07:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:11:39.462 07:07:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:11:39.462 07:07:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:11:39.462 07:07:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:11:39.462 07:07:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:11:39.462 07:07:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.462 07:07:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:39.462 Malloc1 00:11:39.462 07:07:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.462 07:07:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:39.462 07:07:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.462 07:07:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:39.462 07:07:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.462 07:07:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:39.462 07:07:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.462 07:07:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:39.463 07:07:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.463 07:07:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:11:39.463 07:07:43 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.463 07:07:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:39.463 07:07:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.463 07:07:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:39.463 07:07:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.463 07:07:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:39.463 [2024-11-20 07:07:43.712173] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:39.463 07:07:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.463 07:07:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:11:39.463 07:07:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:11:39.463 07:07:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:11:39.463 07:07:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:11:39.463 07:07:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t 
"$arg")" in 00:11:39.463 07:07:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:11:39.463 07:07:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:39.463 07:07:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:11:39.463 07:07:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:39.463 07:07:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:11:39.463 07:07:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:11:39.463 07:07:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:11:39.463 [2024-11-20 07:07:43.746854] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562' 00:11:39.463 Failed to write to /dev/nvme-fabrics: Input/output error 00:11:39.463 could not add new controller: failed to write to nvme-fabrics device 00:11:39.463 07:07:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:11:39.463 07:07:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:39.463 07:07:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:39.463 07:07:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:39.463 07:07:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:11:39.463 07:07:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.463 07:07:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:39.463 07:07:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.463 07:07:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:40.400 07:07:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:11:40.400 07:07:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:11:40.400 07:07:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:11:40.400 07:07:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:11:40.400 07:07:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:11:42.935 07:07:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:11:42.935 07:07:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:11:42.935 07:07:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:11:42.935 07:07:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:11:42.935 07:07:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:11:42.935 07:07:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 
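The `waitforserial` loop that just completed above polls `lsblk -l -o NAME,SERIAL` up to 15 times, two seconds apart, until a block device with the subsystem's serial (`SPDKISFASTANDAWESOME`) appears. A minimal sketch of that polling pattern follows; `list_devices` is a stand-in for `lsblk` (an assumption, so the sketch runs without an NVMe device attached), and the retry limit and sleep interval mirror the trace.

```shell
#!/bin/sh
# Sketch of the waitforserial polling pattern from the trace.
list_devices() {
    # stand-in for: lsblk -l -o NAME,SERIAL
    printf 'nvme0n1 SPDKISFASTANDAWESOME\n'
}

waitforserial() {
    serial=$1
    expected=${2:-1}            # number of matching devices to wait for
    i=0
    while [ $((i += 1)) -le 15 ]; do
        found=$(list_devices | grep -c "$serial")
        [ "$found" -ge "$expected" ] && return 0
        sleep 2                 # same 2s retry cadence as the harness
    done
    return 1                    # timed out: device never appeared
}

waitforserial SPDKISFASTANDAWESOME && echo "serial found"
```

In the real run the first `nvme connect` fails with "does not allow host" until `nvmf_subsystem_add_host` registers the initiator's hostnqn, after which this loop sees the namespace appear as a block device.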
00:11:42.935 07:07:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:42.935 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:42.935 07:07:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:42.935 07:07:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:11:42.935 07:07:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:11:42.935 07:07:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:42.935 07:07:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:11:42.935 07:07:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:42.935 07:07:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:11:42.935 07:07:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:11:42.935 07:07:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.935 07:07:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:42.935 07:07:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.935 07:07:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:42.935 07:07:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:11:42.935 07:07:47 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:42.935 07:07:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:11:42.935 07:07:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:42.935 07:07:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:11:42.935 07:07:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:42.935 07:07:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:11:42.935 07:07:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:42.935 07:07:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:11:42.935 07:07:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:11:42.935 07:07:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:42.935 [2024-11-20 07:07:47.072821] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562' 00:11:42.935 Failed to write to /dev/nvme-fabrics: Input/output error 00:11:42.935 could not add new controller: failed to write to nvme-fabrics device 00:11:42.935 07:07:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:11:42.935 
07:07:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:42.935 07:07:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:42.935 07:07:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:42.935 07:07:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:11:42.935 07:07:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.935 07:07:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:42.935 07:07:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.935 07:07:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:43.873 07:07:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:11:43.873 07:07:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:11:43.873 07:07:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:11:43.873 07:07:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:11:43.873 07:07:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:11:45.796 07:07:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:11:45.796 07:07:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:11:45.796 07:07:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c 
SPDKISFASTANDAWESOME 00:11:45.796 07:07:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:11:45.796 07:07:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:11:45.796 07:07:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:11:45.796 07:07:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:45.796 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:45.796 07:07:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:45.796 07:07:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:11:45.796 07:07:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:11:45.796 07:07:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:45.796 07:07:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:11:45.796 07:07:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:45.796 07:07:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:11:45.796 07:07:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:45.796 07:07:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.796 07:07:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:45.796 07:07:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.796 07:07:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:11:45.796 07:07:50 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:45.796 07:07:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:45.796 07:07:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.796 07:07:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:45.796 07:07:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.796 07:07:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:45.797 07:07:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.797 07:07:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:46.055 [2024-11-20 07:07:50.348056] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:46.055 07:07:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.055 07:07:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:46.055 07:07:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.055 07:07:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:46.055 07:07:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.055 07:07:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:46.055 07:07:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.055 07:07:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:11:46.055 07:07:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.055 07:07:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:46.991 07:07:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:46.991 07:07:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:11:46.991 07:07:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:11:46.991 07:07:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:11:46.991 07:07:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:11:49.525 07:07:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:11:49.525 07:07:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:11:49.525 07:07:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:11:49.525 07:07:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:11:49.525 07:07:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:11:49.525 07:07:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:11:49.526 07:07:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:49.526 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:49.526 07:07:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:49.526 07:07:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:11:49.526 07:07:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:11:49.526 07:07:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:49.526 07:07:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:11:49.526 07:07:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:49.526 07:07:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:11:49.526 07:07:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:49.526 07:07:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.526 07:07:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:49.526 07:07:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.526 07:07:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:49.526 07:07:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.526 07:07:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:49.526 07:07:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.526 07:07:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:49.526 07:07:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:49.526 
07:07:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.526 07:07:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:49.526 07:07:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.526 07:07:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:49.526 07:07:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.526 07:07:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:49.526 [2024-11-20 07:07:53.607655] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:49.526 07:07:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.526 07:07:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:49.526 07:07:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.526 07:07:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:49.526 07:07:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.526 07:07:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:49.526 07:07:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.526 07:07:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:49.526 07:07:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.526 07:07:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:50.461 07:07:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:50.461 07:07:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:11:50.461 07:07:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:11:50.461 07:07:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:11:50.461 07:07:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:11:52.367 07:07:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:11:52.367 07:07:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:11:52.367 07:07:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:11:52.367 07:07:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:11:52.367 07:07:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:11:52.367 07:07:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:11:52.367 07:07:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:52.367 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:52.367 07:07:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:52.367 07:07:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:11:52.367 07:07:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:11:52.367 07:07:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:52.367 07:07:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:11:52.367 07:07:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:52.367 07:07:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:11:52.367 07:07:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:52.367 07:07:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.367 07:07:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:52.367 07:07:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.367 07:07:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:52.367 07:07:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.367 07:07:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:52.367 07:07:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.367 07:07:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:52.367 07:07:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:52.367 07:07:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.367 07:07:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:52.367 07:07:56 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.367 07:07:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:52.367 07:07:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.367 07:07:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:52.367 [2024-11-20 07:07:56.913793] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:52.626 07:07:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.626 07:07:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:52.626 07:07:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.626 07:07:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:52.626 07:07:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.626 07:07:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:52.626 07:07:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.626 07:07:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:52.626 07:07:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.626 07:07:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:54.003 07:07:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc 
-- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:54.003 07:07:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:11:54.003 07:07:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:11:54.003 07:07:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:11:54.003 07:07:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:11:55.907 07:08:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:11:55.907 07:08:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:11:55.907 07:08:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:11:55.907 07:08:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:11:55.907 07:08:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:11:55.907 07:08:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:11:55.907 07:08:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:55.907 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:55.907 07:08:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:55.907 07:08:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:11:55.907 07:08:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:11:55.907 07:08:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:55.907 07:08:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:11:55.907 07:08:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:55.907 07:08:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:11:55.907 07:08:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:55.907 07:08:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.907 07:08:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:55.907 07:08:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.907 07:08:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:55.907 07:08:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.907 07:08:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:55.907 07:08:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.907 07:08:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:55.907 07:08:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:55.907 07:08:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.907 07:08:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:55.907 07:08:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.907 07:08:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
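Each pass of the `seq 1 5` loop in the trace exercises the same subsystem lifecycle: create the subsystem, add a TCP listener, attach the `Malloc1` namespace, open it to any host, connect and disconnect with nvme-cli, then remove the namespace and delete the subsystem. A hedged dry-run sketch of one iteration follows; commands are echoed rather than executed, since `rpc.py` and `nvme connect` need a live SPDK target and root privileges. The `run` wrapper and the bare `rpc.py` invocation are assumptions for illustration; the RPC names and arguments are taken verbatim from the trace:

```shell
#!/usr/bin/env bash

NQN=nqn.2016-06.io.spdk:cnode1

# Dry-run wrapper: print each command instead of running it.
# Swap in `run() { "$@"; }` to execute against a live target.
run() { echo "$@"; }

# One iteration of the create/connect/teardown cycle from target/rpc.sh
run rpc.py nvmf_create_subsystem "$NQN" -s SPDKISFASTANDAWESOME
run rpc.py nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
run rpc.py nvmf_subsystem_add_ns "$NQN" Malloc1 -n 5
run rpc.py nvmf_subsystem_allow_any_host "$NQN"
run nvme connect -t tcp -n "$NQN" -a 10.0.0.2 -s 4420
run nvme disconnect -n "$NQN"
run rpc.py nvmf_subsystem_remove_ns "$NQN" 5
run rpc.py nvmf_delete_subsystem "$NQN"
```

Note that `allow_any_host` is what lets the initiator in without an explicit `nvmf_subsystem_add_host` entry; the earlier `NOT nvme connect` step in the trace shows the connection being refused when neither is configured.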
00:11:55.907 07:08:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.907 07:08:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:55.907 [2024-11-20 07:08:00.271019] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:55.907 07:08:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.907 07:08:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:55.907 07:08:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.907 07:08:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:55.907 07:08:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.907 07:08:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:55.907 07:08:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.907 07:08:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:55.907 07:08:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.907 07:08:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:57.284 07:08:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:57.284 07:08:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:11:57.284 07:08:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 
-- # local nvme_device_counter=1 nvme_devices=0 00:11:57.284 07:08:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:11:57.284 07:08:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:11:59.189 07:08:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:11:59.189 07:08:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:11:59.189 07:08:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:11:59.189 07:08:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:11:59.189 07:08:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:11:59.189 07:08:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:11:59.189 07:08:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:59.189 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:59.189 07:08:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:59.189 07:08:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:11:59.189 07:08:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:11:59.189 07:08:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:59.189 07:08:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:11:59.189 07:08:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:59.189 07:08:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1233 -- # return 0 00:11:59.189 07:08:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:59.189 07:08:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.189 07:08:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:59.189 07:08:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.189 07:08:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:59.189 07:08:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.189 07:08:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:59.189 07:08:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.189 07:08:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:59.189 07:08:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:59.189 07:08:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.189 07:08:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:59.189 07:08:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.189 07:08:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:59.189 07:08:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.189 07:08:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:59.189 [2024-11-20 07:08:03.615405] 
tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:59.189 07:08:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.189 07:08:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:59.189 07:08:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.189 07:08:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:59.189 07:08:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.189 07:08:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:59.189 07:08:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.189 07:08:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:59.189 07:08:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.189 07:08:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:00.566 07:08:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:00.566 07:08:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:12:00.566 07:08:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:12:00.566 07:08:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:12:00.566 07:08:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # 
sleep 2 00:12:02.471 07:08:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:12:02.471 07:08:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:12:02.471 07:08:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:12:02.471 07:08:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:12:02.471 07:08:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:12:02.471 07:08:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:12:02.471 07:08:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:02.471 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:02.472 07:08:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:02.472 07:08:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:12:02.472 07:08:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:12:02.472 07:08:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:02.472 07:08:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:12:02.472 07:08:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:02.472 07:08:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:12:02.472 07:08:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:02.472 07:08:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.472 07:08:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:02.472 07:08:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.472 07:08:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:02.472 07:08:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.472 07:08:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:02.472 07:08:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.472 07:08:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:12:02.472 07:08:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:02.472 07:08:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:02.472 07:08:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.472 07:08:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:02.472 07:08:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.472 07:08:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:02.472 07:08:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.472 07:08:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:02.472 [2024-11-20 07:08:06.920563] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:02.472 07:08:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.472 07:08:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:02.472 07:08:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.472 07:08:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:02.472 07:08:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.472 07:08:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:02.472 07:08:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.472 07:08:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:02.472 07:08:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.472 07:08:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:02.472 07:08:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.472 07:08:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:02.472 07:08:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.472 07:08:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:02.472 07:08:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.472 07:08:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:02.472 07:08:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.472 07:08:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 
-- # for i in $(seq 1 $loops) 00:12:02.472 07:08:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:02.472 07:08:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.472 07:08:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:02.472 07:08:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.472 07:08:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:02.472 07:08:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.472 07:08:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:02.472 [2024-11-20 07:08:06.968692] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:02.472 07:08:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.472 07:08:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:02.472 07:08:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.472 07:08:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:02.472 07:08:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.472 07:08:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:02.472 07:08:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.472 07:08:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:02.472 
07:08:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.472 07:08:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:02.472 07:08:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.472 07:08:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:02.472 07:08:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.472 07:08:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:02.472 07:08:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.472 07:08:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:02.472 07:08:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.472 07:08:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:02.472 07:08:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:02.472 07:08:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.472 07:08:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:02.472 07:08:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.472 07:08:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:02.472 07:08:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.472 07:08:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- 
# set +x 00:12:02.472 [2024-11-20 07:08:07.016821] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:02.472 07:08:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.472 07:08:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:02.732 07:08:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.732 07:08:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:02.732 07:08:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.732 07:08:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:02.732 07:08:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.732 07:08:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:02.732 07:08:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.732 07:08:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:02.732 07:08:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.732 07:08:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:02.732 07:08:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.732 07:08:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:02.732 07:08:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.732 07:08:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:12:02.732 07:08:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.732 07:08:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:02.732 07:08:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:02.732 07:08:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.732 07:08:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:02.732 07:08:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.732 07:08:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:02.732 07:08:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.732 07:08:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:02.732 [2024-11-20 07:08:07.065002] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:02.732 07:08:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.732 07:08:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:02.732 07:08:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.732 07:08:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:02.732 07:08:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.732 07:08:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 
00:12:02.732 07:08:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.732 07:08:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:02.732 07:08:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.732 07:08:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:02.732 07:08:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.732 07:08:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:02.732 07:08:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.732 07:08:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:02.732 07:08:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.732 07:08:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:02.732 07:08:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.732 07:08:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:02.732 07:08:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:02.732 07:08:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.732 07:08:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:02.732 07:08:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.732 07:08:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t 
tcp -a 10.0.0.2 -s 4420 00:12:02.732 07:08:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.732 07:08:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:02.732 [2024-11-20 07:08:07.113156] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:02.732 07:08:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.732 07:08:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:02.732 07:08:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.732 07:08:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:02.732 07:08:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.732 07:08:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:02.732 07:08:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.732 07:08:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:02.732 07:08:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.732 07:08:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:02.732 07:08:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.732 07:08:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:02.732 07:08:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.732 07:08:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:02.732 07:08:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.732 07:08:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:02.732 07:08:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.732 07:08:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:12:02.732 07:08:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.732 07:08:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:02.732 07:08:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.732 07:08:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:12:02.732 "tick_rate": 2300000000, 00:12:02.732 "poll_groups": [ 00:12:02.732 { 00:12:02.732 "name": "nvmf_tgt_poll_group_000", 00:12:02.732 "admin_qpairs": 2, 00:12:02.732 "io_qpairs": 168, 00:12:02.732 "current_admin_qpairs": 0, 00:12:02.732 "current_io_qpairs": 0, 00:12:02.732 "pending_bdev_io": 0, 00:12:02.732 "completed_nvme_io": 267, 00:12:02.732 "transports": [ 00:12:02.732 { 00:12:02.732 "trtype": "TCP" 00:12:02.732 } 00:12:02.732 ] 00:12:02.732 }, 00:12:02.732 { 00:12:02.732 "name": "nvmf_tgt_poll_group_001", 00:12:02.732 "admin_qpairs": 2, 00:12:02.732 "io_qpairs": 168, 00:12:02.732 "current_admin_qpairs": 0, 00:12:02.732 "current_io_qpairs": 0, 00:12:02.732 "pending_bdev_io": 0, 00:12:02.733 "completed_nvme_io": 220, 00:12:02.733 "transports": [ 00:12:02.733 { 00:12:02.733 "trtype": "TCP" 00:12:02.733 } 00:12:02.733 ] 00:12:02.733 }, 00:12:02.733 { 00:12:02.733 "name": "nvmf_tgt_poll_group_002", 00:12:02.733 "admin_qpairs": 1, 00:12:02.733 "io_qpairs": 168, 00:12:02.733 "current_admin_qpairs": 0, 00:12:02.733 "current_io_qpairs": 0, 00:12:02.733 "pending_bdev_io": 0, 
00:12:02.733 "completed_nvme_io": 316, 00:12:02.733 "transports": [ 00:12:02.733 { 00:12:02.733 "trtype": "TCP" 00:12:02.733 } 00:12:02.733 ] 00:12:02.733 }, 00:12:02.733 { 00:12:02.733 "name": "nvmf_tgt_poll_group_003", 00:12:02.733 "admin_qpairs": 2, 00:12:02.733 "io_qpairs": 168, 00:12:02.733 "current_admin_qpairs": 0, 00:12:02.733 "current_io_qpairs": 0, 00:12:02.733 "pending_bdev_io": 0, 00:12:02.733 "completed_nvme_io": 219, 00:12:02.733 "transports": [ 00:12:02.733 { 00:12:02.733 "trtype": "TCP" 00:12:02.733 } 00:12:02.733 ] 00:12:02.733 } 00:12:02.733 ] 00:12:02.733 }' 00:12:02.733 07:08:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:12:02.733 07:08:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:02.733 07:08:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:02.733 07:08:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:02.733 07:08:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:12:02.733 07:08:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:12:02.733 07:08:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:02.733 07:08:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:02.733 07:08:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:02.733 07:08:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 672 > 0 )) 00:12:02.733 07:08:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:12:02.733 07:08:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:12:02.733 07:08:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
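The `jsum` helper traced above pipes `jq '<filter>'` into `awk '{s+=$1}END{print s}'` to sum one field across all poll groups (here 2+2+1+2 admin qpairs = 7). A jq-free approximation using only awk, run against the same stats shape as the log (values copied from the `nvmf_get_stats` output above; the field-extraction logic is an assumption, not SPDK code):

```shell
# Stats JSON reduced to the field of interest, values from the log above.
stats='{"poll_groups":[{"admin_qpairs":2},{"admin_qpairs":2},{"admin_qpairs":1},{"admin_qpairs":2}]}'

# Sum every "<key>": <n> occurrence on stdin; comma-separated records stand
# in for jq's field selection.
jsum_awk() {
    local key=$1
    awk -v k="\"$key\"" 'BEGIN{RS=","} $0 ~ k {gsub(/[^0-9]/,"",$0); s+=$0} END{print s}'
}

echo "$stats" | jsum_awk admin_qpairs   # prints 7
```

The log's own checks `(( 7 > 0 ))` and `(( 672 > 0 ))` are exactly these sums for `admin_qpairs` and `io_qpairs` (4 poll groups x 168 io qpairs = 672).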
target/rpc.sh@123 -- # nvmftestfini 00:12:02.733 07:08:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:02.733 07:08:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:12:02.733 07:08:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:02.733 07:08:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:12:02.733 07:08:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:02.733 07:08:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:02.733 rmmod nvme_tcp 00:12:02.999 rmmod nvme_fabrics 00:12:02.999 rmmod nvme_keyring 00:12:02.999 07:08:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:02.999 07:08:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:12:02.999 07:08:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:12:02.999 07:08:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 1122893 ']' 00:12:02.999 07:08:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 1122893 00:12:02.999 07:08:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@952 -- # '[' -z 1122893 ']' 00:12:02.999 07:08:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # kill -0 1122893 00:12:02.999 07:08:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@957 -- # uname 00:12:02.999 07:08:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:02.999 07:08:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1122893 00:12:02.999 07:08:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:02.999 07:08:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:02.999 07:08:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1122893' 00:12:02.999 killing process with pid 1122893 00:12:02.999 07:08:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@971 -- # kill 1122893 00:12:02.999 07:08:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@976 -- # wait 1122893 00:12:03.261 07:08:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:03.261 07:08:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:03.261 07:08:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:03.261 07:08:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:12:03.261 07:08:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-save 00:12:03.261 07:08:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:03.261 07:08:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-restore 00:12:03.261 07:08:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:03.261 07:08:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:03.261 07:08:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:03.261 07:08:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:03.261 07:08:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:05.166 07:08:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:05.166 00:12:05.166 real 0m32.688s 00:12:05.166 user 1m38.568s 00:12:05.166 sys 0m6.440s 00:12:05.166 07:08:09 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:05.166 07:08:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:05.166 ************************************ 00:12:05.166 END TEST nvmf_rpc 00:12:05.166 ************************************ 00:12:05.166 07:08:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:12:05.166 07:08:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:12:05.166 07:08:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:05.166 07:08:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:05.426 ************************************ 00:12:05.426 START TEST nvmf_invalid 00:12:05.426 ************************************ 00:12:05.426 07:08:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:12:05.426 * Looking for test storage... 
00:12:05.426 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:05.426 07:08:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:05.426 07:08:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1691 -- # lcov --version 00:12:05.426 07:08:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:05.426 07:08:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:05.426 07:08:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:05.426 07:08:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:05.426 07:08:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:05.426 07:08:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:12:05.426 07:08:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:12:05.426 07:08:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:12:05.426 07:08:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:12:05.426 07:08:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:12:05.426 07:08:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:12:05.426 07:08:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:12:05.426 07:08:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:05.426 07:08:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:12:05.426 07:08:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:12:05.426 07:08:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid 
-- scripts/common.sh@364 -- # (( v = 0 )) 00:12:05.426 07:08:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:05.426 07:08:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:12:05.426 07:08:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:12:05.426 07:08:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:05.427 07:08:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:12:05.427 07:08:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:12:05.427 07:08:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:12:05.427 07:08:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:12:05.427 07:08:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:05.427 07:08:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:12:05.427 07:08:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:12:05.427 07:08:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:05.427 07:08:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:05.427 07:08:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:12:05.427 07:08:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:05.427 07:08:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:05.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:05.427 --rc genhtml_branch_coverage=1 00:12:05.427 --rc 
genhtml_function_coverage=1 00:12:05.427 --rc genhtml_legend=1 00:12:05.427 --rc geninfo_all_blocks=1 00:12:05.427 --rc geninfo_unexecuted_blocks=1 00:12:05.427 00:12:05.427 ' 00:12:05.427 07:08:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:05.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:05.427 --rc genhtml_branch_coverage=1 00:12:05.427 --rc genhtml_function_coverage=1 00:12:05.427 --rc genhtml_legend=1 00:12:05.427 --rc geninfo_all_blocks=1 00:12:05.427 --rc geninfo_unexecuted_blocks=1 00:12:05.427 00:12:05.427 ' 00:12:05.427 07:08:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:05.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:05.427 --rc genhtml_branch_coverage=1 00:12:05.427 --rc genhtml_function_coverage=1 00:12:05.427 --rc genhtml_legend=1 00:12:05.427 --rc geninfo_all_blocks=1 00:12:05.427 --rc geninfo_unexecuted_blocks=1 00:12:05.427 00:12:05.427 ' 00:12:05.427 07:08:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:05.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:05.427 --rc genhtml_branch_coverage=1 00:12:05.427 --rc genhtml_function_coverage=1 00:12:05.427 --rc genhtml_legend=1 00:12:05.427 --rc geninfo_all_blocks=1 00:12:05.427 --rc geninfo_unexecuted_blocks=1 00:12:05.427 00:12:05.427 ' 00:12:05.427 07:08:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:05.427 07:08:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:12:05.427 07:08:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:05.427 07:08:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:05.427 07:08:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
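The `scripts/common.sh` trace above steps through `cmp_versions 1.15 '<' 2` to decide whether the installed lcov supports the newer flags: it splits each version on `.-:`, finds `ver1[0]=1 < ver2[0]=2`, and `lt` returns 0. A simplified, self-contained reimplementation of that comparison (the real helper also splits on `-` and `:`; this sketch splits on `.` only):

```shell
# Return 0 if version $1 < version $2, comparing numeric fields left to right.
lt() {
    local IFS=.
    local -a a b
    a=($1)
    b=($2)
    local i
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    done
    return 1   # versions are equal
}

lt 1.15 2 && echo "1.15 < 2"   # prints "1.15 < 2"
```

Missing fields default to 0, so `lt 2 2.0` correctly reports the versions as equal rather than erroring on the shorter array.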
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:05.427 07:08:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:05.427 07:08:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:05.427 07:08:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:05.427 07:08:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:05.427 07:08:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:05.427 07:08:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:05.427 07:08:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:05.427 07:08:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:05.427 07:08:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:12:05.427 07:08:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:05.427 07:08:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:05.427 07:08:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:05.427 07:08:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:05.427 07:08:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:05.427 07:08:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:12:05.427 07:08:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ 
-e /bin/wpdk_common.sh ]] 00:12:05.427 07:08:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:05.427 07:08:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:05.427 07:08:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:05.427 07:08:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:05.427 07:08:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:05.427 07:08:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:12:05.427 07:08:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:05.427 07:08:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:12:05.427 07:08:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:05.427 07:08:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:05.427 07:08:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:05.427 07:08:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:05.427 07:08:09 
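The PATH traced above has accumulated many duplicate `/opt/...` entries because `paths/export.sh` prepends its directories every time it is sourced. A minimal guard against that (the helper name `path_prepend` is an assumption for illustration, not part of the SPDK scripts):

```shell
# Prepend a directory to PATH only if it is not already present,
# avoiding the duplication visible in the log above.
path_prepend() {
    case ":$PATH:" in
        *":$1:"*) ;;              # already on PATH, do nothing
        *) PATH="$1:$PATH" ;;
    esac
}

PATH=/usr/bin
path_prepend /opt/go/1.21.1/bin
path_prepend /opt/go/1.21.1/bin   # second call is a no-op
echo "$PATH"                      # /opt/go/1.21.1/bin:/usr/bin
```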
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:05.427 07:08:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:05.427 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:05.427 07:08:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:05.427 07:08:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:05.427 07:08:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:05.427 07:08:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:12:05.427 07:08:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:05.427 07:08:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:12:05.427 07:08:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:12:05.427 07:08:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:12:05.427 07:08:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:12:05.427 07:08:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:05.427 07:08:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:05.427 07:08:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:05.427 07:08:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:05.427 07:08:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:05.427 07:08:09 
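The `[: : integer expression expected` message above comes from `'[' '' -eq 1 ']'`: `test`'s `-eq` requires both operands to be integers, and the variable expanded to an empty string. A minimal reproduction with a defensive fix (`SOME_FLAG` is a stand-in name, not the variable used by `nvmf/common.sh`):

```shell
SOME_FLAG=""

# This form fails with "integer expression expected":
#   [ "$SOME_FLAG" -eq 1 ]

# Defaulting the expansion keeps the comparison well-defined:
if [ "${SOME_FLAG:-0}" -eq 1 ]; then
    echo "flag set"
else
    echo "flag unset"     # printed here, since the default is 0
fi
```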
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:05.427 07:08:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:05.427 07:08:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:05.427 07:08:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:05.427 07:08:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:05.427 07:08:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:12:05.427 07:08:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:11.996 07:08:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:11.996 07:08:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:12:11.996 07:08:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:11.996 07:08:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:11.996 07:08:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:11.996 07:08:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:11.996 07:08:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:11.996 07:08:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:12:11.996 07:08:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:11.996 07:08:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:12:11.996 07:08:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:12:11.996 07:08:15 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:12:11.996 07:08:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:12:11.996 07:08:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:12:11.996 07:08:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:12:11.996 07:08:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:11.996 07:08:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:11.996 07:08:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:11.996 07:08:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:11.996 07:08:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:11.996 07:08:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:11.996 07:08:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:11.996 07:08:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:11.996 07:08:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:11.996 07:08:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:11.997 07:08:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:11.997 07:08:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:11.997 07:08:15 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:11.997 07:08:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:11.997 07:08:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:11.997 07:08:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:11.997 07:08:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:11.997 07:08:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:11.997 07:08:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:11.997 07:08:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:11.997 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:11.997 07:08:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:11.997 07:08:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:11.997 07:08:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:11.997 07:08:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:11.997 07:08:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:11.997 07:08:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:11.997 07:08:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:11.997 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:11.997 07:08:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:11.997 07:08:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice 
== unbound ]] 00:12:11.997 07:08:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:11.997 07:08:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:11.997 07:08:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:11.997 07:08:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:11.997 07:08:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:11.997 07:08:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:11.997 07:08:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:11.997 07:08:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:11.997 07:08:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:11.997 07:08:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:11.997 07:08:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:11.997 07:08:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:11.997 07:08:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:11.997 07:08:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:11.997 Found net devices under 0000:86:00.0: cvl_0_0 00:12:11.997 07:08:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:11.997 07:08:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:11.997 07:08:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:11.997 07:08:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:11.997 07:08:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:11.997 07:08:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:11.997 07:08:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:11.997 07:08:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:11.997 07:08:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:11.997 Found net devices under 0000:86:00.1: cvl_0_1 00:12:11.997 07:08:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:11.997 07:08:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:11.997 07:08:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # is_hw=yes 00:12:11.997 07:08:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:11.997 07:08:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:11.997 07:08:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:11.997 07:08:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:11.997 07:08:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:11.997 07:08:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:11.997 07:08:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:11.997 07:08:15 
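The `Found net devices under 0000:86:00.x` lines above come from mapping each PCI address to its network interface through sysfs. A standalone sketch of that lookup (simplified from `gather_supported_nvmf_pci_devs`; it walks all PCI devices rather than filtering by vendor/device ID):

```shell
# Print "PCI address: interface" for every PCI device that exposes a
# net/ directory in sysfs, the same mapping the test log reports.
list_pci_net_devs() {
    local pci net_dev
    for pci in /sys/bus/pci/devices/*; do
        [ -d "$pci/net" ] || continue            # skip non-network devices
        for net_dev in "$pci/net"/*; do
            [ -e "$net_dev" ] || continue        # empty net/ directory
            echo "Found net devices under ${pci##*/}: ${net_dev##*/}"
        done
    done
}
list_pci_net_devs
```

Output depends on the machine's hardware; on a host with no PCI NICs the function simply prints nothing and exits 0.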
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:11.997 07:08:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:11.997 07:08:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:11.997 07:08:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:11.997 07:08:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:11.997 07:08:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:11.997 07:08:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:11.997 07:08:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:11.997 07:08:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:11.997 07:08:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:11.997 07:08:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:11.997 07:08:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:11.997 07:08:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:11.997 07:08:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:11.997 07:08:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:11.997 07:08:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:11.997 07:08:15 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:11.997 07:08:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:11.997 07:08:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:11.997 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:11.997 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.349 ms 00:12:11.997 00:12:11.997 --- 10.0.0.2 ping statistics --- 00:12:11.997 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:11.997 rtt min/avg/max/mdev = 0.349/0.349/0.349/0.000 ms 00:12:11.997 07:08:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:11.997 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:11.997 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.098 ms 00:12:11.997 00:12:11.997 --- 10.0.0.1 ping statistics --- 00:12:11.997 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:11.997 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:12:11.997 07:08:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:11.997 07:08:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # return 0 00:12:11.997 07:08:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:11.997 07:08:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:11.997 07:08:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:11.997 07:08:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:11.997 07:08:15 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:11.997 07:08:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:11.997 07:08:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:11.997 07:08:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:12:11.997 07:08:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:11.997 07:08:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:11.997 07:08:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:11.997 07:08:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=1130657 00:12:11.997 07:08:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:11.997 07:08:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 1130657 00:12:11.997 07:08:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@833 -- # '[' -z 1130657 ']' 00:12:11.997 07:08:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:11.997 07:08:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:11.997 07:08:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:11.997 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
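The "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." step above polls until the freshly launched `nvmf_tgt` is alive and its RPC socket exists. A simplified stand-in for that pattern (the real `waitforlisten` in `autotest_common.sh` also probes the socket via `rpc.py`; the retry count and 0.1 s interval here are assumptions):

```shell
# Poll until $pid is alive and $rpc_addr exists as a UNIX socket.
# Returns 0 on success, 1 if the process dies or the wait times out.
waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=${3:-100} i=0
    while (( i++ < max_retries )); do
        kill -0 "$pid" 2>/dev/null || return 1   # target process died
        [ -S "$rpc_addr" ] && return 0           # RPC socket is up
        sleep 0.1
    done
    return 1                                     # timed out
}
```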
00:12:11.997 07:08:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:11.997 07:08:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:11.997 [2024-11-20 07:08:15.922477] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 00:12:11.998 [2024-11-20 07:08:15.922520] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:11.998 [2024-11-20 07:08:15.988499] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:11.998 [2024-11-20 07:08:16.033204] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:11.998 [2024-11-20 07:08:16.033242] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:11.998 [2024-11-20 07:08:16.033250] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:11.998 [2024-11-20 07:08:16.033257] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:11.998 [2024-11-20 07:08:16.033262] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:11.998 [2024-11-20 07:08:16.035966] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:11.998 [2024-11-20 07:08:16.036003] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:11.998 [2024-11-20 07:08:16.036111] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:11.998 [2024-11-20 07:08:16.036111] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:11.998 07:08:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:11.998 07:08:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@866 -- # return 0 00:12:11.998 07:08:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:11.998 07:08:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:11.998 07:08:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:11.998 07:08:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:11.998 07:08:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:11.998 07:08:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode15892 00:12:11.998 [2024-11-20 07:08:16.343297] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:12:11.998 07:08:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:12:11.998 { 00:12:11.998 "nqn": "nqn.2016-06.io.spdk:cnode15892", 00:12:11.998 "tgt_name": "foobar", 00:12:11.998 "method": "nvmf_create_subsystem", 00:12:11.998 "req_id": 1 00:12:11.998 } 00:12:11.998 Got JSON-RPC error 
response 00:12:11.998 response: 00:12:11.998 { 00:12:11.998 "code": -32603, 00:12:11.998 "message": "Unable to find target foobar" 00:12:11.998 }' 00:12:11.998 07:08:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:12:11.998 { 00:12:11.998 "nqn": "nqn.2016-06.io.spdk:cnode15892", 00:12:11.998 "tgt_name": "foobar", 00:12:11.998 "method": "nvmf_create_subsystem", 00:12:11.998 "req_id": 1 00:12:11.998 } 00:12:11.998 Got JSON-RPC error response 00:12:11.998 response: 00:12:11.998 { 00:12:11.998 "code": -32603, 00:12:11.998 "message": "Unable to find target foobar" 00:12:11.998 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:12:11.998 07:08:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:12:11.998 07:08:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode21876 00:12:12.255 [2024-11-20 07:08:16.548017] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode21876: invalid serial number 'SPDKISFASTANDAWESOME' 00:12:12.255 07:08:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:12:12.255 { 00:12:12.255 "nqn": "nqn.2016-06.io.spdk:cnode21876", 00:12:12.255 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:12:12.255 "method": "nvmf_create_subsystem", 00:12:12.255 "req_id": 1 00:12:12.255 } 00:12:12.255 Got JSON-RPC error response 00:12:12.255 response: 00:12:12.255 { 00:12:12.255 "code": -32602, 00:12:12.255 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:12:12.255 }' 00:12:12.255 07:08:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:12:12.255 { 00:12:12.255 "nqn": "nqn.2016-06.io.spdk:cnode21876", 00:12:12.255 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:12:12.255 "method": "nvmf_create_subsystem", 
00:12:12.255 "req_id": 1 00:12:12.255 } 00:12:12.255 Got JSON-RPC error response 00:12:12.255 response: 00:12:12.255 { 00:12:12.255 "code": -32602, 00:12:12.255 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:12:12.255 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:12:12.255 07:08:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:12:12.255 07:08:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode2530 00:12:12.255 [2024-11-20 07:08:16.756723] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2530: invalid model number 'SPDK_Controller' 00:12:12.256 07:08:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:12:12.256 { 00:12:12.256 "nqn": "nqn.2016-06.io.spdk:cnode2530", 00:12:12.256 "model_number": "SPDK_Controller\u001f", 00:12:12.256 "method": "nvmf_create_subsystem", 00:12:12.256 "req_id": 1 00:12:12.256 } 00:12:12.256 Got JSON-RPC error response 00:12:12.256 response: 00:12:12.256 { 00:12:12.256 "code": -32602, 00:12:12.256 "message": "Invalid MN SPDK_Controller\u001f" 00:12:12.256 }' 00:12:12.256 07:08:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:12:12.256 { 00:12:12.256 "nqn": "nqn.2016-06.io.spdk:cnode2530", 00:12:12.256 "model_number": "SPDK_Controller\u001f", 00:12:12.256 "method": "nvmf_create_subsystem", 00:12:12.256 "req_id": 1 00:12:12.256 } 00:12:12.256 Got JSON-RPC error response 00:12:12.256 response: 00:12:12.256 { 00:12:12.256 "code": -32602, 00:12:12.256 "message": "Invalid MN SPDK_Controller\u001f" 00:12:12.256 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:12:12.256 07:08:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:12:12.256 07:08:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local 
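The three negative tests above all follow the same shape: capture the JSON-RPC error text from `rpc.py` into `$out`, then glob-match it with `[[ $out == *...* ]]` (the `*\I\n\v\a\l\i\d\ \S\N*` form in the trace is just xtrace's per-character escaping of that pattern). A minimal stand-in, using a canned response since no SPDK target is running here:

```shell
# Canned JSON-RPC error body standing in for real rpc.py output.
out='response: { "code": -32602, "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" }'

# Same check the test performs: does the error text contain "Invalid SN"?
if [[ $out == *"Invalid SN"* ]]; then
    echo "error matched"     # printed for this canned response
fi
```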
length=21 ll 00:12:12.256 07:08:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:12:12.256 07:08:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:12:12.256 07:08:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:12:12.256 07:08:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:12:12.256 07:08:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:12.256 07:08:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:12:12.256 07:08:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:12:12.256 07:08:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:12:12.256 07:08:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:12.256 07:08:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:12.256 07:08:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:12:12.256 07:08:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:12:12.256 07:08:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:12:12.256 07:08:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:12.256 07:08:16 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:12.514 07:08:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:12:12.514 07:08:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:12:12.514 07:08:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:12:12.514 07:08:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:12.514 07:08:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:12.514 07:08:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:12:12.514 07:08:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:12:12.514 07:08:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:12:12.514 07:08:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:12.514 07:08:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:12.514 07:08:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:12:12.514 07:08:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:12:12.514 07:08:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:12:12.514 07:08:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:12.514 07:08:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:12.514 07:08:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:12:12.514 07:08:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:12:12.514 07:08:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:12:12.514 07:08:16 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:12.514 07:08:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:12.514 07:08:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:12:12.514 07:08:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:12:12.514 07:08:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:12:12.514 07:08:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:12.514 07:08:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:12.514 07:08:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:12:12.514 07:08:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:12:12.514 07:08:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:12:12.514 07:08:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:12.514 07:08:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:12.514 07:08:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:12:12.514 07:08:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:12:12.514 07:08:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:12:12.514 07:08:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:12.514 07:08:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:12.514 07:08:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:12:12.514 07:08:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:12:12.514 07:08:16 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:12:12.514 07:08:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:12.514 07:08:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:12.514 07:08:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:12:12.514 07:08:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:12:12.514 07:08:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:12:12.514 07:08:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:12.514 07:08:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:12.514 07:08:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:12:12.514 07:08:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:12:12.514 07:08:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:12:12.514 07:08:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:12.514 07:08:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:12.514 07:08:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:12:12.514 07:08:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:12:12.514 07:08:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:12:12.514 07:08:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:12.514 07:08:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:12.514 07:08:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:12:12.514 07:08:16 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:12:12.514 07:08:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:12:12.514 07:08:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:12.514 07:08:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:12.514 07:08:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:12:12.514 07:08:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:12:12.514 07:08:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:12:12.514 07:08:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:12.514 07:08:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:12.514 07:08:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:12:12.514 07:08:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:12:12.514 07:08:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:12:12.514 07:08:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:12.514 07:08:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:12.514 07:08:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:12:12.514 07:08:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:12:12.514 07:08:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:12:12.514 07:08:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:12.514 07:08:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:12.514 07:08:16 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:12:12.514 07:08:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:12:12.514 07:08:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:12:12.514 07:08:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:12.514 07:08:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:12.514 07:08:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:12:12.514 07:08:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:12:12.514 07:08:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:12:12.514 07:08:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:12.514 07:08:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:12.514 07:08:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:12:12.514 07:08:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:12:12.514 07:08:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:12:12.514 07:08:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:12.514 07:08:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:12.514 07:08:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:12:12.514 07:08:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:12:12.514 07:08:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:12:12.514 07:08:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:12.514 07:08:16 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:12.514 07:08:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ , == \- ]] 00:12:12.514 07:08:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo ',u/i8@CRl[2KG8DL>~a'\''X' 00:12:12.514 07:08:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s ',u/i8@CRl[2KG8DL>~a'\''X' nqn.2016-06.io.spdk:cnode27408 00:12:12.773 [2024-11-20 07:08:17.105941] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode27408: invalid serial number ',u/i8@CRl[2KG8DL>~a'X' 00:12:12.773 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:12:12.773 { 00:12:12.773 "nqn": "nqn.2016-06.io.spdk:cnode27408", 00:12:12.773 "serial_number": ",u/i8@CRl[2KG8DL>~a'\''X", 00:12:12.773 "method": "nvmf_create_subsystem", 00:12:12.773 "req_id": 1 00:12:12.773 } 00:12:12.773 Got JSON-RPC error response 00:12:12.773 response: 00:12:12.773 { 00:12:12.773 "code": -32602, 00:12:12.773 "message": "Invalid SN ,u/i8@CRl[2KG8DL>~a'\''X" 00:12:12.773 }' 00:12:12.773 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:12:12.773 { 00:12:12.773 "nqn": "nqn.2016-06.io.spdk:cnode27408", 00:12:12.773 "serial_number": ",u/i8@CRl[2KG8DL>~a'X", 00:12:12.773 "method": "nvmf_create_subsystem", 00:12:12.773 "req_id": 1 00:12:12.773 } 00:12:12.773 Got JSON-RPC error response 00:12:12.773 response: 00:12:12.773 { 00:12:12.773 "code": -32602, 00:12:12.773 "message": "Invalid SN ,u/i8@CRl[2KG8DL>~a'X" 00:12:12.773 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:12:12.773 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:12:12.773 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:12:12.773 
07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:12:12.773 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:12:12.773 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:12:12.773 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:12:12.773 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:12.773 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:12:12.773 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:12:12.773 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:12:12.773 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:12.773 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:12.773 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:12:12.773 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:12:12.773 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:12:12.773 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:12.773 07:08:17 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:12.773 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:12:12.773 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:12:12.773 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:12:12.773 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:12.773 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:12.773 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:12:12.773 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:12:12.773 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:12:12.773 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:12.773 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:12.773 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:12:12.773 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:12:12.773 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:12:12.773 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:12.773 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:12.773 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:12:12.773 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:12:12.773 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:12:12.773 07:08:17 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:12.773 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:12.773 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:12:12.773 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:12:12.773 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:12:12.773 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:12.773 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:12.774 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:12:12.774 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:12:12.774 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:12:12.774 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:12.774 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:12.774 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:12:12.774 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:12:12.774 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:12:12.774 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:12.774 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:12.774 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:12:12.774 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:12:12.774 07:08:17 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:12:12.774 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:12.774 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:12.774 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:12:12.774 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:12:12.774 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:12:12.774 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:12.774 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:12.774 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:12:12.774 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:12:12.774 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:12:12.774 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:12.774 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:12.774 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:12:12.774 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:12:12.774 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:12:12.774 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:12.774 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:12.774 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:12:12.774 07:08:17 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:12:12.774 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:12:12.774 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:12.774 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:12.774 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:12:12.774 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:12:12.774 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:12:12.774 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:12.774 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:12.774 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:12:12.774 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:12:12.774 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:12:12.774 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:12.774 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:12.774 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:12:12.774 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:12:12.774 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:12:12.774 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:12.774 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:12.774 07:08:17 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:12:12.774 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:12:12.774 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:12:12.774 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:12.774 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:12.774 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:12:12.774 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:12:12.774 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:12:12.774 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:12.774 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:12.774 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:12:12.774 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:12:12.774 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:12:12.774 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:12.774 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:12.774 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:12:12.774 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:12:12.774 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:12:12.774 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:12.774 07:08:17 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:12.774 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:12:12.774 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:12:12.774 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:12:12.774 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:12.774 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:12.774 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:12:12.774 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:12:12.774 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:12:12.774 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:12.774 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:12.774 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:12:12.774 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:12:12.774 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:12:12.774 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:12.774 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:12.774 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:12:12.774 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:12:12.774 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:12:12.774 07:08:17 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:12.774 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:12.774 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:12:12.774 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:12:12.774 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:12:12.774 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:12.774 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:12.774 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:12:12.774 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:12:12.774 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:12:12.774 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:12.774 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:12.774 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:12:12.774 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:12:13.033 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:12:13.033 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:13.033 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:13.033 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:12:13.033 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:12:13.033 07:08:17 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:12:13.033 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:13.033 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:13.033 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:12:13.033 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:12:13.033 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:12:13.033 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:13.033 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:13.033 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:12:13.033 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:12:13.033 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:12:13.033 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:13.033 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:13.033 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:12:13.033 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:12:13.033 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:12:13.033 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:13.033 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:13.033 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:12:13.033 07:08:17 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:12:13.033 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:12:13.033 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:13.033 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:13.033 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:12:13.033 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:12:13.033 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:12:13.033 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:13.033 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:13.033 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:12:13.033 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:12:13.033 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:12:13.033 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:13.033 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:13.033 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:12:13.033 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:12:13.033 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:12:13.033 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:13.033 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:13.033 07:08:17 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:12:13.033 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:12:13.033 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:12:13.033 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:13.033 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:13.033 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:12:13.033 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:12:13.033 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:12:13.033 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:13.033 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:13.033 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:12:13.033 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:12:13.033 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:12:13.033 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:13.033 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:13.033 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:12:13.033 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:12:13.033 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:12:13.033 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:13.033 07:08:17 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:13.033 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:12:13.033 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:12:13.033 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:12:13.033 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:13.033 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:13.033 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ B == \- ]] 00:12:13.033 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'Bf8TJ(*;'\''__2$bL #waw /@5wB(YVw0|G@s"3_bK)' 00:12:13.033 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 'Bf8TJ(*;'\''__2$bL #waw /@5wB(YVw0|G@s"3_bK)' nqn.2016-06.io.spdk:cnode5459 00:12:13.033 [2024-11-20 07:08:17.579534] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode5459: invalid model number 'Bf8TJ(*;'__2$bL #waw /@5wB(YVw0|G@s"3_bK)' 00:12:13.291 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:12:13.291 { 00:12:13.291 "nqn": "nqn.2016-06.io.spdk:cnode5459", 00:12:13.291 "model_number": "Bf8TJ(*;'\''__2$bL #waw /@5wB(YVw0|G@s\"3_bK)", 00:12:13.291 "method": "nvmf_create_subsystem", 00:12:13.291 "req_id": 1 00:12:13.291 } 00:12:13.291 Got JSON-RPC error response 00:12:13.291 response: 00:12:13.291 { 00:12:13.291 "code": -32602, 00:12:13.291 "message": "Invalid MN Bf8TJ(*;'\''__2$bL #waw /@5wB(YVw0|G@s\"3_bK)" 00:12:13.291 }' 00:12:13.291 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:12:13.291 { 00:12:13.291 
"nqn": "nqn.2016-06.io.spdk:cnode5459", 00:12:13.291 "model_number": "Bf8TJ(*;'__2$bL #waw /@5wB(YVw0|G@s\"3_bK)", 00:12:13.291 "method": "nvmf_create_subsystem", 00:12:13.291 "req_id": 1 00:12:13.291 } 00:12:13.291 Got JSON-RPC error response 00:12:13.291 response: 00:12:13.291 { 00:12:13.291 "code": -32602, 00:12:13.291 "message": "Invalid MN Bf8TJ(*;'__2$bL #waw /@5wB(YVw0|G@s\"3_bK)" 00:12:13.291 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:12:13.291 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:12:13.291 [2024-11-20 07:08:17.784282] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:13.291 07:08:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:12:13.548 07:08:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:12:13.548 07:08:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:12:13.548 07:08:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:12:13.548 07:08:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:12:13.548 07:08:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:12:13.806 [2024-11-20 07:08:18.205672] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:12:13.806 07:08:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:12:13.806 { 00:12:13.806 "nqn": "nqn.2016-06.io.spdk:cnode", 00:12:13.806 "listen_address": { 00:12:13.806 "trtype": "tcp", 00:12:13.806 "traddr": "", 00:12:13.806 "trsvcid": 
"4421" 00:12:13.806 }, 00:12:13.806 "method": "nvmf_subsystem_remove_listener", 00:12:13.806 "req_id": 1 00:12:13.806 } 00:12:13.806 Got JSON-RPC error response 00:12:13.806 response: 00:12:13.806 { 00:12:13.806 "code": -32602, 00:12:13.806 "message": "Invalid parameters" 00:12:13.806 }' 00:12:13.806 07:08:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:12:13.806 { 00:12:13.806 "nqn": "nqn.2016-06.io.spdk:cnode", 00:12:13.806 "listen_address": { 00:12:13.806 "trtype": "tcp", 00:12:13.806 "traddr": "", 00:12:13.806 "trsvcid": "4421" 00:12:13.806 }, 00:12:13.806 "method": "nvmf_subsystem_remove_listener", 00:12:13.806 "req_id": 1 00:12:13.806 } 00:12:13.806 Got JSON-RPC error response 00:12:13.806 response: 00:12:13.806 { 00:12:13.806 "code": -32602, 00:12:13.806 "message": "Invalid parameters" 00:12:13.806 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:12:13.806 07:08:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode12490 -i 0 00:12:14.064 [2024-11-20 07:08:18.422368] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode12490: invalid cntlid range [0-65519] 00:12:14.064 07:08:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:12:14.064 { 00:12:14.064 "nqn": "nqn.2016-06.io.spdk:cnode12490", 00:12:14.064 "min_cntlid": 0, 00:12:14.064 "method": "nvmf_create_subsystem", 00:12:14.064 "req_id": 1 00:12:14.064 } 00:12:14.064 Got JSON-RPC error response 00:12:14.064 response: 00:12:14.064 { 00:12:14.064 "code": -32602, 00:12:14.064 "message": "Invalid cntlid range [0-65519]" 00:12:14.064 }' 00:12:14.064 07:08:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:12:14.064 { 00:12:14.064 "nqn": "nqn.2016-06.io.spdk:cnode12490", 00:12:14.064 "min_cntlid": 0, 00:12:14.064 "method": 
"nvmf_create_subsystem", 00:12:14.064 "req_id": 1 00:12:14.064 } 00:12:14.064 Got JSON-RPC error response 00:12:14.064 response: 00:12:14.064 { 00:12:14.064 "code": -32602, 00:12:14.064 "message": "Invalid cntlid range [0-65519]" 00:12:14.064 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:14.064 07:08:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode30376 -i 65520 00:12:14.322 [2024-11-20 07:08:18.627080] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode30376: invalid cntlid range [65520-65519] 00:12:14.322 07:08:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:12:14.322 { 00:12:14.322 "nqn": "nqn.2016-06.io.spdk:cnode30376", 00:12:14.322 "min_cntlid": 65520, 00:12:14.322 "method": "nvmf_create_subsystem", 00:12:14.322 "req_id": 1 00:12:14.322 } 00:12:14.322 Got JSON-RPC error response 00:12:14.322 response: 00:12:14.322 { 00:12:14.322 "code": -32602, 00:12:14.322 "message": "Invalid cntlid range [65520-65519]" 00:12:14.322 }' 00:12:14.323 07:08:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:12:14.323 { 00:12:14.323 "nqn": "nqn.2016-06.io.spdk:cnode30376", 00:12:14.323 "min_cntlid": 65520, 00:12:14.323 "method": "nvmf_create_subsystem", 00:12:14.323 "req_id": 1 00:12:14.323 } 00:12:14.323 Got JSON-RPC error response 00:12:14.323 response: 00:12:14.323 { 00:12:14.323 "code": -32602, 00:12:14.323 "message": "Invalid cntlid range [65520-65519]" 00:12:14.323 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:14.323 07:08:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode15671 -I 0 00:12:14.323 [2024-11-20 07:08:18.831755] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: 
Subsystem nqn.2016-06.io.spdk:cnode15671: invalid cntlid range [1-0] 00:12:14.323 07:08:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:12:14.323 { 00:12:14.323 "nqn": "nqn.2016-06.io.spdk:cnode15671", 00:12:14.323 "max_cntlid": 0, 00:12:14.323 "method": "nvmf_create_subsystem", 00:12:14.323 "req_id": 1 00:12:14.323 } 00:12:14.323 Got JSON-RPC error response 00:12:14.323 response: 00:12:14.323 { 00:12:14.323 "code": -32602, 00:12:14.323 "message": "Invalid cntlid range [1-0]" 00:12:14.323 }' 00:12:14.323 07:08:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:12:14.323 { 00:12:14.323 "nqn": "nqn.2016-06.io.spdk:cnode15671", 00:12:14.323 "max_cntlid": 0, 00:12:14.323 "method": "nvmf_create_subsystem", 00:12:14.323 "req_id": 1 00:12:14.323 } 00:12:14.323 Got JSON-RPC error response 00:12:14.323 response: 00:12:14.323 { 00:12:14.323 "code": -32602, 00:12:14.323 "message": "Invalid cntlid range [1-0]" 00:12:14.323 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:14.323 07:08:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10549 -I 65520 00:12:14.644 [2024-11-20 07:08:19.028422] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode10549: invalid cntlid range [1-65520] 00:12:14.644 07:08:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:12:14.644 { 00:12:14.644 "nqn": "nqn.2016-06.io.spdk:cnode10549", 00:12:14.644 "max_cntlid": 65520, 00:12:14.644 "method": "nvmf_create_subsystem", 00:12:14.644 "req_id": 1 00:12:14.644 } 00:12:14.644 Got JSON-RPC error response 00:12:14.644 response: 00:12:14.644 { 00:12:14.644 "code": -32602, 00:12:14.644 "message": "Invalid cntlid range [1-65520]" 00:12:14.644 }' 00:12:14.644 07:08:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@80 -- # [[ request: 00:12:14.644 { 00:12:14.644 "nqn": "nqn.2016-06.io.spdk:cnode10549", 00:12:14.644 "max_cntlid": 65520, 00:12:14.644 "method": "nvmf_create_subsystem", 00:12:14.644 "req_id": 1 00:12:14.644 } 00:12:14.644 Got JSON-RPC error response 00:12:14.644 response: 00:12:14.644 { 00:12:14.644 "code": -32602, 00:12:14.644 "message": "Invalid cntlid range [1-65520]" 00:12:14.644 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:14.644 07:08:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5202 -i 6 -I 5 00:12:15.041 [2024-11-20 07:08:19.233109] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode5202: invalid cntlid range [6-5] 00:12:15.041 07:08:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:12:15.041 { 00:12:15.041 "nqn": "nqn.2016-06.io.spdk:cnode5202", 00:12:15.041 "min_cntlid": 6, 00:12:15.041 "max_cntlid": 5, 00:12:15.041 "method": "nvmf_create_subsystem", 00:12:15.041 "req_id": 1 00:12:15.041 } 00:12:15.041 Got JSON-RPC error response 00:12:15.041 response: 00:12:15.041 { 00:12:15.041 "code": -32602, 00:12:15.041 "message": "Invalid cntlid range [6-5]" 00:12:15.041 }' 00:12:15.041 07:08:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:12:15.041 { 00:12:15.041 "nqn": "nqn.2016-06.io.spdk:cnode5202", 00:12:15.041 "min_cntlid": 6, 00:12:15.041 "max_cntlid": 5, 00:12:15.041 "method": "nvmf_create_subsystem", 00:12:15.041 "req_id": 1 00:12:15.041 } 00:12:15.041 Got JSON-RPC error response 00:12:15.041 response: 00:12:15.041 { 00:12:15.041 "code": -32602, 00:12:15.041 "message": "Invalid cntlid range [6-5]" 00:12:15.041 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:15.041 07:08:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:12:15.041 07:08:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:12:15.041 { 00:12:15.041 "name": "foobar", 00:12:15.041 "method": "nvmf_delete_target", 00:12:15.041 "req_id": 1 00:12:15.041 } 00:12:15.041 Got JSON-RPC error response 00:12:15.041 response: 00:12:15.041 { 00:12:15.041 "code": -32602, 00:12:15.041 "message": "The specified target doesn'\''t exist, cannot delete it." 00:12:15.041 }' 00:12:15.041 07:08:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:12:15.041 { 00:12:15.041 "name": "foobar", 00:12:15.041 "method": "nvmf_delete_target", 00:12:15.041 "req_id": 1 00:12:15.041 } 00:12:15.041 Got JSON-RPC error response 00:12:15.041 response: 00:12:15.041 { 00:12:15.041 "code": -32602, 00:12:15.041 "message": "The specified target doesn't exist, cannot delete it." 00:12:15.041 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:12:15.041 07:08:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:12:15.041 07:08:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:12:15.041 07:08:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:15.041 07:08:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync 00:12:15.041 07:08:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:15.041 07:08:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e 00:12:15.041 07:08:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:15.041 07:08:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:15.041 rmmod nvme_tcp 00:12:15.041 
rmmod nvme_fabrics 00:12:15.041 rmmod nvme_keyring 00:12:15.041 07:08:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:15.041 07:08:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e 00:12:15.041 07:08:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0 00:12:15.041 07:08:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@517 -- # '[' -n 1130657 ']' 00:12:15.041 07:08:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@518 -- # killprocess 1130657 00:12:15.041 07:08:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@952 -- # '[' -z 1130657 ']' 00:12:15.041 07:08:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@956 -- # kill -0 1130657 00:12:15.041 07:08:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@957 -- # uname 00:12:15.041 07:08:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:15.041 07:08:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1130657 00:12:15.041 07:08:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:15.041 07:08:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:15.041 07:08:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1130657' 00:12:15.041 killing process with pid 1130657 00:12:15.041 07:08:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@971 -- # kill 1130657 00:12:15.041 07:08:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@976 -- # wait 1130657 00:12:15.301 07:08:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:15.301 07:08:19 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:15.301 07:08:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:15.301 07:08:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # iptr 00:12:15.301 07:08:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-save 00:12:15.301 07:08:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:15.301 07:08:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-restore 00:12:15.301 07:08:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:15.301 07:08:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:15.301 07:08:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:15.301 07:08:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:15.301 07:08:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:17.205 07:08:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:17.205 00:12:17.205 real 0m12.021s 00:12:17.205 user 0m18.835s 00:12:17.205 sys 0m5.341s 00:12:17.205 07:08:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:17.205 07:08:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:17.205 ************************************ 00:12:17.205 END TEST nvmf_invalid 00:12:17.205 ************************************ 00:12:17.465 07:08:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh 
--transport=tcp 00:12:17.465 07:08:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:12:17.465 07:08:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:17.465 07:08:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:17.465 ************************************ 00:12:17.465 START TEST nvmf_connect_stress 00:12:17.465 ************************************ 00:12:17.465 07:08:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:12:17.465 * Looking for test storage... 00:12:17.465 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:17.465 07:08:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:17.465 07:08:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1691 -- # lcov --version 00:12:17.465 07:08:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:17.465 07:08:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:17.465 07:08:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:17.465 07:08:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:17.465 07:08:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:17.465 07:08:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:12:17.465 07:08:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:12:17.465 07:08:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 
00:12:17.465 07:08:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:12:17.465 07:08:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:12:17.465 07:08:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:12:17.465 07:08:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:12:17.465 07:08:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:17.465 07:08:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:12:17.465 07:08:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:12:17.465 07:08:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:17.465 07:08:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:17.465 07:08:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:12:17.465 07:08:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:12:17.465 07:08:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:17.465 07:08:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:12:17.465 07:08:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:12:17.465 07:08:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:12:17.465 07:08:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:12:17.465 07:08:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:17.465 07:08:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:12:17.465 07:08:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:12:17.465 07:08:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:17.466 07:08:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:17.466 07:08:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:12:17.466 07:08:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:17.466 07:08:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:17.466 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:17.466 --rc genhtml_branch_coverage=1 00:12:17.466 --rc genhtml_function_coverage=1 00:12:17.466 --rc genhtml_legend=1 00:12:17.466 --rc 
geninfo_all_blocks=1 00:12:17.466 --rc geninfo_unexecuted_blocks=1 00:12:17.466 00:12:17.466 ' 00:12:17.466 07:08:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:17.466 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:17.466 --rc genhtml_branch_coverage=1 00:12:17.466 --rc genhtml_function_coverage=1 00:12:17.466 --rc genhtml_legend=1 00:12:17.466 --rc geninfo_all_blocks=1 00:12:17.466 --rc geninfo_unexecuted_blocks=1 00:12:17.466 00:12:17.466 ' 00:12:17.466 07:08:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:17.466 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:17.466 --rc genhtml_branch_coverage=1 00:12:17.466 --rc genhtml_function_coverage=1 00:12:17.466 --rc genhtml_legend=1 00:12:17.466 --rc geninfo_all_blocks=1 00:12:17.466 --rc geninfo_unexecuted_blocks=1 00:12:17.466 00:12:17.466 ' 00:12:17.466 07:08:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:17.466 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:17.466 --rc genhtml_branch_coverage=1 00:12:17.466 --rc genhtml_function_coverage=1 00:12:17.466 --rc genhtml_legend=1 00:12:17.466 --rc geninfo_all_blocks=1 00:12:17.466 --rc geninfo_unexecuted_blocks=1 00:12:17.466 00:12:17.466 ' 00:12:17.466 07:08:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:17.466 07:08:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:12:17.466 07:08:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:17.466 07:08:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:17.466 07:08:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:12:17.466 07:08:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:17.466 07:08:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:17.466 07:08:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:17.466 07:08:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:17.466 07:08:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:17.466 07:08:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:17.466 07:08:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:17.466 07:08:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:17.466 07:08:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:12:17.466 07:08:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:17.466 07:08:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:17.466 07:08:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:17.466 07:08:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:17.466 07:08:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:17.466 07:08:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:12:17.466 
07:08:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:17.466 07:08:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:17.466 07:08:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:17.466 07:08:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:17.466 07:08:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:17.466 07:08:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:17.466 07:08:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:12:17.466 07:08:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:17.466 07:08:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:12:17.466 07:08:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:17.466 07:08:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:17.466 07:08:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:17.466 07:08:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:12:17.466 07:08:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:17.466 07:08:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:17.466 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:17.466 07:08:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:17.466 07:08:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:17.466 07:08:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:17.726 07:08:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:12:17.726 07:08:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:17.726 07:08:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:17.726 07:08:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:17.726 07:08:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:17.726 07:08:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:17.726 07:08:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:17.726 07:08:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:17.726 07:08:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:17.726 07:08:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:17.726 07:08:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 
-- # gather_supported_nvmf_pci_devs 00:12:17.726 07:08:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:12:17.726 07:08:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:24.295 07:08:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:24.295 07:08:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:12:24.295 07:08:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:24.295 07:08:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:24.295 07:08:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:24.295 07:08:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:24.295 07:08:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:24.295 07:08:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:12:24.295 07:08:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:24.295 07:08:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:12:24.295 07:08:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:12:24.295 07:08:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:12:24.295 07:08:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:12:24.295 07:08:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:12:24.295 07:08:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:12:24.295 07:08:27 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:24.295 07:08:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:24.295 07:08:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:24.295 07:08:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:24.295 07:08:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:24.295 07:08:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:24.295 07:08:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:24.295 07:08:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:24.295 07:08:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:24.295 07:08:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:24.295 07:08:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:24.295 07:08:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:24.295 07:08:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:24.295 07:08:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:24.295 07:08:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 
]] 00:12:24.295 07:08:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:24.295 07:08:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:24.295 07:08:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:24.295 07:08:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:24.295 07:08:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:24.295 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:24.296 07:08:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:24.296 07:08:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:24.296 07:08:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:24.296 07:08:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:24.296 07:08:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:24.296 07:08:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:24.296 07:08:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:24.296 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:24.296 07:08:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:24.296 07:08:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:24.296 07:08:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:24.296 07:08:27 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:24.296 07:08:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:24.296 07:08:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:24.296 07:08:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:24.296 07:08:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:24.296 07:08:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:24.296 07:08:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:24.296 07:08:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:24.296 07:08:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:24.296 07:08:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:24.296 07:08:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:24.296 07:08:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:24.296 07:08:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:24.296 Found net devices under 0000:86:00.0: cvl_0_0 00:12:24.296 07:08:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:24.296 07:08:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:24.296 07:08:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:24.296 07:08:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:24.296 07:08:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:24.296 07:08:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:24.296 07:08:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:24.296 07:08:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:24.296 07:08:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:24.296 Found net devices under 0000:86:00.1: cvl_0_1 00:12:24.296 07:08:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:24.296 07:08:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:24.296 07:08:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:12:24.296 07:08:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:24.296 07:08:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:24.296 07:08:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:24.296 07:08:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:24.296 07:08:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:24.296 07:08:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:24.296 07:08:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:24.296 07:08:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:24.296 07:08:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:24.296 07:08:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:24.296 07:08:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:24.296 07:08:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:24.296 07:08:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:24.296 07:08:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:24.296 07:08:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:24.296 07:08:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:24.296 07:08:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:24.296 07:08:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:24.296 07:08:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:24.296 07:08:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:24.296 07:08:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:24.296 07:08:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk 
ip link set cvl_0_0 up 00:12:24.296 07:08:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:24.296 07:08:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:24.296 07:08:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:24.296 07:08:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:24.296 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:24.296 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.421 ms 00:12:24.296 00:12:24.296 --- 10.0.0.2 ping statistics --- 00:12:24.296 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:24.296 rtt min/avg/max/mdev = 0.421/0.421/0.421/0.000 ms 00:12:24.296 07:08:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:24.296 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:24.296 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.183 ms 00:12:24.296 00:12:24.296 --- 10.0.0.1 ping statistics --- 00:12:24.296 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:24.296 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:12:24.296 07:08:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:24.296 07:08:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # return 0 00:12:24.296 07:08:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:24.296 07:08:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:24.296 07:08:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:24.296 07:08:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:24.296 07:08:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:24.296 07:08:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:24.296 07:08:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:24.296 07:08:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:12:24.296 07:08:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:24.296 07:08:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:24.296 07:08:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:24.296 07:08:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=1134871 00:12:24.296 07:08:28 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 1134871 00:12:24.296 07:08:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:12:24.296 07:08:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@833 -- # '[' -z 1134871 ']' 00:12:24.296 07:08:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:24.296 07:08:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:24.296 07:08:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:24.296 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:24.296 07:08:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:24.296 07:08:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:24.296 [2024-11-20 07:08:28.079772] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 00:12:24.296 [2024-11-20 07:08:28.079820] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:24.296 [2024-11-20 07:08:28.146335] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:24.296 [2024-11-20 07:08:28.186367] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:12:24.296 [2024-11-20 07:08:28.186404] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:24.296 [2024-11-20 07:08:28.186412] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:24.296 [2024-11-20 07:08:28.186418] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:24.296 [2024-11-20 07:08:28.186423] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:24.296 [2024-11-20 07:08:28.187903] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:24.297 [2024-11-20 07:08:28.188012] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:24.297 [2024-11-20 07:08:28.188013] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:24.297 07:08:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:24.297 07:08:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@866 -- # return 0 00:12:24.297 07:08:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:24.297 07:08:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:24.297 07:08:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:24.297 07:08:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:24.297 07:08:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:24.297 07:08:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.297 07:08:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@10 -- # set +x 00:12:24.297 [2024-11-20 07:08:28.333085] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:24.297 07:08:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.297 07:08:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:24.297 07:08:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.297 07:08:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:24.297 07:08:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.297 07:08:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:24.297 07:08:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.297 07:08:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:24.297 [2024-11-20 07:08:28.353303] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:24.297 07:08:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.297 07:08:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:12:24.297 07:08:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.297 07:08:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:24.297 NULL1 00:12:24.297 07:08:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.297 07:08:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=1135073 00:12:24.297 07:08:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:12:24.297 07:08:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:12:24.297 07:08:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:12:24.297 07:08:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:12:24.297 07:08:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:24.297 07:08:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:24.297 07:08:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:24.297 07:08:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:24.297 07:08:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:24.297 07:08:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:24.297 07:08:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:24.297 07:08:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:24.297 07:08:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # 
for i in $(seq 1 20) 00:12:24.297 07:08:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:24.297 07:08:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:24.297 07:08:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:24.297 07:08:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:24.297 07:08:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:24.297 07:08:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:24.297 07:08:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:24.297 07:08:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:24.297 07:08:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:24.297 07:08:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:24.297 07:08:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:24.297 07:08:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:24.297 07:08:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:24.297 07:08:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:24.297 07:08:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:24.297 07:08:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:24.297 07:08:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@28 -- # cat 00:12:24.297 07:08:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:24.297 07:08:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:24.297 07:08:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:24.297 07:08:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:24.297 07:08:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:24.297 07:08:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:24.297 07:08:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:24.297 07:08:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:24.297 07:08:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:24.297 07:08:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:24.297 07:08:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:24.297 07:08:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:24.297 07:08:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:24.297 07:08:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:24.297 07:08:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1135073 00:12:24.297 07:08:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:24.297 07:08:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.297 07:08:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:24.297 07:08:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.297 07:08:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1135073 00:12:24.297 07:08:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:24.297 07:08:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.297 07:08:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:24.864 07:08:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.864 07:08:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1135073 00:12:24.864 07:08:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:24.864 07:08:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.864 07:08:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:25.122 07:08:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.122 07:08:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1135073 00:12:25.122 07:08:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:25.122 07:08:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.122 07:08:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:25.380 07:08:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.380 07:08:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1135073 00:12:25.380 07:08:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:25.380 07:08:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.380 07:08:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:25.638 07:08:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.638 07:08:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1135073 00:12:25.638 07:08:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:25.638 07:08:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.638 07:08:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:25.897 07:08:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.897 07:08:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1135073 00:12:25.897 07:08:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:25.897 07:08:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.897 07:08:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:26.463 07:08:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.463 07:08:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1135073 00:12:26.463 07:08:30 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:26.463 07:08:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.463 07:08:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:26.722 07:08:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.722 07:08:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1135073 00:12:26.722 07:08:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:26.722 07:08:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.722 07:08:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:26.980 07:08:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.980 07:08:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1135073 00:12:26.980 07:08:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:26.980 07:08:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.980 07:08:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:27.239 07:08:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.239 07:08:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1135073 00:12:27.239 07:08:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:27.239 07:08:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.239 
07:08:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:27.497 07:08:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.497 07:08:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1135073 00:12:27.497 07:08:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:27.497 07:08:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.497 07:08:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:28.064 07:08:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.064 07:08:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1135073 00:12:28.064 07:08:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:28.064 07:08:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.064 07:08:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:28.323 07:08:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.323 07:08:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1135073 00:12:28.323 07:08:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:28.323 07:08:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.323 07:08:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:28.581 07:08:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.581 
07:08:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1135073
00:12:28.581 07:08:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:12:28.581 07:08:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:28.581 07:08:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:12:28.840 07:08:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:28.840 07:08:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1135073
00:12:28.840 07:08:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:12:28.840 07:08:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:28.840 07:08:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:12:29.407 07:08:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:29.407 07:08:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1135073
00:12:29.407 07:08:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:12:29.407 07:08:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:29.407 07:08:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:12:29.665 07:08:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:29.665 07:08:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1135073
00:12:29.665 07:08:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:12:29.665 07:08:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:29.665 07:08:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:12:29.924 07:08:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:29.924 07:08:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1135073
00:12:29.924 07:08:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:12:29.924 07:08:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:29.924 07:08:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:12:30.182 07:08:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:30.182 07:08:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1135073
00:12:30.182 07:08:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:12:30.182 07:08:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:30.183 07:08:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:12:30.441 07:08:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:30.441 07:08:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1135073
00:12:30.441 07:08:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:12:30.441 07:08:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:30.441 07:08:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:12:31.007 07:08:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:31.007 07:08:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1135073
00:12:31.007 07:08:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:12:31.007 07:08:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:31.007 07:08:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:12:31.266 07:08:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:31.266 07:08:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1135073
00:12:31.266 07:08:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:12:31.266 07:08:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:31.266 07:08:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:12:31.524 07:08:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:31.524 07:08:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1135073
00:12:31.524 07:08:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:12:31.524 07:08:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:31.524 07:08:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:12:31.782 07:08:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:31.782 07:08:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1135073
00:12:31.782 07:08:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:12:31.782 07:08:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:31.782 07:08:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:12:32.041 07:08:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:32.041 07:08:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1135073
00:12:32.041 07:08:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:12:32.041 07:08:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:32.041 07:08:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:12:32.608 07:08:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:32.608 07:08:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1135073
00:12:32.608 07:08:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:12:32.608 07:08:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:32.608 07:08:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:12:32.866 07:08:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:32.866 07:08:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1135073
00:12:32.866 07:08:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:12:32.866 07:08:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:32.866 07:08:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:12:33.125 07:08:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:33.125 07:08:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1135073
00:12:33.125 07:08:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:12:33.125 07:08:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:33.125 07:08:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:12:33.383 07:08:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:33.383 07:08:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1135073
00:12:33.383 07:08:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:12:33.383 07:08:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:33.383 07:08:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:12:33.950 07:08:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:33.950 07:08:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1135073
00:12:33.950 07:08:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:12:33.950 07:08:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:33.950 07:08:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:12:33.950 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:12:34.208 07:08:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:34.208 07:08:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1135073
00:12:34.208 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (1135073) - No such process
00:12:34.208 07:08:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 1135073
00:12:34.208 07:08:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt
00:12:34.208 07:08:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT
00:12:34.208 07:08:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini
00:12:34.208 07:08:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup
00:12:34.209 07:08:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync
00:12:34.209 07:08:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:12:34.209 07:08:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e
00:12:34.209 07:08:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20}
00:12:34.209 07:08:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:12:34.209 rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:12:34.209 07:08:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:12:34.209 07:08:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e
00:12:34.209 07:08:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0
00:12:34.209 07:08:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 1134871 ']'
00:12:34.209 07:08:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 1134871
00:12:34.209 07:08:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@952 -- # '[' -z 1134871 ']'
00:12:34.209 07:08:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # kill -0 1134871
00:12:34.209 07:08:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@957 -- # uname
00:12:34.209 07:08:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:12:34.209 07:08:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1134871
00:12:34.209 07:08:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # process_name=reactor_1
00:12:34.209 07:08:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']'
00:12:34.209 07:08:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1134871'
killing process with pid 1134871
00:12:34.209 07:08:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@971 -- # kill 1134871
00:12:34.209 07:08:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@976 -- # wait 1134871
00:12:34.468 07:08:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:12:34.468 07:08:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:12:34.468 07:08:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:12:34.468 07:08:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr
00:12:34.468 07:08:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-save
00:12:34.468 07:08:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:12:34.468 07:08:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-restore
00:12:34.468 07:08:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:12:34.468 07:08:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # remove_spdk_ns
00:12:34.468 07:08:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:12:34.468 07:08:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:12:34.468 07:08:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:12:36.373 07:08:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:12:36.373
00:12:36.373 real 0m19.086s
00:12:36.373 user 0m39.332s
00:12:36.373 sys 0m8.523s
00:12:36.373 07:08:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1128 -- # xtrace_disable
00:12:36.373 07:08:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:12:36.373 ************************************
00:12:36.373 END TEST nvmf_connect_stress
************************************
00:12:36.632 07:08:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp
00:12:36.632 07:08:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']'
00:12:36.632 07:08:40 nvmf_tcp.nvmf_target_extra --
common/autotest_common.sh@1109 -- # xtrace_disable
00:12:36.632 07:08:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:12:36.632 ************************************
00:12:36.632 START TEST nvmf_fused_ordering
************************************
00:12:36.632 07:08:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp
00:12:36.632 * Looking for test storage...
* Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:12:36.632 07:08:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:12:36.632 07:08:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1691 -- # lcov --version
00:12:36.632 07:08:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:12:36.632 07:08:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:12:36.632 07:08:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:12:36.632 07:08:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l
00:12:36.632 07:08:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l
00:12:36.632 07:08:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-:
00:12:36.632 07:08:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1
00:12:36.632 07:08:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-:
00:12:36.632 07:08:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2
00:12:36.632 07:08:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<'
00:12:36.632 07:08:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2
00:12:36.632 07:08:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1
00:12:36.632 07:08:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:12:36.632 07:08:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in
00:12:36.632 07:08:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1
00:12:36.632 07:08:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 ))
00:12:36.632 07:08:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:12:36.633 07:08:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1
00:12:36.633 07:08:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1
00:12:36.633 07:08:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:12:36.633 07:08:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1
00:12:36.633 07:08:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1
00:12:36.633 07:08:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2
00:12:36.633 07:08:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2
00:12:36.633 07:08:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:12:36.633 07:08:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2
00:12:36.633 07:08:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2
00:12:36.633 07:08:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:12:36.633 07:08:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:12:36.633 07:08:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0
00:12:36.633 07:08:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:12:36.633 07:08:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS=
00:12:36.633 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:36.633 --rc genhtml_branch_coverage=1
00:12:36.633 --rc genhtml_function_coverage=1
00:12:36.633 --rc genhtml_legend=1
00:12:36.633 --rc geninfo_all_blocks=1
00:12:36.633 --rc geninfo_unexecuted_blocks=1
00:12:36.633
00:12:36.633 '
00:12:36.633 07:08:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1704 -- # LCOV_OPTS='
00:12:36.633 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:36.633 --rc genhtml_branch_coverage=1
00:12:36.633 --rc genhtml_function_coverage=1
00:12:36.633 --rc genhtml_legend=1
00:12:36.633 --rc geninfo_all_blocks=1
00:12:36.633 --rc geninfo_unexecuted_blocks=1
00:12:36.633
00:12:36.633 '
00:12:36.633 07:08:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov
00:12:36.633 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:36.633 --rc genhtml_branch_coverage=1
00:12:36.633 --rc genhtml_function_coverage=1
00:12:36.633 --rc genhtml_legend=1
00:12:36.633 --rc geninfo_all_blocks=1
00:12:36.633 --rc geninfo_unexecuted_blocks=1
00:12:36.633
00:12:36.633 '
00:12:36.633 07:08:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1705 -- # LCOV='lcov
00:12:36.633 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:36.633 --rc genhtml_branch_coverage=1
00:12:36.633 --rc genhtml_function_coverage=1
00:12:36.633 --rc genhtml_legend=1
00:12:36.633 --rc geninfo_all_blocks=1
00:12:36.633 --rc geninfo_unexecuted_blocks=1
00:12:36.633
00:12:36.633 '
00:12:36.633 07:08:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:12:36.633 07:08:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s
00:12:36.633 07:08:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:12:36.633 07:08:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:12:36.633 07:08:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:12:36.633 07:08:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:12:36.633 07:08:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:12:36.633 07:08:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:12:36.633 07:08:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:12:36.633 07:08:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:12:36.633 07:08:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:12:36.633 07:08:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:12:36.633 07:08:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:12:36.633 07:08:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- #
NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:12:36.633 07:08:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:36.633 07:08:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:36.633 07:08:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:36.633 07:08:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:36.633 07:08:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:36.633 07:08:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:12:36.633 07:08:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:36.633 07:08:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:36.633 07:08:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:36.633 07:08:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:36.633 07:08:41 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:36.633 07:08:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:36.633 07:08:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:12:36.633 07:08:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:36.633 07:08:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:12:36.633 07:08:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:36.633 07:08:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:36.633 07:08:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:36.633 07:08:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:36.633 07:08:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:36.633 07:08:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:36.633 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:36.633 07:08:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:36.633 07:08:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:36.633 07:08:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:36.633 07:08:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 
00:12:36.633 07:08:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:12:36.633 07:08:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:12:36.633 07:08:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs
00:12:36.633 07:08:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no
00:12:36.633 07:08:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns
00:12:36.633 07:08:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:12:36.633 07:08:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:12:36.633 07:08:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:12:36.633 07:08:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:12:36.633 07:08:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:12:36.633 07:08:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable
00:12:36.633 07:08:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:12:43.204 07:08:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:12:43.204 07:08:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=()
00:12:43.204 07:08:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs
00:12:43.204 07:08:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=()
00:12:43.204 07:08:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:12:43.204 07:08:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=()
00:12:43.204 07:08:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers
00:12:43.204 07:08:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=()
00:12:43.204 07:08:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs
00:12:43.204 07:08:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=()
00:12:43.204 07:08:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810
00:12:43.204 07:08:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=()
00:12:43.204 07:08:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722
00:12:43.204 07:08:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=()
00:12:43.204 07:08:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx
00:12:43.204 07:08:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:12:43.204 07:08:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:12:43.204 07:08:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:12:43.204 07:08:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:12:43.204 07:08:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:12:43.204 07:08:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:12:43.204 07:08:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:12:43.204 07:08:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:12:43.204 07:08:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:12:43.204 07:08:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:12:43.204 07:08:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:12:43.204 07:08:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:12:43.204 07:08:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:12:43.204 07:08:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:12:43.204 07:08:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:12:43.204 07:08:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:12:43.204 07:08:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:12:43.204 07:08:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:12:43.204 07:08:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:12:43.204 07:08:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)'
Found 0000:86:00.0 (0x8086 - 0x159b)
00:12:43.204 07:08:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:12:43.204 07:08:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:12:43.204 07:08:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:12:43.204 07:08:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:12:43.204 07:08:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:12:43.204 07:08:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:12:43.204 07:08:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)'
Found 0000:86:00.1 (0x8086 - 0x159b)
00:12:43.204 07:08:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:12:43.204 07:08:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:12:43.204 07:08:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:12:43.204 07:08:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:12:43.204 07:08:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:12:43.204 07:08:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:12:43.204 07:08:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:12:43.204 07:08:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:12:43.204 07:08:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:12:43.204 07:08:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:12:43.204 07:08:46
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:43.204 07:08:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:43.204 07:08:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:43.204 07:08:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:43.204 07:08:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:43.204 07:08:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:43.204 Found net devices under 0000:86:00.0: cvl_0_0 00:12:43.204 07:08:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:43.204 07:08:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:43.204 07:08:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:43.204 07:08:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:43.204 07:08:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:43.204 07:08:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:43.204 07:08:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:43.204 07:08:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:43.204 07:08:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:43.204 Found net devices under 0000:86:00.1: cvl_0_1 
00:12:43.204 07:08:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:43.204 07:08:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:43.204 07:08:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # is_hw=yes 00:12:43.204 07:08:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:43.204 07:08:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:43.204 07:08:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:43.204 07:08:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:43.204 07:08:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:43.204 07:08:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:43.204 07:08:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:43.204 07:08:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:43.204 07:08:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:43.204 07:08:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:43.204 07:08:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:43.204 07:08:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:43.204 07:08:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:43.204 07:08:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:43.204 07:08:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:43.204 07:08:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:43.205 07:08:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:43.205 07:08:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:43.205 07:08:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:43.205 07:08:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:43.205 07:08:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:43.205 07:08:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:43.205 07:08:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:43.205 07:08:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:43.205 07:08:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:43.205 07:08:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:43.205 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:43.205 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.432 ms 00:12:43.205 00:12:43.205 --- 10.0.0.2 ping statistics --- 00:12:43.205 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:43.205 rtt min/avg/max/mdev = 0.432/0.432/0.432/0.000 ms 00:12:43.205 07:08:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:43.205 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:43.205 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.191 ms 00:12:43.205 00:12:43.205 --- 10.0.0.1 ping statistics --- 00:12:43.205 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:43.205 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:12:43.205 07:08:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:43.205 07:08:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # return 0 00:12:43.205 07:08:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:43.205 07:08:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:43.205 07:08:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:43.205 07:08:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:43.205 07:08:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:43.205 07:08:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:43.205 07:08:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:43.205 07:08:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:12:43.205 07:08:47 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:43.205 07:08:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:43.205 07:08:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:43.205 07:08:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=1140228 00:12:43.205 07:08:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:12:43.205 07:08:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 1140228 00:12:43.205 07:08:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@833 -- # '[' -z 1140228 ']' 00:12:43.205 07:08:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:43.205 07:08:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:43.205 07:08:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:43.205 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:43.205 07:08:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:43.205 07:08:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:43.205 [2024-11-20 07:08:47.228128] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 
00:12:43.205 [2024-11-20 07:08:47.228173] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:43.205 [2024-11-20 07:08:47.307602] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:43.205 [2024-11-20 07:08:47.348379] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:43.205 [2024-11-20 07:08:47.348416] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:43.205 [2024-11-20 07:08:47.348424] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:43.205 [2024-11-20 07:08:47.348430] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:43.205 [2024-11-20 07:08:47.348435] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
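The `nvmf/common.sh` steps traced above (interface flush, namespace creation, addressing, firewall rule, and ping verification) can be sketched as a standalone script. This is a hypothetical reconstruction, not the actual `nvmf_tcp_init` function: the interface names `cvl_0_0`/`cvl_0_1`, the `10.0.0.0/24` addresses, and the TCP port 4420 rule are copied from the log, while the `RUN`/`echo` indirection is added here so the sketch can be read or dry-run without root privileges.

```shell
# Dry-run sketch of the netns topology built by nvmf/common.sh above.
# RUN is an assumption of this sketch: unset, commands are echoed, not executed.
run=${RUN:-echo}

TARGET_IF=cvl_0_0        # moved into the namespace, carries the target IP
INITIATOR_IF=cvl_0_1     # stays in the default namespace
NS=cvl_0_0_ns_spdk

$run ip -4 addr flush "$TARGET_IF"
$run ip -4 addr flush "$INITIATOR_IF"
$run ip netns add "$NS"
$run ip link set "$TARGET_IF" netns "$NS"
$run ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
$run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
$run ip link set "$INITIATOR_IF" up
$run ip netns exec "$NS" ip link set "$TARGET_IF" up
$run ip netns exec "$NS" ip link set lo up
# Accept NVMe/TCP traffic (port 4420) on the initiator-side interface.
$run iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
# Verify connectivity in both directions, as the log does.
$run ping -c 1 10.0.0.2
$run ip netns exec "$NS" ping -c 1 10.0.0.1
```

Putting the target interface in its own namespace is what lets a single host act as both NVMe/TCP target and initiator over real hardware ports, which is why the log's `nvmf_tgt` is launched under `ip netns exec cvl_0_0_ns_spdk`.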
00:12:43.205 [2024-11-20 07:08:47.349002] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:43.205 07:08:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:43.205 07:08:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@866 -- # return 0 00:12:43.205 07:08:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:43.205 07:08:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:43.205 07:08:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:43.205 07:08:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:43.205 07:08:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:43.205 07:08:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.205 07:08:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:43.205 [2024-11-20 07:08:47.485745] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:43.205 07:08:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.205 07:08:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:43.205 07:08:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.205 07:08:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:43.205 07:08:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.205 07:08:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:43.205 07:08:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.205 07:08:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:43.205 [2024-11-20 07:08:47.505910] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:43.205 07:08:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.205 07:08:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:12:43.205 07:08:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.205 07:08:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:43.205 NULL1 00:12:43.205 07:08:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.205 07:08:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:12:43.205 07:08:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.205 07:08:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:43.205 07:08:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.205 07:08:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:12:43.205 07:08:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.205 07:08:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:43.205 07:08:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.205 07:08:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:12:43.205 [2024-11-20 07:08:47.565568] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 00:12:43.205 [2024-11-20 07:08:47.565618] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1140251 ] 00:12:43.463 Attached to nqn.2016-06.io.spdk:cnode1 00:12:43.463 Namespace ID: 1 size: 1GB 00:12:43.463 fused_ordering(0) 00:12:43.463 fused_ordering(1) 00:12:43.463 fused_ordering(2) 00:12:43.463 fused_ordering(3) 00:12:43.463 fused_ordering(4) 00:12:43.463 fused_ordering(5) 00:12:43.463 fused_ordering(6) 00:12:43.463 fused_ordering(7) 00:12:43.463 fused_ordering(8) 00:12:43.463 fused_ordering(9) 00:12:43.463 fused_ordering(10) 00:12:43.463 fused_ordering(11) 00:12:43.463 fused_ordering(12) 00:12:43.463 fused_ordering(13) 00:12:43.463 fused_ordering(14) 00:12:43.463 fused_ordering(15) 00:12:43.463 fused_ordering(16) 00:12:43.463 fused_ordering(17) 00:12:43.463 fused_ordering(18) 00:12:43.463 fused_ordering(19) 00:12:43.463 fused_ordering(20) 00:12:43.463 fused_ordering(21) 00:12:43.463 fused_ordering(22) 00:12:43.463 fused_ordering(23) 00:12:43.463 fused_ordering(24) 00:12:43.463 fused_ordering(25) 00:12:43.463 fused_ordering(26) 00:12:43.463 fused_ordering(27) 00:12:43.463 
fused_ordering(28) 00:12:43.463 fused_ordering(29) 00:12:43.463 fused_ordering(30) 00:12:43.463 fused_ordering(31) 00:12:43.463 fused_ordering(32) 00:12:43.463 fused_ordering(33) 00:12:43.463 fused_ordering(34) 00:12:43.463 fused_ordering(35) 00:12:43.463 fused_ordering(36) 00:12:43.463 fused_ordering(37) 00:12:43.463 fused_ordering(38) 00:12:43.463 fused_ordering(39) 00:12:43.464 fused_ordering(40) 00:12:43.464 fused_ordering(41) 00:12:43.464 fused_ordering(42) 00:12:43.464 fused_ordering(43) 00:12:43.464 fused_ordering(44) 00:12:43.464 fused_ordering(45) 00:12:43.464 fused_ordering(46) 00:12:43.464 fused_ordering(47) 00:12:43.464 fused_ordering(48) 00:12:43.464 fused_ordering(49) 00:12:43.464 fused_ordering(50) 00:12:43.464 fused_ordering(51) 00:12:43.464 fused_ordering(52) 00:12:43.464 fused_ordering(53) 00:12:43.464 fused_ordering(54) 00:12:43.464 fused_ordering(55) 00:12:43.464 fused_ordering(56) 00:12:43.464 fused_ordering(57) 00:12:43.464 fused_ordering(58) 00:12:43.464 fused_ordering(59) 00:12:43.464 fused_ordering(60) 00:12:43.464 fused_ordering(61) 00:12:43.464 fused_ordering(62) 00:12:43.464 fused_ordering(63) 00:12:43.464 fused_ordering(64) 00:12:43.464 fused_ordering(65) 00:12:43.464 fused_ordering(66) 00:12:43.464 fused_ordering(67) 00:12:43.464 fused_ordering(68) 00:12:43.464 fused_ordering(69) 00:12:43.464 fused_ordering(70) 00:12:43.464 fused_ordering(71) 00:12:43.464 fused_ordering(72) 00:12:43.464 fused_ordering(73) 00:12:43.464 fused_ordering(74) 00:12:43.464 fused_ordering(75) 00:12:43.464 fused_ordering(76) 00:12:43.464 fused_ordering(77) 00:12:43.464 fused_ordering(78) 00:12:43.464 fused_ordering(79) 00:12:43.464 fused_ordering(80) 00:12:43.464 fused_ordering(81) 00:12:43.464 fused_ordering(82) 00:12:43.464 fused_ordering(83) 00:12:43.464 fused_ordering(84) 00:12:43.464 fused_ordering(85) 00:12:43.464 fused_ordering(86) 00:12:43.464 fused_ordering(87) 00:12:43.464 fused_ordering(88) 00:12:43.464 fused_ordering(89) 00:12:43.464 
fused_ordering(90) 00:12:43.464 fused_ordering(91) 00:12:43.464 fused_ordering(92) 00:12:43.464 fused_ordering(93) 00:12:43.464 fused_ordering(94) 00:12:43.464 fused_ordering(95) 00:12:43.464 fused_ordering(96) 00:12:43.464 fused_ordering(97) 00:12:43.464 fused_ordering(98) 00:12:43.464 fused_ordering(99) 00:12:43.464 fused_ordering(100) 00:12:43.464 fused_ordering(101) 00:12:43.464 fused_ordering(102) 00:12:43.464 fused_ordering(103) 00:12:43.464 fused_ordering(104) 00:12:43.464 fused_ordering(105) 00:12:43.464 fused_ordering(106) 00:12:43.464 fused_ordering(107) 00:12:43.464 fused_ordering(108) 00:12:43.464 fused_ordering(109) 00:12:43.464 fused_ordering(110) 00:12:43.464 fused_ordering(111) 00:12:43.464 fused_ordering(112) 00:12:43.464 fused_ordering(113) 00:12:43.464 fused_ordering(114) 00:12:43.464 fused_ordering(115) 00:12:43.464 fused_ordering(116) 00:12:43.464 fused_ordering(117) 00:12:43.464 fused_ordering(118) 00:12:43.464 fused_ordering(119) 00:12:43.464 fused_ordering(120) 00:12:43.464 fused_ordering(121) 00:12:43.464 fused_ordering(122) 00:12:43.464 fused_ordering(123) 00:12:43.464 fused_ordering(124) 00:12:43.464 fused_ordering(125) 00:12:43.464 fused_ordering(126) 00:12:43.464 fused_ordering(127) 00:12:43.464 fused_ordering(128) 00:12:43.464 fused_ordering(129) 00:12:43.464 fused_ordering(130) 00:12:43.464 fused_ordering(131) 00:12:43.464 fused_ordering(132) 00:12:43.464 fused_ordering(133) 00:12:43.464 fused_ordering(134) 00:12:43.464 fused_ordering(135) 00:12:43.464 fused_ordering(136) 00:12:43.464 fused_ordering(137) 00:12:43.464 fused_ordering(138) 00:12:43.464 fused_ordering(139) 00:12:43.464 fused_ordering(140) 00:12:43.464 fused_ordering(141) 00:12:43.464 fused_ordering(142) 00:12:43.464 fused_ordering(143) 00:12:43.464 fused_ordering(144) 00:12:43.464 fused_ordering(145) 00:12:43.464 fused_ordering(146) 00:12:43.464 fused_ordering(147) 00:12:43.464 fused_ordering(148) 00:12:43.464 fused_ordering(149) 00:12:43.464 fused_ordering(150) 
00:12:43.464 fused_ordering(151) 00:12:43.464 fused_ordering(152) 00:12:43.464 fused_ordering(153) 00:12:43.464 fused_ordering(154) 00:12:43.464 fused_ordering(155) 00:12:43.464 fused_ordering(156) 00:12:43.464 fused_ordering(157) 00:12:43.464 fused_ordering(158) 00:12:43.464 fused_ordering(159) 00:12:43.464 fused_ordering(160) 00:12:43.464 fused_ordering(161) 00:12:43.464 fused_ordering(162) 00:12:43.464 fused_ordering(163) 00:12:43.464 fused_ordering(164) 00:12:43.464 fused_ordering(165) 00:12:43.464 fused_ordering(166) 00:12:43.464 fused_ordering(167) 00:12:43.464 fused_ordering(168) 00:12:43.464 fused_ordering(169) 00:12:43.464 fused_ordering(170) 00:12:43.464 fused_ordering(171) 00:12:43.464 fused_ordering(172) 00:12:43.464 fused_ordering(173) 00:12:43.464 fused_ordering(174) 00:12:43.464 fused_ordering(175) 00:12:43.464 fused_ordering(176) 00:12:43.464 fused_ordering(177) 00:12:43.464 fused_ordering(178) 00:12:43.464 fused_ordering(179) 00:12:43.464 fused_ordering(180) 00:12:43.464 fused_ordering(181) 00:12:43.464 fused_ordering(182) 00:12:43.464 fused_ordering(183) 00:12:43.464 fused_ordering(184) 00:12:43.464 fused_ordering(185) 00:12:43.464 fused_ordering(186) 00:12:43.464 fused_ordering(187) 00:12:43.464 fused_ordering(188) 00:12:43.464 fused_ordering(189) 00:12:43.464 fused_ordering(190) 00:12:43.464 fused_ordering(191) 00:12:43.464 fused_ordering(192) 00:12:43.464 fused_ordering(193) 00:12:43.464 fused_ordering(194) 00:12:43.464 fused_ordering(195) 00:12:43.464 fused_ordering(196) 00:12:43.464 fused_ordering(197) 00:12:43.464 fused_ordering(198) 00:12:43.464 fused_ordering(199) 00:12:43.464 fused_ordering(200) 00:12:43.464 fused_ordering(201) 00:12:43.464 fused_ordering(202) 00:12:43.464 fused_ordering(203) 00:12:43.464 fused_ordering(204) 00:12:43.464 fused_ordering(205) 00:12:43.723 fused_ordering(206) 00:12:43.723 fused_ordering(207) 00:12:43.723 fused_ordering(208) 00:12:43.723 fused_ordering(209) 00:12:43.723 fused_ordering(210) 00:12:43.723 
fused_ordering(211) 00:12:43.723 fused_ordering(212) 00:12:43.723 fused_ordering(213) 00:12:43.723 fused_ordering(214) 00:12:43.723 fused_ordering(215) 00:12:43.723 fused_ordering(216) 00:12:43.723 fused_ordering(217) 00:12:43.723 fused_ordering(218) 00:12:43.723 fused_ordering(219) 00:12:43.723 fused_ordering(220) 00:12:43.723 fused_ordering(221) 00:12:43.723 fused_ordering(222) 00:12:43.723 fused_ordering(223) 00:12:43.723 fused_ordering(224) 00:12:43.723 fused_ordering(225) 00:12:43.723 fused_ordering(226) 00:12:43.723 fused_ordering(227) 00:12:43.723 fused_ordering(228) 00:12:43.723 fused_ordering(229) 00:12:43.723 fused_ordering(230) 00:12:43.723 fused_ordering(231) 00:12:43.723 fused_ordering(232) 00:12:43.723 fused_ordering(233) 00:12:43.723 fused_ordering(234) 00:12:43.723 fused_ordering(235) 00:12:43.723 fused_ordering(236) 00:12:43.723 fused_ordering(237) 00:12:43.723 fused_ordering(238) 00:12:43.723 fused_ordering(239) 00:12:43.723 fused_ordering(240) 00:12:43.723 fused_ordering(241) 00:12:43.723 fused_ordering(242) 00:12:43.723 fused_ordering(243) 00:12:43.723 fused_ordering(244) 00:12:43.723 fused_ordering(245) 00:12:43.723 fused_ordering(246) 00:12:43.723 fused_ordering(247) 00:12:43.723 fused_ordering(248) 00:12:43.723 fused_ordering(249) 00:12:43.723 fused_ordering(250) 00:12:43.723 fused_ordering(251) 00:12:43.723 fused_ordering(252) 00:12:43.723 fused_ordering(253) 00:12:43.723 fused_ordering(254) 00:12:43.723 fused_ordering(255) 00:12:43.723 fused_ordering(256) 00:12:43.723 fused_ordering(257) 00:12:43.723 fused_ordering(258) 00:12:43.723 fused_ordering(259) 00:12:43.723 fused_ordering(260) 00:12:43.723 fused_ordering(261) 00:12:43.723 fused_ordering(262) 00:12:43.723 fused_ordering(263) 00:12:43.723 fused_ordering(264) 00:12:43.723 fused_ordering(265) 00:12:43.723 fused_ordering(266) 00:12:43.723 fused_ordering(267) 00:12:43.723 fused_ordering(268) 00:12:43.723 fused_ordering(269) 00:12:43.723 fused_ordering(270) 00:12:43.723 fused_ordering(271) 
00:12:43.723 fused_ordering(272) 00:12:43.723 fused_ordering(273) 00:12:43.723 fused_ordering(274) 00:12:43.723 fused_ordering(275) 00:12:43.723 fused_ordering(276) 00:12:43.723 fused_ordering(277) 00:12:43.723 fused_ordering(278) 00:12:43.723 fused_ordering(279) 00:12:43.723 fused_ordering(280) 00:12:43.723 fused_ordering(281) 00:12:43.723 fused_ordering(282) 00:12:43.723 fused_ordering(283) 00:12:43.723 fused_ordering(284) 00:12:43.723 fused_ordering(285) 00:12:43.723 fused_ordering(286) 00:12:43.723 fused_ordering(287) 00:12:43.723 fused_ordering(288) 00:12:43.723 fused_ordering(289) 00:12:43.723 fused_ordering(290) 00:12:43.723 fused_ordering(291) 00:12:43.723 fused_ordering(292) 00:12:43.723 fused_ordering(293) 00:12:43.723 fused_ordering(294) 00:12:43.723 fused_ordering(295) 00:12:43.723 fused_ordering(296) 00:12:43.723 fused_ordering(297) 00:12:43.723 fused_ordering(298) 00:12:43.723 fused_ordering(299) 00:12:43.723 fused_ordering(300) 00:12:43.723 fused_ordering(301) 00:12:43.723 fused_ordering(302) 00:12:43.723 fused_ordering(303) 00:12:43.723 fused_ordering(304) 00:12:43.723 fused_ordering(305) 00:12:43.723 fused_ordering(306) 00:12:43.723 fused_ordering(307) 00:12:43.723 fused_ordering(308) 00:12:43.723 fused_ordering(309) 00:12:43.723 fused_ordering(310) 00:12:43.723 fused_ordering(311) 00:12:43.723 fused_ordering(312) 00:12:43.723 fused_ordering(313) 00:12:43.723 fused_ordering(314) 00:12:43.723 fused_ordering(315) 00:12:43.723 fused_ordering(316) 00:12:43.723 fused_ordering(317) 00:12:43.723 fused_ordering(318) 00:12:43.723 fused_ordering(319) 00:12:43.723 fused_ordering(320) 00:12:43.723 fused_ordering(321) 00:12:43.723 fused_ordering(322) 00:12:43.723 fused_ordering(323) 00:12:43.723 fused_ordering(324) 00:12:43.723 fused_ordering(325) 00:12:43.723 fused_ordering(326) 00:12:43.723 fused_ordering(327) 00:12:43.723 fused_ordering(328) 00:12:43.723 fused_ordering(329) 00:12:43.723 fused_ordering(330) 00:12:43.723 fused_ordering(331) 00:12:43.723 
00:12:43.723 fused_ordering(332) ... 00:12:45.118 fused_ordering(997) [repetitive fused_ordering iteration output, counters 332 through 997 at timestamps 00:12:43.723-00:12:45.118, elided]
00:12:45.118 fused_ordering(998) 00:12:45.118 fused_ordering(999) 00:12:45.118 fused_ordering(1000) 00:12:45.118 fused_ordering(1001) 00:12:45.118 fused_ordering(1002) 00:12:45.118 fused_ordering(1003) 00:12:45.118 fused_ordering(1004) 00:12:45.118 fused_ordering(1005) 00:12:45.118 fused_ordering(1006) 00:12:45.118 fused_ordering(1007) 00:12:45.118 fused_ordering(1008) 00:12:45.118 fused_ordering(1009) 00:12:45.118 fused_ordering(1010) 00:12:45.118 fused_ordering(1011) 00:12:45.118 fused_ordering(1012) 00:12:45.118 fused_ordering(1013) 00:12:45.118 fused_ordering(1014) 00:12:45.118 fused_ordering(1015) 00:12:45.118 fused_ordering(1016) 00:12:45.118 fused_ordering(1017) 00:12:45.118 fused_ordering(1018) 00:12:45.118 fused_ordering(1019) 00:12:45.118 fused_ordering(1020) 00:12:45.119 fused_ordering(1021) 00:12:45.119 fused_ordering(1022) 00:12:45.119 fused_ordering(1023) 00:12:45.119 07:08:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:12:45.119 07:08:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:12:45.119 07:08:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:45.119 07:08:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:12:45.119 07:08:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:45.119 07:08:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:12:45.119 07:08:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:45.119 07:08:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:45.119 rmmod nvme_tcp 00:12:45.119 rmmod nvme_fabrics 00:12:45.119 rmmod nvme_keyring 00:12:45.119 07:08:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r 
nvme-fabrics 00:12:45.119 07:08:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:12:45.119 07:08:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:12:45.119 07:08:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 1140228 ']' 00:12:45.119 07:08:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 1140228 00:12:45.119 07:08:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # '[' -z 1140228 ']' 00:12:45.119 07:08:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # kill -0 1140228 00:12:45.119 07:08:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@957 -- # uname 00:12:45.119 07:08:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:45.119 07:08:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1140228 00:12:45.119 07:08:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:12:45.119 07:08:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:12:45.119 07:08:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1140228' 00:12:45.119 killing process with pid 1140228 00:12:45.119 07:08:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@971 -- # kill 1140228 00:12:45.119 07:08:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@976 -- # wait 1140228 00:12:45.119 07:08:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:45.119 07:08:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ tcp == 
\t\c\p ]] 00:12:45.119 07:08:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:45.119 07:08:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:12:45.119 07:08:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-save 00:12:45.119 07:08:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:45.119 07:08:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-restore 00:12:45.119 07:08:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:45.119 07:08:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:45.119 07:08:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:45.119 07:08:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:45.119 07:08:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:47.655 07:08:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:47.655 00:12:47.655 real 0m10.746s 00:12:47.655 user 0m5.023s 00:12:47.655 sys 0m5.881s 00:12:47.655 07:08:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:47.655 07:08:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:47.655 ************************************ 00:12:47.655 END TEST nvmf_fused_ordering 00:12:47.655 ************************************ 00:12:47.655 07:08:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:12:47.655 07:08:51 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:12:47.655 07:08:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:47.655 07:08:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:47.655 ************************************ 00:12:47.655 START TEST nvmf_ns_masking 00:12:47.655 ************************************ 00:12:47.655 07:08:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1127 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:12:47.655 * Looking for test storage... 00:12:47.655 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:47.655 07:08:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:47.655 07:08:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1691 -- # lcov --version 00:12:47.655 07:08:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:47.655 07:08:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:47.655 07:08:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:47.655 07:08:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:47.655 07:08:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:47.655 07:08:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:12:47.655 07:08:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:12:47.655 07:08:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:12:47.655 07:08:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:12:47.655 07:08:51 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:12:47.655 07:08:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:12:47.655 07:08:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:12:47.656 07:08:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:47.656 07:08:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:12:47.656 07:08:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:12:47.656 07:08:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:47.656 07:08:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:47.656 07:08:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:12:47.656 07:08:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:12:47.656 07:08:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:47.656 07:08:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:12:47.656 07:08:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:12:47.656 07:08:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:12:47.656 07:08:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:12:47.656 07:08:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:47.656 07:08:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:12:47.656 07:08:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:12:47.656 07:08:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:47.656 07:08:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:47.656 07:08:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:12:47.656 07:08:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:47.656 07:08:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:47.656 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:47.656 --rc genhtml_branch_coverage=1 00:12:47.656 --rc genhtml_function_coverage=1 00:12:47.656 --rc genhtml_legend=1 00:12:47.656 --rc geninfo_all_blocks=1 00:12:47.656 --rc geninfo_unexecuted_blocks=1 00:12:47.656 00:12:47.656 ' 00:12:47.656 07:08:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:47.656 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:47.656 --rc genhtml_branch_coverage=1 00:12:47.656 --rc genhtml_function_coverage=1 00:12:47.656 --rc genhtml_legend=1 00:12:47.656 --rc geninfo_all_blocks=1 00:12:47.656 --rc geninfo_unexecuted_blocks=1 00:12:47.656 00:12:47.656 ' 00:12:47.656 07:08:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:47.656 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:47.656 --rc genhtml_branch_coverage=1 00:12:47.656 --rc genhtml_function_coverage=1 00:12:47.656 --rc genhtml_legend=1 00:12:47.656 --rc geninfo_all_blocks=1 00:12:47.656 --rc geninfo_unexecuted_blocks=1 00:12:47.656 00:12:47.656 ' 00:12:47.656 07:08:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:47.656 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:47.656 --rc genhtml_branch_coverage=1 00:12:47.656 --rc 
genhtml_function_coverage=1 00:12:47.656 --rc genhtml_legend=1 00:12:47.656 --rc geninfo_all_blocks=1 00:12:47.656 --rc geninfo_unexecuted_blocks=1 00:12:47.656 00:12:47.656 ' 00:12:47.656 07:08:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:47.656 07:08:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:12:47.656 07:08:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:47.656 07:08:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:47.656 07:08:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:47.656 07:08:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:47.656 07:08:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:47.656 07:08:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:47.656 07:08:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:47.656 07:08:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:47.656 07:08:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:47.656 07:08:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:47.656 07:08:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:47.656 07:08:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:12:47.656 07:08:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:47.656 07:08:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:47.656 07:08:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:47.656 07:08:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:47.656 07:08:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:47.656 07:08:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:12:47.656 07:08:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:47.656 07:08:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:47.656 07:08:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:47.656 07:08:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:47.656 07:08:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:47.656 07:08:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:47.656 07:08:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:12:47.656 07:08:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:47.656 07:08:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:12:47.656 07:08:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:47.656 07:08:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:47.656 07:08:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:47.656 07:08:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:47.656 07:08:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:47.656 07:08:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:47.656 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:47.656 07:08:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:47.656 07:08:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:47.656 07:08:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:47.656 07:08:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:47.656 07:08:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:12:47.656 07:08:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:12:47.656 07:08:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:12:47.656 07:08:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=d85c8779-584d-4c76-8309-aee712632ad0 00:12:47.656 07:08:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:12:47.656 07:08:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=792964e8-be8d-4a39-88f1-832df5427f18 00:12:47.656 07:08:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:12:47.656 07:08:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:12:47.656 07:08:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:12:47.656 07:08:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:12:47.656 07:08:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=269c6f2e-818e-4ba3-b5b2-25dbd773dabc 00:12:47.656 07:08:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:12:47.656 07:08:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:47.657 07:08:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:47.657 07:08:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:47.657 07:08:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g 
is_hw=no 00:12:47.657 07:08:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:47.657 07:08:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:47.657 07:08:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:47.657 07:08:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:47.657 07:08:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:47.657 07:08:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:47.657 07:08:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:12:47.657 07:08:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:54.226 07:08:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:54.226 07:08:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:12:54.226 07:08:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:54.226 07:08:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:54.226 07:08:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:54.226 07:08:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:54.226 07:08:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:54.226 07:08:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:12:54.226 07:08:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:54.226 07:08:57 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:12:54.226 07:08:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:12:54.226 07:08:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:12:54.226 07:08:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:12:54.226 07:08:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:12:54.226 07:08:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:12:54.226 07:08:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:54.226 07:08:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:54.226 07:08:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:54.226 07:08:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:54.226 07:08:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:54.227 07:08:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:54.227 07:08:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:54.227 07:08:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:54.227 07:08:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:54.227 07:08:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:54.227 07:08:57 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:54.227 07:08:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:54.227 07:08:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:54.227 07:08:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:54.227 07:08:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:54.227 07:08:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:54.227 07:08:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:54.227 07:08:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:54.227 07:08:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:54.227 07:08:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:54.227 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:54.227 07:08:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:54.227 07:08:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:54.227 07:08:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:54.227 07:08:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:54.227 07:08:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:54.227 07:08:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:54.227 07:08:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:54.227 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:54.227 07:08:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:54.227 07:08:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:54.227 07:08:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:54.227 07:08:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:54.227 07:08:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:54.227 07:08:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:54.227 07:08:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:54.227 07:08:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:54.227 07:08:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:54.227 07:08:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:54.227 07:08:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:54.227 07:08:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:54.227 07:08:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:54.227 07:08:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:54.227 07:08:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:54.227 07:08:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: 
cvl_0_0' 00:12:54.227 Found net devices under 0000:86:00.0: cvl_0_0 00:12:54.227 07:08:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:54.227 07:08:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:54.227 07:08:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:54.227 07:08:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:54.227 07:08:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:54.227 07:08:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:54.227 07:08:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:54.227 07:08:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:54.227 07:08:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:54.227 Found net devices under 0000:86:00.1: cvl_0_1 00:12:54.227 07:08:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:54.227 07:08:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:54.227 07:08:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # is_hw=yes 00:12:54.227 07:08:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:54.227 07:08:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:54.227 07:08:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:54.227 07:08:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:54.227 07:08:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:54.227 07:08:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:54.227 07:08:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:54.227 07:08:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:54.227 07:08:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:54.227 07:08:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:54.227 07:08:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:54.227 07:08:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:54.227 07:08:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:54.227 07:08:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:54.227 07:08:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:54.227 07:08:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:54.227 07:08:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:54.227 07:08:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:54.227 07:08:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:54.227 07:08:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec 
cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:54.227 07:08:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:54.227 07:08:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:54.227 07:08:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:54.227 07:08:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:54.227 07:08:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:54.227 07:08:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:54.227 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:54.227 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.338 ms 00:12:54.227 00:12:54.227 --- 10.0.0.2 ping statistics --- 00:12:54.227 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:54.227 rtt min/avg/max/mdev = 0.338/0.338/0.338/0.000 ms 00:12:54.227 07:08:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:54.227 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:54.227 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.140 ms 00:12:54.227 00:12:54.227 --- 10.0.0.1 ping statistics --- 00:12:54.227 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:54.227 rtt min/avg/max/mdev = 0.140/0.140/0.140/0.000 ms 00:12:54.227 07:08:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:54.227 07:08:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # return 0 00:12:54.227 07:08:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:54.227 07:08:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:54.227 07:08:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:54.227 07:08:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:54.227 07:08:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:54.227 07:08:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:54.227 07:08:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:54.227 07:08:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:12:54.227 07:08:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:54.227 07:08:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:54.227 07:08:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:54.227 07:08:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=1144231 00:12:54.227 07:08:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 1144231 
00:12:54.227 07:08:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:12:54.227 07:08:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@833 -- # '[' -z 1144231 ']' 00:12:54.227 07:08:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:54.227 07:08:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:54.228 07:08:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:54.228 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:54.228 07:08:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:54.228 07:08:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:54.228 [2024-11-20 07:08:58.021082] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 00:12:54.228 [2024-11-20 07:08:58.021129] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:54.228 [2024-11-20 07:08:58.095295] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:54.228 [2024-11-20 07:08:58.134121] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:54.228 [2024-11-20 07:08:58.134154] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:12:54.228 [2024-11-20 07:08:58.134161] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:54.228 [2024-11-20 07:08:58.134167] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:54.228 [2024-11-20 07:08:58.134172] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:54.228 [2024-11-20 07:08:58.134729] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:54.228 07:08:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:54.228 07:08:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@866 -- # return 0 00:12:54.228 07:08:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:54.228 07:08:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:54.228 07:08:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:54.228 07:08:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:54.228 07:08:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:12:54.228 [2024-11-20 07:08:58.442639] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:54.228 07:08:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:12:54.228 07:08:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:12:54.228 07:08:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 
00:12:54.228 Malloc1 00:12:54.228 07:08:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:12:54.486 Malloc2 00:12:54.486 07:08:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:54.752 07:08:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:12:55.018 07:08:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:55.018 [2024-11-20 07:08:59.490200] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:55.018 07:08:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:12:55.018 07:08:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 269c6f2e-818e-4ba3-b5b2-25dbd773dabc -a 10.0.0.2 -s 4420 -i 4 00:12:55.276 07:08:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:12:55.276 07:08:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # local i=0 00:12:55.276 07:08:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:12:55.276 07:08:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:12:55.276 07:08:59 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # sleep 2 00:12:57.179 07:09:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:12:57.179 07:09:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:12:57.179 07:09:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:12:57.179 07:09:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:12:57.179 07:09:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:12:57.179 07:09:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # return 0 00:12:57.179 07:09:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:12:57.179 07:09:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:12:57.179 07:09:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:12:57.179 07:09:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:12:57.179 07:09:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:12:57.179 07:09:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:57.179 07:09:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:57.438 [ 0]:0x1 00:12:57.438 07:09:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:57.438 07:09:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:57.438 
07:09:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=765ff08e8ede4e808faa77978d0b1c72 00:12:57.438 07:09:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 765ff08e8ede4e808faa77978d0b1c72 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:57.438 07:09:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:12:57.438 07:09:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:12:57.438 07:09:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:57.438 07:09:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:57.696 [ 0]:0x1 00:12:57.696 07:09:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:57.696 07:09:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:57.696 07:09:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=765ff08e8ede4e808faa77978d0b1c72 00:12:57.696 07:09:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 765ff08e8ede4e808faa77978d0b1c72 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:57.696 07:09:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:12:57.696 07:09:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:57.696 07:09:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:57.696 [ 1]:0x2 00:12:57.696 07:09:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 
00:12:57.696 07:09:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:57.696 07:09:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=ef9b08859ad74329bb9494e2f56140fc 00:12:57.696 07:09:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ ef9b08859ad74329bb9494e2f56140fc != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:57.696 07:09:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:12:57.696 07:09:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:57.696 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:57.696 07:09:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:57.955 07:09:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:12:58.213 07:09:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:12:58.213 07:09:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 269c6f2e-818e-4ba3-b5b2-25dbd773dabc -a 10.0.0.2 -s 4420 -i 4 00:12:58.470 07:09:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:12:58.470 07:09:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # local i=0 00:12:58.470 07:09:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:12:58.470 07:09:02 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # [[ -n 1 ]] 00:12:58.470 07:09:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_device_counter=1 00:12:58.470 07:09:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # sleep 2 00:13:00.373 07:09:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:13:00.373 07:09:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:13:00.373 07:09:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:13:00.373 07:09:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:13:00.373 07:09:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:13:00.373 07:09:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # return 0 00:13:00.373 07:09:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:13:00.373 07:09:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:13:00.373 07:09:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:13:00.373 07:09:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:13:00.373 07:09:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:13:00.373 07:09:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:13:00.373 07:09:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 
00:13:00.373 07:09:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:13:00.373 07:09:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:00.373 07:09:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:13:00.373 07:09:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:00.373 07:09:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:13:00.373 07:09:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:00.373 07:09:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:00.374 07:09:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:00.374 07:09:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:00.632 07:09:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:13:00.632 07:09:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:00.632 07:09:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:13:00.632 07:09:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:00.632 07:09:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:00.632 07:09:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:00.632 07:09:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- 
# ns_is_visible 0x2 00:13:00.632 07:09:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:00.632 07:09:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:00.632 [ 0]:0x2 00:13:00.632 07:09:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:00.632 07:09:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:00.632 07:09:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=ef9b08859ad74329bb9494e2f56140fc 00:13:00.632 07:09:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ ef9b08859ad74329bb9494e2f56140fc != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:00.632 07:09:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:00.891 07:09:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:13:00.891 07:09:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:00.891 07:09:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:00.891 [ 0]:0x1 00:13:00.891 07:09:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:00.891 07:09:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:00.891 07:09:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=765ff08e8ede4e808faa77978d0b1c72 00:13:00.891 07:09:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 765ff08e8ede4e808faa77978d0b1c72 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:00.891 07:09:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:13:00.891 07:09:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:00.891 07:09:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:00.891 [ 1]:0x2 00:13:00.891 07:09:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:00.891 07:09:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:00.891 07:09:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=ef9b08859ad74329bb9494e2f56140fc 00:13:00.891 07:09:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ ef9b08859ad74329bb9494e2f56140fc != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:00.891 07:09:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:01.150 07:09:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:13:01.150 07:09:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:13:01.150 07:09:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:13:01.150 07:09:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:13:01.150 07:09:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:01.150 07:09:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t 
ns_is_visible 00:13:01.150 07:09:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:01.150 07:09:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:13:01.150 07:09:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:01.150 07:09:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:01.150 07:09:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:01.150 07:09:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:01.150 07:09:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:13:01.150 07:09:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:01.150 07:09:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:13:01.150 07:09:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:01.150 07:09:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:01.150 07:09:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:01.150 07:09:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:13:01.150 07:09:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:01.150 07:09:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:01.150 [ 0]:0x2 00:13:01.150 07:09:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:01.150 07:09:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:01.150 07:09:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=ef9b08859ad74329bb9494e2f56140fc 00:13:01.150 07:09:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ ef9b08859ad74329bb9494e2f56140fc != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:01.150 07:09:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:13:01.150 07:09:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:01.409 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:01.409 07:09:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:01.409 07:09:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:13:01.409 07:09:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 269c6f2e-818e-4ba3-b5b2-25dbd773dabc -a 10.0.0.2 -s 4420 -i 4 00:13:01.667 07:09:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:13:01.667 07:09:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # local i=0 00:13:01.667 07:09:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:13:01.667 07:09:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # [[ -n 2 ]] 00:13:01.667 07:09:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@1203 -- # nvme_device_counter=2 00:13:01.667 07:09:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # sleep 2 00:13:03.568 07:09:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:13:03.568 07:09:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:13:03.568 07:09:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:13:03.568 07:09:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # nvme_devices=2 00:13:03.568 07:09:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:13:03.568 07:09:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # return 0 00:13:03.568 07:09:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:13:03.568 07:09:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:13:03.826 07:09:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:13:03.826 07:09:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:13:03.826 07:09:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:13:03.826 07:09:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:03.826 07:09:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:03.826 [ 0]:0x1 00:13:03.826 07:09:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:03.826 07:09:08 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:03.826 07:09:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=765ff08e8ede4e808faa77978d0b1c72 00:13:03.826 07:09:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 765ff08e8ede4e808faa77978d0b1c72 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:03.826 07:09:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:13:03.826 07:09:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:03.826 07:09:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:03.826 [ 1]:0x2 00:13:03.826 07:09:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:03.826 07:09:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:03.826 07:09:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=ef9b08859ad74329bb9494e2f56140fc 00:13:03.826 07:09:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ ef9b08859ad74329bb9494e2f56140fc != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:03.827 07:09:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:04.086 07:09:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:13:04.086 07:09:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:13:04.086 07:09:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:13:04.086 
07:09:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:13:04.086 07:09:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:04.086 07:09:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:13:04.086 07:09:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:04.086 07:09:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:13:04.086 07:09:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:04.086 07:09:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:04.086 07:09:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:04.086 07:09:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:04.086 07:09:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:13:04.086 07:09:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:04.086 07:09:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:13:04.086 07:09:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:04.086 07:09:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:04.086 07:09:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:04.086 07:09:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # 
ns_is_visible 0x2 00:13:04.086 07:09:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:04.086 07:09:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:04.086 [ 0]:0x2 00:13:04.086 07:09:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:04.086 07:09:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:04.086 07:09:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=ef9b08859ad74329bb9494e2f56140fc 00:13:04.086 07:09:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ ef9b08859ad74329bb9494e2f56140fc != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:04.086 07:09:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:13:04.086 07:09:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:13:04.086 07:09:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:13:04.086 07:09:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:04.086 07:09:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:04.086 07:09:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:04.086 07:09:08 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:04.086 07:09:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:04.086 07:09:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:04.086 07:09:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:04.086 07:09:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:13:04.086 07:09:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:13:04.345 [2024-11-20 07:09:08.772520] nvmf_rpc.c:1870:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:13:04.345 request: 00:13:04.345 { 00:13:04.345 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:04.345 "nsid": 2, 00:13:04.345 "host": "nqn.2016-06.io.spdk:host1", 00:13:04.345 "method": "nvmf_ns_remove_host", 00:13:04.345 "req_id": 1 00:13:04.345 } 00:13:04.345 Got JSON-RPC error response 00:13:04.345 response: 00:13:04.345 { 00:13:04.345 "code": -32602, 00:13:04.345 "message": "Invalid parameters" 00:13:04.345 } 00:13:04.345 07:09:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:13:04.345 07:09:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:04.345 07:09:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:04.345 07:09:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:04.345 07:09:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:13:04.345 07:09:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:13:04.345 07:09:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:13:04.345 07:09:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:13:04.345 07:09:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:04.345 07:09:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:13:04.345 07:09:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:04.345 07:09:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:13:04.345 07:09:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:04.345 07:09:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:04.345 07:09:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:04.345 07:09:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:04.345 07:09:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:13:04.345 07:09:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:04.345 07:09:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:13:04.345 07:09:08 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:04.345 07:09:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:04.345 07:09:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:04.345 07:09:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:13:04.345 07:09:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:04.345 07:09:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:04.345 [ 0]:0x2 00:13:04.345 07:09:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:04.345 07:09:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:04.604 07:09:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=ef9b08859ad74329bb9494e2f56140fc 00:13:04.604 07:09:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ ef9b08859ad74329bb9494e2f56140fc != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:04.604 07:09:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:13:04.604 07:09:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:04.604 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:04.604 07:09:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=1146435 00:13:04.604 07:09:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:13:04.604 07:09:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 1146435 
/var/tmp/host.sock 00:13:04.605 07:09:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:13:04.605 07:09:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@833 -- # '[' -z 1146435 ']' 00:13:04.605 07:09:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/host.sock 00:13:04.605 07:09:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:04.605 07:09:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:13:04.605 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:13:04.605 07:09:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:04.605 07:09:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:04.605 [2024-11-20 07:09:09.024314] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 
00:13:04.605 [2024-11-20 07:09:09.024365] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1146435 ] 00:13:04.605 [2024-11-20 07:09:09.101808] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:04.605 [2024-11-20 07:09:09.145539] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:04.862 07:09:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:04.862 07:09:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@866 -- # return 0 00:13:04.862 07:09:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:05.120 07:09:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:05.378 07:09:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid d85c8779-584d-4c76-8309-aee712632ad0 00:13:05.378 07:09:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:13:05.379 07:09:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g D85C8779584D4C768309AEE712632AD0 -i 00:13:05.379 07:09:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 792964e8-be8d-4a39-88f1-832df5427f18 00:13:05.379 07:09:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:13:05.379 07:09:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 792964E8BE8D4A3988F1832DF5427F18 -i 00:13:05.638 07:09:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:05.896 07:09:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:13:06.155 07:09:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:13:06.155 07:09:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:13:06.414 nvme0n1 00:13:06.414 07:09:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:13:06.414 07:09:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:13:06.672 nvme1n2 00:13:06.672 07:09:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:13:06.672 07:09:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:13:06.672 07:09:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:13:06.672 07:09:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:13:06.672 07:09:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:13:06.929 07:09:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:13:06.929 07:09:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:13:06.929 07:09:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:13:06.929 07:09:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:13:07.238 07:09:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ d85c8779-584d-4c76-8309-aee712632ad0 == \d\8\5\c\8\7\7\9\-\5\8\4\d\-\4\c\7\6\-\8\3\0\9\-\a\e\e\7\1\2\6\3\2\a\d\0 ]] 00:13:07.238 07:09:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:13:07.238 07:09:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:13:07.238 07:09:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:13:07.238 07:09:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 792964e8-be8d-4a39-88f1-832df5427f18 == \7\9\2\9\6\4\e\8\-\b\e\8\d\-\4\a\3\9\-\8\8\f\1\-\8\3\2\d\f\5\4\2\7\f\1\8 ]] 00:13:07.238 07:09:11 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:07.530 07:09:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:07.833 07:09:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid d85c8779-584d-4c76-8309-aee712632ad0 00:13:07.833 07:09:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:13:07.833 07:09:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g D85C8779584D4C768309AEE712632AD0 00:13:07.833 07:09:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:13:07.833 07:09:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g D85C8779584D4C768309AEE712632AD0 00:13:07.833 07:09:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:07.833 07:09:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:07.833 07:09:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:07.833 07:09:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:07.833 07:09:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:07.833 07:09:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:07.833 07:09:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:07.833 07:09:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:13:07.833 07:09:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g D85C8779584D4C768309AEE712632AD0 00:13:07.833 [2024-11-20 07:09:12.346386] bdev.c:8480:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 00:13:07.833 [2024-11-20 07:09:12.346419] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:13:07.833 [2024-11-20 07:09:12.346428] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.833 request: 00:13:07.833 { 00:13:07.833 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:07.833 "namespace": { 00:13:07.833 "bdev_name": "invalid", 00:13:07.833 "nsid": 1, 00:13:07.833 "nguid": "D85C8779584D4C768309AEE712632AD0", 00:13:07.833 "no_auto_visible": false 00:13:07.833 }, 00:13:07.833 "method": "nvmf_subsystem_add_ns", 00:13:07.833 "req_id": 1 00:13:07.833 } 00:13:07.833 Got JSON-RPC error response 00:13:07.833 response: 00:13:07.833 { 00:13:07.833 "code": -32602, 00:13:07.833 "message": "Invalid parameters" 00:13:07.833 } 00:13:07.833 07:09:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:13:07.833 07:09:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:07.833 07:09:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:07.833 07:09:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:07.833 07:09:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid d85c8779-584d-4c76-8309-aee712632ad0 00:13:07.833 07:09:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:13:07.833 07:09:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g D85C8779584D4C768309AEE712632AD0 -i 00:13:08.091 07:09:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:13:10.625 07:09:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:13:10.625 07:09:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:13:10.625 07:09:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:13:10.625 07:09:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:13:10.625 07:09:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 1146435 00:13:10.625 07:09:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@952 -- # '[' -z 1146435 ']' 00:13:10.625 07:09:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # kill -0 1146435 00:13:10.625 07:09:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@957 -- # uname 00:13:10.625 07:09:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:10.625 07:09:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1146435 00:13:10.625 07:09:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:13:10.625 07:09:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:13:10.625 07:09:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1146435' 00:13:10.625 killing process with pid 1146435 00:13:10.625 07:09:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@971 -- # kill 1146435 00:13:10.625 07:09:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@976 -- # wait 1146435 00:13:10.625 07:09:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:10.884 07:09:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:13:10.884 07:09:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:13:10.884 07:09:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:10.884 07:09:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:13:10.884 07:09:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:10.884 07:09:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:13:10.884 07:09:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:10.884 07:09:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:10.884 rmmod nvme_tcp 00:13:10.884 rmmod 
nvme_fabrics 00:13:10.884 rmmod nvme_keyring 00:13:10.884 07:09:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:10.884 07:09:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:13:10.884 07:09:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:13:10.884 07:09:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@517 -- # '[' -n 1144231 ']' 00:13:10.884 07:09:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 1144231 00:13:10.884 07:09:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@952 -- # '[' -z 1144231 ']' 00:13:10.884 07:09:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # kill -0 1144231 00:13:10.884 07:09:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@957 -- # uname 00:13:10.884 07:09:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:10.884 07:09:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1144231 00:13:11.143 07:09:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:11.144 07:09:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:11.144 07:09:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1144231' 00:13:11.144 killing process with pid 1144231 00:13:11.144 07:09:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@971 -- # kill 1144231 00:13:11.144 07:09:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@976 -- # wait 1144231 00:13:11.144 07:09:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:11.144 
07:09:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:11.144 07:09:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:11.144 07:09:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:13:11.144 07:09:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-save 00:13:11.144 07:09:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:11.144 07:09:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-restore 00:13:11.144 07:09:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:11.144 07:09:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:11.144 07:09:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:11.144 07:09:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:11.144 07:09:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:13.678 07:09:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:13.678 00:13:13.678 real 0m25.934s 00:13:13.678 user 0m31.005s 00:13:13.678 sys 0m7.052s 00:13:13.678 07:09:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:13.678 07:09:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:13.678 ************************************ 00:13:13.678 END TEST nvmf_ns_masking 00:13:13.678 ************************************ 00:13:13.678 07:09:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:13:13.678 07:09:17 nvmf_tcp.nvmf_target_extra 
-- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:13:13.678 07:09:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:13:13.678 07:09:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:13.678 07:09:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:13.678 ************************************ 00:13:13.678 START TEST nvmf_nvme_cli 00:13:13.678 ************************************ 00:13:13.678 07:09:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:13:13.678 * Looking for test storage... 00:13:13.678 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:13.678 07:09:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:13:13.678 07:09:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1691 -- # lcov --version 00:13:13.678 07:09:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:13:13.678 07:09:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:13:13.678 07:09:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:13.678 07:09:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:13.678 07:09:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:13.678 07:09:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:13:13.678 07:09:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:13:13.678 07:09:17 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:13:13.678 07:09:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:13:13.678 07:09:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:13:13.678 07:09:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:13:13.678 07:09:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:13:13.678 07:09:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:13.678 07:09:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:13:13.678 07:09:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:13:13.678 07:09:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:13.678 07:09:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:13.678 07:09:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:13:13.678 07:09:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:13:13.678 07:09:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:13.678 07:09:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:13:13.678 07:09:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:13:13.678 07:09:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:13:13.678 07:09:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:13:13.678 07:09:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:13.678 07:09:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:13:13.678 07:09:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:13:13.678 07:09:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:13.678 07:09:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:13.678 07:09:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:13:13.679 07:09:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:13.679 07:09:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:13:13.679 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:13.679 --rc genhtml_branch_coverage=1 00:13:13.679 --rc genhtml_function_coverage=1 00:13:13.679 --rc genhtml_legend=1 00:13:13.679 --rc geninfo_all_blocks=1 00:13:13.679 --rc geninfo_unexecuted_blocks=1 00:13:13.679 
00:13:13.679 ' 00:13:13.679 07:09:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:13:13.679 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:13.679 --rc genhtml_branch_coverage=1 00:13:13.679 --rc genhtml_function_coverage=1 00:13:13.679 --rc genhtml_legend=1 00:13:13.679 --rc geninfo_all_blocks=1 00:13:13.679 --rc geninfo_unexecuted_blocks=1 00:13:13.679 00:13:13.679 ' 00:13:13.679 07:09:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:13:13.679 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:13.679 --rc genhtml_branch_coverage=1 00:13:13.679 --rc genhtml_function_coverage=1 00:13:13.679 --rc genhtml_legend=1 00:13:13.679 --rc geninfo_all_blocks=1 00:13:13.679 --rc geninfo_unexecuted_blocks=1 00:13:13.679 00:13:13.679 ' 00:13:13.679 07:09:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:13:13.679 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:13.679 --rc genhtml_branch_coverage=1 00:13:13.679 --rc genhtml_function_coverage=1 00:13:13.679 --rc genhtml_legend=1 00:13:13.679 --rc geninfo_all_blocks=1 00:13:13.679 --rc geninfo_unexecuted_blocks=1 00:13:13.679 00:13:13.679 ' 00:13:13.679 07:09:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:13.679 07:09:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:13:13.679 07:09:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:13.679 07:09:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:13.679 07:09:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:13.679 07:09:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:13:13.679 07:09:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:13.679 07:09:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:13.679 07:09:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:13.679 07:09:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:13.679 07:09:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:13.679 07:09:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:13.679 07:09:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:13:13.679 07:09:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:13:13.679 07:09:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:13.679 07:09:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:13.679 07:09:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:13.679 07:09:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:13.679 07:09:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:13.679 07:09:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:13:13.679 07:09:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:13.679 07:09:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:13.679 07:09:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:13.679 07:09:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:13.679 07:09:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:13.679 07:09:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:13.679 07:09:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:13:13.679 07:09:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:13.679 07:09:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:13:13.679 07:09:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:13.679 07:09:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:13.679 07:09:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:13.679 07:09:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:13.679 07:09:17 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:13.679 07:09:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:13.679 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:13.679 07:09:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:13.679 07:09:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:13.679 07:09:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:13.679 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:13.679 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:13.679 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:13:13.679 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:13:13.679 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:13.679 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:13.679 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:13.679 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:13.679 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:13.679 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:13.679 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:13.679 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:13:13.679 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:13.679 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:13.679 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:13:13.679 07:09:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:20.249 07:09:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:20.249 07:09:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:13:20.249 07:09:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:20.249 07:09:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:20.249 07:09:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:20.249 07:09:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:20.249 07:09:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:20.249 07:09:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:13:20.249 07:09:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:20.249 07:09:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:13:20.249 07:09:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:13:20.249 07:09:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:13:20.249 07:09:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:13:20.249 07:09:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:13:20.249 07:09:23 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:13:20.249 07:09:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:20.249 07:09:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:20.249 07:09:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:20.249 07:09:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:20.249 07:09:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:20.249 07:09:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:20.249 07:09:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:20.249 07:09:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:20.249 07:09:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:20.249 07:09:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:20.249 07:09:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:20.249 07:09:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:20.249 07:09:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:20.249 07:09:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:20.249 07:09:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ 
e810 == mlx5 ]] 00:13:20.249 07:09:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:20.249 07:09:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:20.249 07:09:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:20.249 07:09:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:20.249 07:09:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:13:20.249 Found 0000:86:00.0 (0x8086 - 0x159b) 00:13:20.249 07:09:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:20.249 07:09:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:20.249 07:09:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:20.249 07:09:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:20.249 07:09:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:20.249 07:09:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:20.249 07:09:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:13:20.250 Found 0000:86:00.1 (0x8086 - 0x159b) 00:13:20.250 07:09:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:20.250 07:09:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:20.250 07:09:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:20.250 07:09:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:20.250 07:09:23 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:20.250 07:09:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:20.250 07:09:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:20.250 07:09:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:20.250 07:09:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:20.250 07:09:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:20.250 07:09:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:20.250 07:09:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:20.250 07:09:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:20.250 07:09:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:20.250 07:09:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:20.250 07:09:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:13:20.250 Found net devices under 0000:86:00.0: cvl_0_0 00:13:20.250 07:09:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:20.250 07:09:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:20.250 07:09:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:20.250 07:09:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:20.250 07:09:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:20.250 07:09:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:20.250 07:09:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:20.250 07:09:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:20.250 07:09:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:13:20.250 Found net devices under 0000:86:00.1: cvl_0_1 00:13:20.250 07:09:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:20.250 07:09:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:20.250 07:09:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # is_hw=yes 00:13:20.250 07:09:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:20.250 07:09:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:20.250 07:09:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:20.250 07:09:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:20.250 07:09:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:20.250 07:09:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:20.250 07:09:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:20.250 07:09:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:20.250 07:09:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:20.250 07:09:23 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:20.250 07:09:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:20.250 07:09:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:20.250 07:09:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:20.250 07:09:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:20.250 07:09:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:20.250 07:09:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:20.250 07:09:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:20.250 07:09:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:20.250 07:09:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:20.250 07:09:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:20.250 07:09:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:20.250 07:09:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:20.250 07:09:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:20.250 07:09:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:20.250 07:09:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@790 -- 
# iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:20.250 07:09:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:20.250 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:20.250 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.458 ms 00:13:20.250 00:13:20.250 --- 10.0.0.2 ping statistics --- 00:13:20.250 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:20.250 rtt min/avg/max/mdev = 0.458/0.458/0.458/0.000 ms 00:13:20.250 07:09:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:20.250 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:20.250 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.206 ms 00:13:20.250 00:13:20.250 --- 10.0.0.1 ping statistics --- 00:13:20.250 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:20.250 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:13:20.250 07:09:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:20.250 07:09:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # return 0 00:13:20.250 07:09:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:20.250 07:09:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:20.250 07:09:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:20.250 07:09:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:20.250 07:09:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:20.250 07:09:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:20.250 07:09:23 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:20.250 07:09:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:13:20.250 07:09:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:20.250 07:09:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:20.250 07:09:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:20.250 07:09:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # nvmfpid=1151250 00:13:20.250 07:09:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # waitforlisten 1151250 00:13:20.250 07:09:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:20.250 07:09:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@833 -- # '[' -z 1151250 ']' 00:13:20.250 07:09:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:20.250 07:09:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:20.250 07:09:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:20.251 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:20.251 07:09:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:20.251 07:09:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:20.251 [2024-11-20 07:09:23.980736] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 
00:13:20.251 [2024-11-20 07:09:23.980783] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:20.251 [2024-11-20 07:09:24.060450] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:20.251 [2024-11-20 07:09:24.104311] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:20.251 [2024-11-20 07:09:24.104349] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:20.251 [2024-11-20 07:09:24.104356] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:20.251 [2024-11-20 07:09:24.104361] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:20.251 [2024-11-20 07:09:24.104367] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:20.251 [2024-11-20 07:09:24.105943] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:20.251 [2024-11-20 07:09:24.106054] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:20.251 [2024-11-20 07:09:24.106160] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:20.251 [2024-11-20 07:09:24.106161] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:20.251 07:09:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:20.251 07:09:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@866 -- # return 0 00:13:20.251 07:09:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:20.251 07:09:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:20.251 07:09:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:20.251 07:09:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:20.251 07:09:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:20.251 07:09:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.251 07:09:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:20.251 [2024-11-20 07:09:24.245239] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:20.251 07:09:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.251 07:09:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:20.251 07:09:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 
00:13:20.251 07:09:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:20.251 Malloc0 00:13:20.251 07:09:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.251 07:09:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:13:20.251 07:09:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.251 07:09:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:20.251 Malloc1 00:13:20.251 07:09:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.251 07:09:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:13:20.251 07:09:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.251 07:09:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:20.251 07:09:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.251 07:09:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:20.251 07:09:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.251 07:09:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:20.251 07:09:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.251 07:09:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:20.251 07:09:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.251 07:09:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:20.251 07:09:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.251 07:09:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:20.251 07:09:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.251 07:09:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:20.251 [2024-11-20 07:09:24.339838] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:20.251 07:09:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.251 07:09:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:20.251 07:09:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.251 07:09:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:20.251 07:09:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.251 07:09:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:13:20.251 00:13:20.251 Discovery Log Number of Records 2, Generation counter 2 00:13:20.251 =====Discovery Log Entry 0====== 00:13:20.251 trtype: tcp 00:13:20.251 adrfam: ipv4 00:13:20.251 subtype: current discovery subsystem 00:13:20.251 treq: not required 00:13:20.251 portid: 0 00:13:20.251 trsvcid: 4420 
00:13:20.251 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:13:20.251 traddr: 10.0.0.2 00:13:20.251 eflags: explicit discovery connections, duplicate discovery information 00:13:20.251 sectype: none 00:13:20.251 =====Discovery Log Entry 1====== 00:13:20.251 trtype: tcp 00:13:20.251 adrfam: ipv4 00:13:20.251 subtype: nvme subsystem 00:13:20.251 treq: not required 00:13:20.251 portid: 0 00:13:20.251 trsvcid: 4420 00:13:20.251 subnqn: nqn.2016-06.io.spdk:cnode1 00:13:20.251 traddr: 10.0.0.2 00:13:20.251 eflags: none 00:13:20.251 sectype: none 00:13:20.251 07:09:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:13:20.251 07:09:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:13:20.251 07:09:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:13:20.251 07:09:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:20.251 07:09:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:13:20.251 07:09:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:13:20.251 07:09:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:20.251 07:09:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:13:20.251 07:09:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:20.251 07:09:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:13:20.251 07:09:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:21.186 07:09:25 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:13:21.186 07:09:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # local i=0 00:13:21.186 07:09:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:13:21.186 07:09:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # [[ -n 2 ]] 00:13:21.186 07:09:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # nvme_device_counter=2 00:13:21.186 07:09:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # sleep 2 00:13:23.722 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:13:23.722 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:13:23.722 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:13:23.722 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # nvme_devices=2 00:13:23.722 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:13:23.722 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # return 0 00:13:23.722 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:13:23.722 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:13:23.722 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:23.722 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:13:23.722 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:13:23.722 
07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:23.722 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:13:23.722 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:23.722 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:13:23.722 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:13:23.722 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:23.722 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:13:23.722 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:13:23.722 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:23.722 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:13:23.722 /dev/nvme0n2 ]] 00:13:23.722 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:13:23.722 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:13:23.722 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:13:23.722 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:23.722 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:13:23.722 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:13:23.722 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:23.722 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ 
--------------------- == /dev/nvme* ]] 00:13:23.722 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:23.722 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:13:23.722 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:13:23.722 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:23.722 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:13:23.722 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:13:23.722 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:23.722 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:13:23.722 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:23.722 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:23.722 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:23.722 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1221 -- # local i=0 00:13:23.722 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:13:23.722 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:23.722 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:13:23.722 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:23.722 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1233 -- # 
return 0 00:13:23.722 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:13:23.722 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:23.722 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.722 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:23.722 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.722 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:13:23.722 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:13:23.722 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:23.722 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:13:23.722 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:23.722 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:13:23.722 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:23.722 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:23.722 rmmod nvme_tcp 00:13:23.722 rmmod nvme_fabrics 00:13:23.722 rmmod nvme_keyring 00:13:23.722 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:23.722 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:13:23.722 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:13:23.722 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@517 -- # '[' -n 1151250 ']' 
00:13:23.722 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # killprocess 1151250 00:13:23.722 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # '[' -z 1151250 ']' 00:13:23.722 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # kill -0 1151250 00:13:23.722 07:09:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@957 -- # uname 00:13:23.722 07:09:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:23.722 07:09:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1151250 00:13:23.722 07:09:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:23.722 07:09:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:23.722 07:09:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1151250' 00:13:23.722 killing process with pid 1151250 00:13:23.722 07:09:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@971 -- # kill 1151250 00:13:23.722 07:09:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@976 -- # wait 1151250 00:13:23.722 07:09:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:23.722 07:09:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:23.722 07:09:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:23.722 07:09:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:13:23.722 07:09:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-save 00:13:23.722 07:09:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # grep -v 
SPDK_NVMF 00:13:23.722 07:09:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-restore 00:13:23.722 07:09:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:23.722 07:09:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:23.722 07:09:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:23.722 07:09:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:23.722 07:09:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:26.261 07:09:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:26.261 00:13:26.262 real 0m12.533s 00:13:26.262 user 0m18.236s 00:13:26.262 sys 0m5.029s 00:13:26.262 07:09:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:26.262 07:09:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:26.262 ************************************ 00:13:26.262 END TEST nvmf_nvme_cli 00:13:26.262 ************************************ 00:13:26.262 07:09:30 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:13:26.262 07:09:30 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:13:26.262 07:09:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:13:26.262 07:09:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:26.262 07:09:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:26.262 ************************************ 00:13:26.262 
START TEST nvmf_vfio_user 00:13:26.262 ************************************ 00:13:26.262 07:09:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:13:26.262 * Looking for test storage... 00:13:26.262 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:26.262 07:09:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:13:26.262 07:09:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1691 -- # lcov --version 00:13:26.262 07:09:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:13:26.262 07:09:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:13:26.262 07:09:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:26.262 07:09:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:26.262 07:09:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:26.262 07:09:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-: 00:13:26.262 07:09:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1 00:13:26.262 07:09:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-: 00:13:26.262 07:09:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2 00:13:26.262 07:09:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<' 00:13:26.262 07:09:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2 00:13:26.262 07:09:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1 00:13:26.262 07:09:30 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:26.262 07:09:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in 00:13:26.262 07:09:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1 00:13:26.262 07:09:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:26.262 07:09:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:26.262 07:09:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:13:26.262 07:09:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:13:26.262 07:09:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:26.262 07:09:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:13:26.262 07:09:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:13:26.262 07:09:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:13:26.262 07:09:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:13:26.262 07:09:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:26.262 07:09:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:13:26.262 07:09:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:13:26.262 07:09:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:26.262 07:09:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:26.262 07:09:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:13:26.262 07:09:30 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:26.262 07:09:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:13:26.262 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:26.262 --rc genhtml_branch_coverage=1 00:13:26.262 --rc genhtml_function_coverage=1 00:13:26.262 --rc genhtml_legend=1 00:13:26.262 --rc geninfo_all_blocks=1 00:13:26.262 --rc geninfo_unexecuted_blocks=1 00:13:26.262 00:13:26.262 ' 00:13:26.262 07:09:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:13:26.262 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:26.262 --rc genhtml_branch_coverage=1 00:13:26.262 --rc genhtml_function_coverage=1 00:13:26.262 --rc genhtml_legend=1 00:13:26.262 --rc geninfo_all_blocks=1 00:13:26.262 --rc geninfo_unexecuted_blocks=1 00:13:26.262 00:13:26.262 ' 00:13:26.262 07:09:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:13:26.262 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:26.262 --rc genhtml_branch_coverage=1 00:13:26.262 --rc genhtml_function_coverage=1 00:13:26.262 --rc genhtml_legend=1 00:13:26.262 --rc geninfo_all_blocks=1 00:13:26.262 --rc geninfo_unexecuted_blocks=1 00:13:26.262 00:13:26.262 ' 00:13:26.262 07:09:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:13:26.262 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:26.262 --rc genhtml_branch_coverage=1 00:13:26.262 --rc genhtml_function_coverage=1 00:13:26.262 --rc genhtml_legend=1 00:13:26.262 --rc geninfo_all_blocks=1 00:13:26.262 --rc geninfo_unexecuted_blocks=1 00:13:26.262 00:13:26.262 ' 00:13:26.262 07:09:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:26.262 07:09:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:13:26.262 07:09:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:26.262 07:09:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:26.262 07:09:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:26.262 07:09:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:26.262 07:09:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:26.262 07:09:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:26.262 07:09:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:26.262 07:09:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:26.262 07:09:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:26.262 07:09:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:26.262 07:09:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:13:26.262 07:09:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:13:26.262 07:09:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:26.263 07:09:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:26.263 07:09:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:26.263 
07:09:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:26.263 07:09:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:26.263 07:09:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:13:26.263 07:09:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:26.263 07:09:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:26.263 07:09:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:26.263 07:09:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:26.263 07:09:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:26.263 07:09:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:26.263 07:09:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:13:26.263 07:09:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:26.263 07:09:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0 00:13:26.263 07:09:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:26.263 07:09:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:26.263 07:09:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:26.263 07:09:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:26.263 07:09:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:26.263 07:09:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:26.263 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:26.263 07:09:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:26.263 07:09:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:26.263 07:09:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:26.263 07:09:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:13:26.263 07:09:30 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:13:26.263 07:09:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:13:26.263 07:09:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:26.263 07:09:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:13:26.263 07:09:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:13:26.263 07:09:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:13:26.263 07:09:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:13:26.263 07:09:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:13:26.263 07:09:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:13:26.263 07:09:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1152532 00:13:26.263 07:09:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1152532' 00:13:26.263 Process pid: 1152532 00:13:26.263 07:09:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:26.263 07:09:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1152532 00:13:26.263 07:09:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:13:26.263 07:09:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@833 -- # '[' 
-z 1152532 ']' 00:13:26.263 07:09:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:26.263 07:09:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:26.263 07:09:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:26.263 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:26.263 07:09:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:26.263 07:09:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:13:26.263 [2024-11-20 07:09:30.671994] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 00:13:26.263 [2024-11-20 07:09:30.672042] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:26.263 [2024-11-20 07:09:30.746346] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:26.263 [2024-11-20 07:09:30.789987] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:26.263 [2024-11-20 07:09:30.790022] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:26.263 [2024-11-20 07:09:30.790031] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:26.263 [2024-11-20 07:09:30.790037] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:26.263 [2024-11-20 07:09:30.790043] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:26.263 [2024-11-20 07:09:30.791492] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:26.263 [2024-11-20 07:09:30.791606] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:26.263 [2024-11-20 07:09:30.791631] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:26.263 [2024-11-20 07:09:30.791632] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:26.522 07:09:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:26.522 07:09:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@866 -- # return 0 00:13:26.522 07:09:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:13:27.459 07:09:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:13:27.718 07:09:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:13:27.718 07:09:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:13:27.718 07:09:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:27.718 07:09:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:13:27.718 07:09:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:13:27.976 Malloc1 00:13:27.976 07:09:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:13:27.976 07:09:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:13:28.235 07:09:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:13:28.493 07:09:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:28.494 07:09:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:13:28.494 07:09:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:13:28.752 Malloc2 00:13:28.752 07:09:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:13:29.011 07:09:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:13:29.011 07:09:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:13:29.270 07:09:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:13:29.270 07:09:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:13:29.270 07:09:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in 
$(seq 1 $NUM_DEVICES) 00:13:29.270 07:09:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:13:29.270 07:09:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:13:29.270 07:09:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:13:29.270 [2024-11-20 07:09:33.765965] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 00:13:29.271 [2024-11-20 07:09:33.765997] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1153018 ] 00:13:29.271 [2024-11-20 07:09:33.805886] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:13:29.271 [2024-11-20 07:09:33.814300] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:13:29.271 [2024-11-20 07:09:33.814323] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f7c28951000 00:13:29.271 [2024-11-20 07:09:33.815299] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:29.271 [2024-11-20 07:09:33.816298] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:29.271 [2024-11-20 07:09:33.817310] vfio_user_pci.c: 
304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:29.271 [2024-11-20 07:09:33.818316] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:29.271 [2024-11-20 07:09:33.819319] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:29.271 [2024-11-20 07:09:33.820317] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:29.271 [2024-11-20 07:09:33.821329] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:29.538 [2024-11-20 07:09:33.822334] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:29.538 [2024-11-20 07:09:33.823348] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:13:29.538 [2024-11-20 07:09:33.823358] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f7c28946000 00:13:29.538 [2024-11-20 07:09:33.824309] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:13:29.538 [2024-11-20 07:09:33.833906] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:13:29.538 [2024-11-20 07:09:33.833934] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to connect adminq (no timeout) 00:13:29.538 [2024-11-20 07:09:33.839436] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 
00:13:29.538 [2024-11-20 07:09:33.839472] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:13:29.538 [2024-11-20 07:09:33.839541] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for connect adminq (no timeout) 00:13:29.538 [2024-11-20 07:09:33.839554] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs (no timeout) 00:13:29.538 [2024-11-20 07:09:33.839559] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs wait for vs (no timeout) 00:13:29.538 [2024-11-20 07:09:33.840437] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:13:29.538 [2024-11-20 07:09:33.840446] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap (no timeout) 00:13:29.538 [2024-11-20 07:09:33.840452] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap wait for cap (no timeout) 00:13:29.538 [2024-11-20 07:09:33.841443] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:13:29.538 [2024-11-20 07:09:33.841451] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en (no timeout) 00:13:29.538 [2024-11-20 07:09:33.841458] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en wait for cc (timeout 15000 ms) 00:13:29.538 [2024-11-20 07:09:33.842452] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:13:29.538 [2024-11-20 07:09:33.842460] 
nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:13:29.538 [2024-11-20 07:09:33.843454] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:13:29.538 [2024-11-20 07:09:33.843462] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 0 && CSTS.RDY = 0 00:13:29.538 [2024-11-20 07:09:33.843466] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to controller is disabled (timeout 15000 ms) 00:13:29.538 [2024-11-20 07:09:33.843472] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:13:29.538 [2024-11-20 07:09:33.843580] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Setting CC.EN = 1 00:13:29.538 [2024-11-20 07:09:33.843584] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:13:29.538 [2024-11-20 07:09:33.843589] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:13:29.538 [2024-11-20 07:09:33.844463] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:13:29.538 [2024-11-20 07:09:33.845469] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:13:29.538 [2024-11-20 07:09:33.846476] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 
00:13:29.538 [2024-11-20 07:09:33.847475] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:29.538 [2024-11-20 07:09:33.847538] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:13:29.538 [2024-11-20 07:09:33.848490] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:13:29.538 [2024-11-20 07:09:33.848498] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:13:29.538 [2024-11-20 07:09:33.848502] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to reset admin queue (timeout 30000 ms) 00:13:29.538 [2024-11-20 07:09:33.848519] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller (no timeout) 00:13:29.538 [2024-11-20 07:09:33.848525] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify controller (timeout 30000 ms) 00:13:29.538 [2024-11-20 07:09:33.848538] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:29.538 [2024-11-20 07:09:33.848543] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:29.538 [2024-11-20 07:09:33.848546] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:29.539 [2024-11-20 07:09:33.848558] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:29.539 [2024-11-20 07:09:33.848596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:13:29.539 [2024-11-20 07:09:33.848605] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_xfer_size 131072 00:13:29.539 [2024-11-20 07:09:33.848610] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] MDTS max_xfer_size 131072 00:13:29.539 [2024-11-20 07:09:33.848614] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CNTLID 0x0001 00:13:29.539 [2024-11-20 07:09:33.848618] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:13:29.539 [2024-11-20 07:09:33.848626] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_sges 1 00:13:29.539 [2024-11-20 07:09:33.848630] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] fuses compare and write: 1 00:13:29.539 [2024-11-20 07:09:33.848635] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to configure AER (timeout 30000 ms) 00:13:29.539 [2024-11-20 07:09:33.848643] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for configure aer (timeout 30000 ms) 00:13:29.539 [2024-11-20 07:09:33.848652] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:13:29.539 [2024-11-20 07:09:33.848662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:13:29.539 [2024-11-20 07:09:33.848672] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:29.539 [2024-11-20 
07:09:33.848680] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:29.539 [2024-11-20 07:09:33.848689] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:29.539 [2024-11-20 07:09:33.848696] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:29.539 [2024-11-20 07:09:33.848701] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:13:29.539 [2024-11-20 07:09:33.848707] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:13:29.539 [2024-11-20 07:09:33.848715] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:13:29.539 [2024-11-20 07:09:33.848723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:13:29.539 [2024-11-20 07:09:33.848729] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Controller adjusted keep alive timeout to 0 ms 00:13:29.539 [2024-11-20 07:09:33.848734] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:13:29.539 [2024-11-20 07:09:33.848741] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set number of queues (timeout 30000 ms) 00:13:29.539 [2024-11-20 07:09:33.848746] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait 
for set number of queues (timeout 30000 ms) 00:13:29.539 [2024-11-20 07:09:33.848754] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:13:29.539 [2024-11-20 07:09:33.848763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:13:29.539 [2024-11-20 07:09:33.848814] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify active ns (timeout 30000 ms) 00:13:29.539 [2024-11-20 07:09:33.848821] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:13:29.539 [2024-11-20 07:09:33.848828] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:13:29.539 [2024-11-20 07:09:33.848832] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:13:29.539 [2024-11-20 07:09:33.848835] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:29.539 [2024-11-20 07:09:33.848841] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:13:29.539 [2024-11-20 07:09:33.848854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:13:29.539 [2024-11-20 07:09:33.848866] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Namespace 1 was added 00:13:29.539 [2024-11-20 07:09:33.848873] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns (timeout 30000 ms) 00:13:29.539 [2024-11-20 07:09:33.848880] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify ns (timeout 30000 ms) 00:13:29.539 [2024-11-20 07:09:33.848886] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:29.539 [2024-11-20 07:09:33.848890] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:29.539 [2024-11-20 07:09:33.848893] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:29.539 [2024-11-20 07:09:33.848899] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:29.539 [2024-11-20 07:09:33.848922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:13:29.539 [2024-11-20 07:09:33.848932] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:13:29.539 [2024-11-20 07:09:33.848939] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:13:29.539 [2024-11-20 07:09:33.848945] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:29.539 [2024-11-20 07:09:33.848954] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:29.539 [2024-11-20 07:09:33.848957] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:29.539 [2024-11-20 07:09:33.848963] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:29.539 [2024-11-20 07:09:33.848976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS 
(00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:13:29.539 [2024-11-20 07:09:33.848983] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:13:29.539 [2024-11-20 07:09:33.848989] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported log pages (timeout 30000 ms) 00:13:29.539 [2024-11-20 07:09:33.848995] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported features (timeout 30000 ms) 00:13:29.539 [2024-11-20 07:09:33.849000] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:13:29.539 [2024-11-20 07:09:33.849005] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:13:29.539 [2024-11-20 07:09:33.849009] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host ID (timeout 30000 ms) 00:13:29.539 [2024-11-20 07:09:33.849013] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] NVMe-oF transport - not sending Set Features - Host ID 00:13:29.539 [2024-11-20 07:09:33.849018] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to transport ready (timeout 30000 ms) 00:13:29.539 [2024-11-20 07:09:33.849022] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to ready (no timeout) 00:13:29.539 [2024-11-20 07:09:33.849039] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:13:29.539 [2024-11-20 07:09:33.849047] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:13:29.539 [2024-11-20 07:09:33.849058] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:13:29.539 [2024-11-20 07:09:33.849068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:13:29.539 [2024-11-20 07:09:33.849078] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:13:29.539 [2024-11-20 07:09:33.849085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:13:29.539 [2024-11-20 07:09:33.849095] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:13:29.539 [2024-11-20 07:09:33.849102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:13:29.539 [2024-11-20 07:09:33.849114] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:13:29.539 [2024-11-20 07:09:33.849118] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:13:29.539 [2024-11-20 07:09:33.849122] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:13:29.539 [2024-11-20 07:09:33.849125] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:13:29.539 [2024-11-20 07:09:33.849128] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:13:29.539 [2024-11-20 07:09:33.849133] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 
0x2000002f7000 00:13:29.539 [2024-11-20 07:09:33.849140] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:13:29.539 [2024-11-20 07:09:33.849144] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:13:29.539 [2024-11-20 07:09:33.849147] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:29.539 [2024-11-20 07:09:33.849152] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:13:29.539 [2024-11-20 07:09:33.849158] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:13:29.539 [2024-11-20 07:09:33.849162] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:29.539 [2024-11-20 07:09:33.849165] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:29.539 [2024-11-20 07:09:33.849171] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:29.539 [2024-11-20 07:09:33.849177] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:13:29.539 [2024-11-20 07:09:33.849181] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:13:29.540 [2024-11-20 07:09:33.849184] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:29.540 [2024-11-20 07:09:33.849189] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:13:29.540 [2024-11-20 07:09:33.849196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 
sqhd:0010 p:1 m:0 dnr:0 00:13:29.540 [2024-11-20 07:09:33.849206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:13:29.540 [2024-11-20 07:09:33.849216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:13:29.540 [2024-11-20 07:09:33.849223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:13:29.540 ===================================================== 00:13:29.540 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:29.540 ===================================================== 00:13:29.540 Controller Capabilities/Features 00:13:29.540 ================================ 00:13:29.540 Vendor ID: 4e58 00:13:29.540 Subsystem Vendor ID: 4e58 00:13:29.540 Serial Number: SPDK1 00:13:29.540 Model Number: SPDK bdev Controller 00:13:29.540 Firmware Version: 25.01 00:13:29.540 Recommended Arb Burst: 6 00:13:29.540 IEEE OUI Identifier: 8d 6b 50 00:13:29.540 Multi-path I/O 00:13:29.540 May have multiple subsystem ports: Yes 00:13:29.540 May have multiple controllers: Yes 00:13:29.540 Associated with SR-IOV VF: No 00:13:29.540 Max Data Transfer Size: 131072 00:13:29.540 Max Number of Namespaces: 32 00:13:29.540 Max Number of I/O Queues: 127 00:13:29.540 NVMe Specification Version (VS): 1.3 00:13:29.540 NVMe Specification Version (Identify): 1.3 00:13:29.540 Maximum Queue Entries: 256 00:13:29.540 Contiguous Queues Required: Yes 00:13:29.540 Arbitration Mechanisms Supported 00:13:29.540 Weighted Round Robin: Not Supported 00:13:29.540 Vendor Specific: Not Supported 00:13:29.540 Reset Timeout: 15000 ms 00:13:29.540 Doorbell Stride: 4 bytes 00:13:29.540 NVM Subsystem Reset: Not Supported 00:13:29.540 Command Sets Supported 00:13:29.540 NVM Command Set: Supported 00:13:29.540 Boot Partition: Not Supported 00:13:29.540 Memory 
Page Size Minimum: 4096 bytes 00:13:29.540 Memory Page Size Maximum: 4096 bytes 00:13:29.540 Persistent Memory Region: Not Supported 00:13:29.540 Optional Asynchronous Events Supported 00:13:29.540 Namespace Attribute Notices: Supported 00:13:29.540 Firmware Activation Notices: Not Supported 00:13:29.540 ANA Change Notices: Not Supported 00:13:29.540 PLE Aggregate Log Change Notices: Not Supported 00:13:29.540 LBA Status Info Alert Notices: Not Supported 00:13:29.540 EGE Aggregate Log Change Notices: Not Supported 00:13:29.540 Normal NVM Subsystem Shutdown event: Not Supported 00:13:29.540 Zone Descriptor Change Notices: Not Supported 00:13:29.540 Discovery Log Change Notices: Not Supported 00:13:29.540 Controller Attributes 00:13:29.540 128-bit Host Identifier: Supported 00:13:29.540 Non-Operational Permissive Mode: Not Supported 00:13:29.540 NVM Sets: Not Supported 00:13:29.540 Read Recovery Levels: Not Supported 00:13:29.540 Endurance Groups: Not Supported 00:13:29.540 Predictable Latency Mode: Not Supported 00:13:29.540 Traffic Based Keep ALive: Not Supported 00:13:29.540 Namespace Granularity: Not Supported 00:13:29.540 SQ Associations: Not Supported 00:13:29.540 UUID List: Not Supported 00:13:29.540 Multi-Domain Subsystem: Not Supported 00:13:29.540 Fixed Capacity Management: Not Supported 00:13:29.540 Variable Capacity Management: Not Supported 00:13:29.540 Delete Endurance Group: Not Supported 00:13:29.540 Delete NVM Set: Not Supported 00:13:29.540 Extended LBA Formats Supported: Not Supported 00:13:29.540 Flexible Data Placement Supported: Not Supported 00:13:29.540 00:13:29.540 Controller Memory Buffer Support 00:13:29.540 ================================ 00:13:29.540 Supported: No 00:13:29.540 00:13:29.540 Persistent Memory Region Support 00:13:29.540 ================================ 00:13:29.540 Supported: No 00:13:29.540 00:13:29.540 Admin Command Set Attributes 00:13:29.540 ============================ 00:13:29.540 Security Send/Receive: Not Supported 
00:13:29.540 Format NVM: Not Supported 00:13:29.540 Firmware Activate/Download: Not Supported 00:13:29.540 Namespace Management: Not Supported 00:13:29.540 Device Self-Test: Not Supported 00:13:29.540 Directives: Not Supported 00:13:29.540 NVMe-MI: Not Supported 00:13:29.540 Virtualization Management: Not Supported 00:13:29.540 Doorbell Buffer Config: Not Supported 00:13:29.540 Get LBA Status Capability: Not Supported 00:13:29.540 Command & Feature Lockdown Capability: Not Supported 00:13:29.540 Abort Command Limit: 4 00:13:29.540 Async Event Request Limit: 4 00:13:29.540 Number of Firmware Slots: N/A 00:13:29.540 Firmware Slot 1 Read-Only: N/A 00:13:29.540 Firmware Activation Without Reset: N/A 00:13:29.540 Multiple Update Detection Support: N/A 00:13:29.540 Firmware Update Granularity: No Information Provided 00:13:29.540 Per-Namespace SMART Log: No 00:13:29.540 Asymmetric Namespace Access Log Page: Not Supported 00:13:29.540 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:13:29.540 Command Effects Log Page: Supported 00:13:29.540 Get Log Page Extended Data: Supported 00:13:29.540 Telemetry Log Pages: Not Supported 00:13:29.540 Persistent Event Log Pages: Not Supported 00:13:29.540 Supported Log Pages Log Page: May Support 00:13:29.540 Commands Supported & Effects Log Page: Not Supported 00:13:29.540 Feature Identifiers & Effects Log Page:May Support 00:13:29.540 NVMe-MI Commands & Effects Log Page: May Support 00:13:29.540 Data Area 4 for Telemetry Log: Not Supported 00:13:29.540 Error Log Page Entries Supported: 128 00:13:29.540 Keep Alive: Supported 00:13:29.540 Keep Alive Granularity: 10000 ms 00:13:29.540 00:13:29.540 NVM Command Set Attributes 00:13:29.540 ========================== 00:13:29.540 Submission Queue Entry Size 00:13:29.540 Max: 64 00:13:29.540 Min: 64 00:13:29.540 Completion Queue Entry Size 00:13:29.540 Max: 16 00:13:29.540 Min: 16 00:13:29.540 Number of Namespaces: 32 00:13:29.540 Compare Command: Supported 00:13:29.540 Write Uncorrectable 
Command: Not Supported 00:13:29.540 Dataset Management Command: Supported 00:13:29.540 Write Zeroes Command: Supported 00:13:29.540 Set Features Save Field: Not Supported 00:13:29.540 Reservations: Not Supported 00:13:29.540 Timestamp: Not Supported 00:13:29.540 Copy: Supported 00:13:29.540 Volatile Write Cache: Present 00:13:29.540 Atomic Write Unit (Normal): 1 00:13:29.540 Atomic Write Unit (PFail): 1 00:13:29.540 Atomic Compare & Write Unit: 1 00:13:29.540 Fused Compare & Write: Supported 00:13:29.540 Scatter-Gather List 00:13:29.540 SGL Command Set: Supported (Dword aligned) 00:13:29.540 SGL Keyed: Not Supported 00:13:29.540 SGL Bit Bucket Descriptor: Not Supported 00:13:29.540 SGL Metadata Pointer: Not Supported 00:13:29.540 Oversized SGL: Not Supported 00:13:29.540 SGL Metadata Address: Not Supported 00:13:29.540 SGL Offset: Not Supported 00:13:29.540 Transport SGL Data Block: Not Supported 00:13:29.540 Replay Protected Memory Block: Not Supported 00:13:29.540 00:13:29.540 Firmware Slot Information 00:13:29.540 ========================= 00:13:29.540 Active slot: 1 00:13:29.540 Slot 1 Firmware Revision: 25.01 00:13:29.540 00:13:29.540 00:13:29.540 Commands Supported and Effects 00:13:29.540 ============================== 00:13:29.540 Admin Commands 00:13:29.540 -------------- 00:13:29.540 Get Log Page (02h): Supported 00:13:29.540 Identify (06h): Supported 00:13:29.540 Abort (08h): Supported 00:13:29.540 Set Features (09h): Supported 00:13:29.540 Get Features (0Ah): Supported 00:13:29.540 Asynchronous Event Request (0Ch): Supported 00:13:29.540 Keep Alive (18h): Supported 00:13:29.540 I/O Commands 00:13:29.540 ------------ 00:13:29.540 Flush (00h): Supported LBA-Change 00:13:29.540 Write (01h): Supported LBA-Change 00:13:29.540 Read (02h): Supported 00:13:29.540 Compare (05h): Supported 00:13:29.540 Write Zeroes (08h): Supported LBA-Change 00:13:29.540 Dataset Management (09h): Supported LBA-Change 00:13:29.540 Copy (19h): Supported LBA-Change 00:13:29.540 
00:13:29.540 Error Log 00:13:29.540 ========= 00:13:29.540 00:13:29.540 Arbitration 00:13:29.540 =========== 00:13:29.540 Arbitration Burst: 1 00:13:29.540 00:13:29.540 Power Management 00:13:29.540 ================ 00:13:29.540 Number of Power States: 1 00:13:29.540 Current Power State: Power State #0 00:13:29.540 Power State #0: 00:13:29.540 Max Power: 0.00 W 00:13:29.540 Non-Operational State: Operational 00:13:29.540 Entry Latency: Not Reported 00:13:29.540 Exit Latency: Not Reported 00:13:29.541 Relative Read Throughput: 0 00:13:29.541 Relative Read Latency: 0 00:13:29.541 Relative Write Throughput: 0 00:13:29.541 Relative Write Latency: 0 00:13:29.541 Idle Power: Not Reported 00:13:29.541 Active Power: Not Reported 00:13:29.541 Non-Operational Permissive Mode: Not Supported 00:13:29.541 00:13:29.541 Health Information 00:13:29.541 ================== 00:13:29.541 Critical Warnings: 00:13:29.541 Available Spare Space: OK 00:13:29.541 Temperature: OK 00:13:29.541 Device Reliability: OK 00:13:29.541 Read Only: No 00:13:29.541 Volatile Memory Backup: OK 00:13:29.541 Current Temperature: 0 Kelvin (-273 Celsius) 00:13:29.541 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:13:29.541 Available Spare: 0% 00:13:29.541 Available Sp[2024-11-20 07:09:33.849308] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:13:29.541 [2024-11-20 07:09:33.849316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:13:29.541 [2024-11-20 07:09:33.849339] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Prepare to destruct SSD 00:13:29.541 [2024-11-20 07:09:33.849348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:29.541 [2024-11-20 07:09:33.849354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:29.541 [2024-11-20 07:09:33.849359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:29.541 [2024-11-20 07:09:33.849366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:29.541 [2024-11-20 07:09:33.849496] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:13:29.541 [2024-11-20 07:09:33.849506] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:13:29.541 [2024-11-20 07:09:33.850500] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:29.541 [2024-11-20 07:09:33.850551] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] RTD3E = 0 us 00:13:29.541 [2024-11-20 07:09:33.850557] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown timeout = 10000 ms 00:13:29.541 [2024-11-20 07:09:33.851502] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:13:29.541 [2024-11-20 07:09:33.851512] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown complete in 0 milliseconds 00:13:29.541 [2024-11-20 07:09:33.851562] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:13:29.541 [2024-11-20 07:09:33.856954] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:13:29.541 are Threshold: 0% 00:13:29.541 Life Percentage Used: 0% 
00:13:29.541 Data Units Read: 0 00:13:29.541 Data Units Written: 0 00:13:29.541 Host Read Commands: 0 00:13:29.541 Host Write Commands: 0 00:13:29.541 Controller Busy Time: 0 minutes 00:13:29.541 Power Cycles: 0 00:13:29.541 Power On Hours: 0 hours 00:13:29.541 Unsafe Shutdowns: 0 00:13:29.541 Unrecoverable Media Errors: 0 00:13:29.541 Lifetime Error Log Entries: 0 00:13:29.541 Warning Temperature Time: 0 minutes 00:13:29.541 Critical Temperature Time: 0 minutes 00:13:29.541 00:13:29.541 Number of Queues 00:13:29.541 ================ 00:13:29.541 Number of I/O Submission Queues: 127 00:13:29.541 Number of I/O Completion Queues: 127 00:13:29.541 00:13:29.541 Active Namespaces 00:13:29.541 ================= 00:13:29.541 Namespace ID:1 00:13:29.541 Error Recovery Timeout: Unlimited 00:13:29.541 Command Set Identifier: NVM (00h) 00:13:29.541 Deallocate: Supported 00:13:29.541 Deallocated/Unwritten Error: Not Supported 00:13:29.541 Deallocated Read Value: Unknown 00:13:29.541 Deallocate in Write Zeroes: Not Supported 00:13:29.541 Deallocated Guard Field: 0xFFFF 00:13:29.541 Flush: Supported 00:13:29.541 Reservation: Supported 00:13:29.541 Namespace Sharing Capabilities: Multiple Controllers 00:13:29.541 Size (in LBAs): 131072 (0GiB) 00:13:29.541 Capacity (in LBAs): 131072 (0GiB) 00:13:29.541 Utilization (in LBAs): 131072 (0GiB) 00:13:29.541 NGUID: D4C4EECB654043B6A117CE89139BDECE 00:13:29.541 UUID: d4c4eecb-6540-43b6-a117-ce89139bdece 00:13:29.541 Thin Provisioning: Not Supported 00:13:29.541 Per-NS Atomic Units: Yes 00:13:29.541 Atomic Boundary Size (Normal): 0 00:13:29.541 Atomic Boundary Size (PFail): 0 00:13:29.541 Atomic Boundary Offset: 0 00:13:29.541 Maximum Single Source Range Length: 65535 00:13:29.541 Maximum Copy Length: 65535 00:13:29.541 Maximum Source Range Count: 1 00:13:29.541 NGUID/EUI64 Never Reused: No 00:13:29.541 Namespace Write Protected: No 00:13:29.541 Number of LBA Formats: 1 00:13:29.541 Current LBA Format: LBA Format #00 00:13:29.541 LBA 
Format #00: Data Size: 512 Metadata Size: 0 00:13:29.541 00:13:29.541 07:09:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:13:29.800 [2024-11-20 07:09:34.087517] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:35.074 Initializing NVMe Controllers 00:13:35.074 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:35.074 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:13:35.074 Initialization complete. Launching workers. 00:13:35.074 ======================================================== 00:13:35.074 Latency(us) 00:13:35.074 Device Information : IOPS MiB/s Average min max 00:13:35.074 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 39962.17 156.10 3202.82 947.21 8121.58 00:13:35.074 ======================================================== 00:13:35.074 Total : 39962.17 156.10 3202.82 947.21 8121.58 00:13:35.074 00:13:35.074 [2024-11-20 07:09:39.104973] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:35.074 07:09:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:13:35.074 [2024-11-20 07:09:39.343070] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:40.346 Initializing NVMe Controllers 00:13:40.347 Attached to NVMe over Fabrics controller at 
/var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:40.347 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:13:40.347 Initialization complete. Launching workers. 00:13:40.347 ======================================================== 00:13:40.347 Latency(us) 00:13:40.347 Device Information : IOPS MiB/s Average min max 00:13:40.347 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16058.01 62.73 7976.43 5628.81 15488.01 00:13:40.347 ======================================================== 00:13:40.347 Total : 16058.01 62.73 7976.43 5628.81 15488.01 00:13:40.347 00:13:40.347 [2024-11-20 07:09:44.382481] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:40.347 07:09:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:13:40.347 [2024-11-20 07:09:44.585448] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:45.620 [2024-11-20 07:09:49.667295] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:45.620 Initializing NVMe Controllers 00:13:45.620 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:45.620 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:45.620 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:13:45.620 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:13:45.620 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:13:45.620 Initialization complete. 
Launching workers. 00:13:45.620 Starting thread on core 2 00:13:45.620 Starting thread on core 3 00:13:45.620 Starting thread on core 1 00:13:45.620 07:09:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:13:45.620 [2024-11-20 07:09:49.964322] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:48.910 [2024-11-20 07:09:53.035031] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:48.910 Initializing NVMe Controllers 00:13:48.910 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:13:48.910 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:13:48.910 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:13:48.910 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:13:48.910 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:13:48.910 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:13:48.910 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:13:48.910 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:13:48.910 Initialization complete. Launching workers. 
00:13:48.910 Starting thread on core 1 with urgent priority queue 00:13:48.910 Starting thread on core 2 with urgent priority queue 00:13:48.910 Starting thread on core 3 with urgent priority queue 00:13:48.910 Starting thread on core 0 with urgent priority queue 00:13:48.910 SPDK bdev Controller (SPDK1 ) core 0: 8258.33 IO/s 12.11 secs/100000 ios 00:13:48.910 SPDK bdev Controller (SPDK1 ) core 1: 8185.67 IO/s 12.22 secs/100000 ios 00:13:48.910 SPDK bdev Controller (SPDK1 ) core 2: 9270.33 IO/s 10.79 secs/100000 ios 00:13:48.910 SPDK bdev Controller (SPDK1 ) core 3: 10404.33 IO/s 9.61 secs/100000 ios 00:13:48.910 ======================================================== 00:13:48.910 00:13:48.910 07:09:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:13:48.910 [2024-11-20 07:09:53.323276] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:48.910 Initializing NVMe Controllers 00:13:48.910 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:13:48.910 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:13:48.910 Namespace ID: 1 size: 0GB 00:13:48.910 Initialization complete. 00:13:48.910 INFO: using host memory buffer for IO 00:13:48.910 Hello world! 
00:13:48.911 [2024-11-20 07:09:53.357497] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:48.911 07:09:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:13:49.169 [2024-11-20 07:09:53.657410] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:50.548 Initializing NVMe Controllers 00:13:50.548 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:13:50.548 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:13:50.548 Initialization complete. Launching workers. 00:13:50.548 submit (in ns) avg, min, max = 8890.7, 3196.5, 3999609.6 00:13:50.548 complete (in ns) avg, min, max = 20169.6, 1788.7, 7985827.8 00:13:50.548 00:13:50.549 Submit histogram 00:13:50.549 ================ 00:13:50.549 Range in us Cumulative Count 00:13:50.549 3.186 - 3.200: 0.0062% ( 1) 00:13:50.549 3.214 - 3.228: 0.0309% ( 4) 00:13:50.549 3.228 - 3.242: 0.0928% ( 10) 00:13:50.549 3.242 - 3.256: 0.1113% ( 3) 00:13:50.549 3.256 - 3.270: 0.1793% ( 11) 00:13:50.549 3.270 - 3.283: 0.4328% ( 41) 00:13:50.549 3.283 - 3.297: 1.9911% ( 252) 00:13:50.549 3.297 - 3.311: 6.0599% ( 658) 00:13:50.549 3.311 - 3.325: 11.0252% ( 803) 00:13:50.549 3.325 - 3.339: 15.9288% ( 793) 00:13:50.549 3.339 - 3.353: 22.6379% ( 1085) 00:13:50.549 3.353 - 3.367: 29.1553% ( 1054) 00:13:50.549 3.367 - 3.381: 34.3866% ( 846) 00:13:50.549 3.381 - 3.395: 40.3351% ( 962) 00:13:50.549 3.395 - 3.409: 44.8800% ( 735) 00:13:50.549 3.409 - 3.423: 49.2271% ( 703) 00:13:50.549 3.423 - 3.437: 53.6916% ( 722) 00:13:50.549 3.437 - 3.450: 60.5244% ( 1105) 00:13:50.549 3.450 - 3.464: 66.2070% ( 919) 00:13:50.549 3.464 - 3.478: 69.9790% ( 610) 00:13:50.549 3.478 - 3.492: 75.4452% ( 884) 
00:13:50.549 3.492 - 3.506: 80.1261% ( 757) 00:13:50.549 3.506 - 3.520: 83.0324% ( 470) 00:13:50.549 3.520 - 3.534: 85.2276% ( 355) 00:13:50.549 3.534 - 3.548: 86.3406% ( 180) 00:13:50.549 3.548 - 3.562: 87.0455% ( 114) 00:13:50.549 3.562 - 3.590: 87.8617% ( 132) 00:13:50.549 3.590 - 3.617: 89.3520% ( 241) 00:13:50.549 3.617 - 3.645: 91.2070% ( 300) 00:13:50.549 3.645 - 3.673: 92.7900% ( 256) 00:13:50.549 3.673 - 3.701: 94.6389% ( 299) 00:13:50.549 3.701 - 3.729: 96.1167% ( 239) 00:13:50.549 3.729 - 3.757: 97.6503% ( 248) 00:13:50.549 3.757 - 3.784: 98.4603% ( 131) 00:13:50.549 3.784 - 3.812: 98.9241% ( 75) 00:13:50.549 3.812 - 3.840: 99.2703% ( 56) 00:13:50.549 3.840 - 3.868: 99.4744% ( 33) 00:13:50.549 3.868 - 3.896: 99.5301% ( 9) 00:13:50.549 3.896 - 3.923: 99.5424% ( 2) 00:13:50.549 3.923 - 3.951: 99.5486% ( 1) 00:13:50.549 4.063 - 4.090: 99.5548% ( 1) 00:13:50.549 5.482 - 5.510: 99.5610% ( 1) 00:13:50.549 5.565 - 5.593: 99.5672% ( 1) 00:13:50.549 5.816 - 5.843: 99.5733% ( 1) 00:13:50.549 6.261 - 6.289: 99.5795% ( 1) 00:13:50.549 6.289 - 6.317: 99.5857% ( 1) 00:13:50.549 6.344 - 6.372: 99.5919% ( 1) 00:13:50.549 6.400 - 6.428: 99.5981% ( 1) 00:13:50.549 6.456 - 6.483: 99.6043% ( 1) 00:13:50.549 6.567 - 6.595: 99.6104% ( 1) 00:13:50.549 6.650 - 6.678: 99.6166% ( 1) 00:13:50.549 6.678 - 6.706: 99.6290% ( 2) 00:13:50.549 6.790 - 6.817: 99.6414% ( 2) 00:13:50.549 6.873 - 6.901: 99.6475% ( 1) 00:13:50.549 6.901 - 6.929: 99.6599% ( 2) 00:13:50.549 6.984 - 7.012: 99.6723% ( 2) 00:13:50.549 7.012 - 7.040: 99.6785% ( 1) 00:13:50.549 7.123 - 7.179: 99.6908% ( 2) 00:13:50.549 7.235 - 7.290: 99.6970% ( 1) 00:13:50.549 7.290 - 7.346: 99.7032% ( 1) 00:13:50.549 7.402 - 7.457: 99.7094% ( 1) 00:13:50.549 7.457 - 7.513: 99.7341% ( 4) 00:13:50.549 7.513 - 7.569: 99.7403% ( 1) 00:13:50.549 7.847 - 7.903: 99.7588% ( 3) 00:13:50.549 7.903 - 7.958: 99.7650% ( 1) 00:13:50.549 7.958 - 8.014: 99.7836% ( 3) 00:13:50.549 8.014 - 8.070: 99.7898% ( 1) 00:13:50.549 8.237 - 8.292: 99.8021% ( 
2) 00:13:50.549 8.292 - 8.348: 99.8083% ( 1) 00:13:50.549 8.515 - 8.570: 99.8145% ( 1) 00:13:50.549 8.626 - 8.682: 99.8207% ( 1) 00:13:50.549 8.682 - 8.737: 99.8269% ( 1) 00:13:50.549 8.737 - 8.793: 99.8330% ( 1) 00:13:50.549 8.849 - 8.904: 99.8392% ( 1) 00:13:50.549 8.960 - 9.016: 99.8454% ( 1) 00:13:50.549 9.071 - 9.127: 99.8516% ( 1) 00:13:50.549 19.144 - 19.256: 99.8578% ( 1) 00:13:50.549 26.824 - 26.936: 99.8640% ( 1) 00:13:50.549 3989.148 - 4017.642: 100.0000% ( 22) 00:13:50.549 00:13:50.549 [2024-11-20 07:09:54.673374] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:50.549 Complete histogram 00:13:50.549 ================== 00:13:50.549 Range in us Cumulative Count 00:13:50.549 1.781 - 1.795: 0.0062% ( 1) 00:13:50.549 1.795 - 1.809: 0.0124% ( 1) 00:13:50.549 1.809 - 1.823: 0.3092% ( 48) 00:13:50.549 1.823 - 1.837: 1.3604% ( 170) 00:13:50.549 1.837 - 1.850: 2.8321% ( 238) 00:13:50.549 1.850 - 1.864: 5.2745% ( 395) 00:13:50.549 1.864 - 1.878: 41.2812% ( 5823) 00:13:50.549 1.878 - 1.892: 82.7789% ( 6711) 00:13:50.549 1.892 - 1.906: 92.5303% ( 1577) 00:13:50.549 1.906 - 1.920: 96.3208% ( 613) 00:13:50.549 1.920 - 1.934: 97.0937% ( 125) 00:13:50.549 1.934 - 1.948: 97.8729% ( 126) 00:13:50.549 1.948 - 1.962: 98.7262% ( 138) 00:13:50.549 1.962 - 1.976: 99.1900% ( 75) 00:13:50.549 1.976 - 1.990: 99.2703% ( 13) 00:13:50.549 1.990 - 2.003: 99.2951% ( 4) 00:13:50.549 2.003 - 2.017: 99.3013% ( 1) 00:13:50.549 2.017 - 2.031: 99.3136% ( 2) 00:13:50.549 2.031 - 2.045: 99.3322% ( 3) 00:13:50.549 2.226 - 2.240: 99.3384% ( 1) 00:13:50.549 2.282 - 2.296: 99.3445% ( 1) 00:13:50.549 3.840 - 3.868: 99.3507% ( 1) 00:13:50.549 3.868 - 3.896: 99.3569% ( 1) 00:13:50.549 3.923 - 3.951: 99.3631% ( 1) 00:13:50.549 3.951 - 3.979: 99.3693% ( 1) 00:13:50.549 4.063 - 4.090: 99.3755% ( 1) 00:13:50.549 4.202 - 4.230: 99.3816% ( 1) 00:13:50.549 4.341 - 4.369: 99.3878% ( 1) 00:13:50.549 4.536 - 4.563: 99.3940% ( 1) 00:13:50.549 4.591 - 
4.619: 99.4002% ( 1) 00:13:50.549 4.786 - 4.814: 99.4064% ( 1) 00:13:50.549 4.897 - 4.925: 99.4126% ( 1) 00:13:50.549 5.037 - 5.064: 99.4187% ( 1) 00:13:50.549 5.064 - 5.092: 99.4249% ( 1) 00:13:50.549 5.259 - 5.287: 99.4373% ( 2) 00:13:50.549 5.287 - 5.315: 99.4435% ( 1) 00:13:50.549 5.537 - 5.565: 99.4497% ( 1) 00:13:50.549 5.621 - 5.649: 99.4558% ( 1) 00:13:50.549 5.649 - 5.677: 99.4620% ( 1) 00:13:50.549 5.732 - 5.760: 99.4744% ( 2) 00:13:50.549 5.760 - 5.788: 99.4806% ( 1) 00:13:50.549 6.010 - 6.038: 99.4868% ( 1) 00:13:50.549 6.177 - 6.205: 99.4930% ( 1) 00:13:50.549 6.289 - 6.317: 99.4991% ( 1) 00:13:50.549 6.344 - 6.372: 99.5053% ( 1) 00:13:50.549 6.595 - 6.623: 99.5115% ( 1) 00:13:50.549 6.623 - 6.650: 99.5177% ( 1) 00:13:50.549 6.790 - 6.817: 99.5239% ( 1) 00:13:50.549 6.901 - 6.929: 99.5301% ( 1) 00:13:50.549 7.290 - 7.346: 99.5362% ( 1) 00:13:50.549 10.685 - 10.741: 99.5424% ( 1) 00:13:50.549 11.464 - 11.520: 99.5486% ( 1) 00:13:50.549 3989.148 - 4017.642: 99.9938% ( 72) 00:13:50.549 7978.296 - 8035.283: 100.0000% ( 1) 00:13:50.549 00:13:50.549 07:09:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:13:50.549 07:09:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:13:50.549 07:09:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:13:50.549 07:09:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:13:50.549 07:09:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:13:50.549 [ 00:13:50.549 { 00:13:50.549 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:50.549 "subtype": "Discovery", 00:13:50.549 
"listen_addresses": [], 00:13:50.549 "allow_any_host": true, 00:13:50.549 "hosts": [] 00:13:50.549 }, 00:13:50.549 { 00:13:50.549 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:13:50.549 "subtype": "NVMe", 00:13:50.549 "listen_addresses": [ 00:13:50.550 { 00:13:50.550 "trtype": "VFIOUSER", 00:13:50.550 "adrfam": "IPv4", 00:13:50.550 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:13:50.550 "trsvcid": "0" 00:13:50.550 } 00:13:50.550 ], 00:13:50.550 "allow_any_host": true, 00:13:50.550 "hosts": [], 00:13:50.550 "serial_number": "SPDK1", 00:13:50.550 "model_number": "SPDK bdev Controller", 00:13:50.550 "max_namespaces": 32, 00:13:50.550 "min_cntlid": 1, 00:13:50.550 "max_cntlid": 65519, 00:13:50.550 "namespaces": [ 00:13:50.550 { 00:13:50.550 "nsid": 1, 00:13:50.550 "bdev_name": "Malloc1", 00:13:50.550 "name": "Malloc1", 00:13:50.550 "nguid": "D4C4EECB654043B6A117CE89139BDECE", 00:13:50.550 "uuid": "d4c4eecb-6540-43b6-a117-ce89139bdece" 00:13:50.550 } 00:13:50.550 ] 00:13:50.550 }, 00:13:50.550 { 00:13:50.550 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:13:50.550 "subtype": "NVMe", 00:13:50.550 "listen_addresses": [ 00:13:50.550 { 00:13:50.550 "trtype": "VFIOUSER", 00:13:50.550 "adrfam": "IPv4", 00:13:50.550 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:13:50.550 "trsvcid": "0" 00:13:50.550 } 00:13:50.550 ], 00:13:50.550 "allow_any_host": true, 00:13:50.550 "hosts": [], 00:13:50.550 "serial_number": "SPDK2", 00:13:50.550 "model_number": "SPDK bdev Controller", 00:13:50.550 "max_namespaces": 32, 00:13:50.550 "min_cntlid": 1, 00:13:50.550 "max_cntlid": 65519, 00:13:50.550 "namespaces": [ 00:13:50.550 { 00:13:50.550 "nsid": 1, 00:13:50.550 "bdev_name": "Malloc2", 00:13:50.550 "name": "Malloc2", 00:13:50.550 "nguid": "8C117A1DA5FE405285523C94AFFB9414", 00:13:50.550 "uuid": "8c117a1d-a5fe-4052-8552-3c94affb9414" 00:13:50.550 } 00:13:50.550 ] 00:13:50.550 } 00:13:50.550 ] 00:13:50.550 07:09:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # 
AER_TOUCH_FILE=/tmp/aer_touch_file 00:13:50.550 07:09:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=1156472 00:13:50.550 07:09:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:13:50.550 07:09:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:13:50.550 07:09:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1267 -- # local i=0 00:13:50.550 07:09:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:13:50.550 07:09:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1274 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:13:50.550 07:09:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1278 -- # return 0 00:13:50.550 07:09:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:13:50.550 07:09:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:13:50.550 [2024-11-20 07:09:55.076343] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:50.809 Malloc3 00:13:50.809 07:09:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:13:50.809 [2024-11-20 07:09:55.310191] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:50.809 07:09:55 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:13:50.809 Asynchronous Event Request test 00:13:50.809 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:13:50.809 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:13:50.809 Registering asynchronous event callbacks... 00:13:50.809 Starting namespace attribute notice tests for all controllers... 00:13:50.809 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:13:50.809 aer_cb - Changed Namespace 00:13:50.809 Cleaning up... 00:13:51.068 [ 00:13:51.068 { 00:13:51.068 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:51.068 "subtype": "Discovery", 00:13:51.068 "listen_addresses": [], 00:13:51.068 "allow_any_host": true, 00:13:51.068 "hosts": [] 00:13:51.068 }, 00:13:51.068 { 00:13:51.068 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:13:51.068 "subtype": "NVMe", 00:13:51.068 "listen_addresses": [ 00:13:51.068 { 00:13:51.068 "trtype": "VFIOUSER", 00:13:51.069 "adrfam": "IPv4", 00:13:51.069 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:13:51.069 "trsvcid": "0" 00:13:51.069 } 00:13:51.069 ], 00:13:51.069 "allow_any_host": true, 00:13:51.069 "hosts": [], 00:13:51.069 "serial_number": "SPDK1", 00:13:51.069 "model_number": "SPDK bdev Controller", 00:13:51.069 "max_namespaces": 32, 00:13:51.069 "min_cntlid": 1, 00:13:51.069 "max_cntlid": 65519, 00:13:51.069 "namespaces": [ 00:13:51.069 { 00:13:51.069 "nsid": 1, 00:13:51.069 "bdev_name": "Malloc1", 00:13:51.069 "name": "Malloc1", 00:13:51.069 "nguid": "D4C4EECB654043B6A117CE89139BDECE", 00:13:51.069 "uuid": "d4c4eecb-6540-43b6-a117-ce89139bdece" 00:13:51.069 }, 00:13:51.069 { 00:13:51.069 "nsid": 2, 00:13:51.069 "bdev_name": "Malloc3", 00:13:51.069 "name": "Malloc3", 00:13:51.069 "nguid": "E024DED3B35C442F86DDD6EB08D803FA", 00:13:51.069 "uuid": "e024ded3-b35c-442f-86dd-d6eb08d803fa" 
00:13:51.069 } 00:13:51.069 ] 00:13:51.069 }, 00:13:51.069 { 00:13:51.069 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:13:51.069 "subtype": "NVMe", 00:13:51.069 "listen_addresses": [ 00:13:51.069 { 00:13:51.069 "trtype": "VFIOUSER", 00:13:51.069 "adrfam": "IPv4", 00:13:51.069 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:13:51.069 "trsvcid": "0" 00:13:51.069 } 00:13:51.069 ], 00:13:51.069 "allow_any_host": true, 00:13:51.069 "hosts": [], 00:13:51.069 "serial_number": "SPDK2", 00:13:51.069 "model_number": "SPDK bdev Controller", 00:13:51.069 "max_namespaces": 32, 00:13:51.069 "min_cntlid": 1, 00:13:51.069 "max_cntlid": 65519, 00:13:51.069 "namespaces": [ 00:13:51.069 { 00:13:51.069 "nsid": 1, 00:13:51.069 "bdev_name": "Malloc2", 00:13:51.069 "name": "Malloc2", 00:13:51.069 "nguid": "8C117A1DA5FE405285523C94AFFB9414", 00:13:51.069 "uuid": "8c117a1d-a5fe-4052-8552-3c94affb9414" 00:13:51.069 } 00:13:51.069 ] 00:13:51.069 } 00:13:51.069 ] 00:13:51.069 07:09:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 1156472 00:13:51.069 07:09:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:51.069 07:09:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:13:51.069 07:09:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:13:51.069 07:09:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:13:51.069 [2024-11-20 07:09:55.571908] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 
00:13:51.069 [2024-11-20 07:09:55.571955] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1156682 ] 00:13:51.069 [2024-11-20 07:09:55.612750] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:13:51.330 [2024-11-20 07:09:55.621194] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:13:51.330 [2024-11-20 07:09:55.621218] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fe7986d3000 00:13:51.330 [2024-11-20 07:09:55.622201] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:51.330 [2024-11-20 07:09:55.623204] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:51.330 [2024-11-20 07:09:55.624208] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:51.330 [2024-11-20 07:09:55.625221] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:51.330 [2024-11-20 07:09:55.626229] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:51.330 [2024-11-20 07:09:55.627233] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:51.330 [2024-11-20 07:09:55.628242] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:51.330 
[2024-11-20 07:09:55.629250] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:51.330 [2024-11-20 07:09:55.630256] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:13:51.330 [2024-11-20 07:09:55.630267] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fe7986c8000 00:13:51.330 [2024-11-20 07:09:55.631210] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:13:51.330 [2024-11-20 07:09:55.640739] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:13:51.330 [2024-11-20 07:09:55.640763] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to connect adminq (no timeout) 00:13:51.330 [2024-11-20 07:09:55.645855] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:13:51.330 [2024-11-20 07:09:55.645894] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:13:51.330 [2024-11-20 07:09:55.645967] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for connect adminq (no timeout) 00:13:51.330 [2024-11-20 07:09:55.645981] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs (no timeout) 00:13:51.330 [2024-11-20 07:09:55.645994] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs wait for vs (no timeout) 00:13:51.330 [2024-11-20 07:09:55.646863] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: 
ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:13:51.330 [2024-11-20 07:09:55.646875] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap (no timeout) 00:13:51.330 [2024-11-20 07:09:55.646882] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap wait for cap (no timeout) 00:13:51.330 [2024-11-20 07:09:55.647865] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:13:51.330 [2024-11-20 07:09:55.647873] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en (no timeout) 00:13:51.330 [2024-11-20 07:09:55.647880] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en wait for cc (timeout 15000 ms) 00:13:51.330 [2024-11-20 07:09:55.648875] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:13:51.330 [2024-11-20 07:09:55.648884] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:13:51.330 [2024-11-20 07:09:55.649880] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:13:51.330 [2024-11-20 07:09:55.649889] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 0 && CSTS.RDY = 0 00:13:51.330 [2024-11-20 07:09:55.649894] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to controller is disabled (timeout 15000 ms) 00:13:51.330 [2024-11-20 07:09:55.649900] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:13:51.330 [2024-11-20 07:09:55.650008] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Setting CC.EN = 1 00:13:51.330 [2024-11-20 07:09:55.650014] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:13:51.330 [2024-11-20 07:09:55.650019] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:13:51.330 [2024-11-20 07:09:55.650887] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:13:51.330 [2024-11-20 07:09:55.651893] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:13:51.330 [2024-11-20 07:09:55.652901] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:13:51.330 [2024-11-20 07:09:55.653899] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:51.330 [2024-11-20 07:09:55.653939] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:13:51.330 [2024-11-20 07:09:55.654911] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:13:51.330 [2024-11-20 07:09:55.654920] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:13:51.330 [2024-11-20 07:09:55.654924] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to reset admin queue (timeout 30000 ms) 00:13:51.330 [2024-11-20 07:09:55.654942] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller (no timeout) 00:13:51.330 [2024-11-20 07:09:55.654954] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify controller (timeout 30000 ms) 00:13:51.330 [2024-11-20 07:09:55.654968] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:51.330 [2024-11-20 07:09:55.654973] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:51.330 [2024-11-20 07:09:55.654976] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:51.330 [2024-11-20 07:09:55.654988] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:51.330 [2024-11-20 07:09:55.662958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:13:51.330 [2024-11-20 07:09:55.662971] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_xfer_size 131072 00:13:51.330 [2024-11-20 07:09:55.662976] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] MDTS max_xfer_size 131072 00:13:51.330 [2024-11-20 07:09:55.662980] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CNTLID 0x0001 00:13:51.330 [2024-11-20 07:09:55.662985] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:13:51.330 [2024-11-20 07:09:55.662992] 
nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_sges 1 00:13:51.331 [2024-11-20 07:09:55.662997] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] fuses compare and write: 1 00:13:51.331 [2024-11-20 07:09:55.663001] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to configure AER (timeout 30000 ms) 00:13:51.331 [2024-11-20 07:09:55.663010] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for configure aer (timeout 30000 ms) 00:13:51.331 [2024-11-20 07:09:55.663020] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:13:51.331 [2024-11-20 07:09:55.670954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:13:51.331 [2024-11-20 07:09:55.670968] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:51.331 [2024-11-20 07:09:55.670976] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:51.331 [2024-11-20 07:09:55.670985] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:51.331 [2024-11-20 07:09:55.670992] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:51.331 [2024-11-20 07:09:55.670997] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:13:51.331 [2024-11-20 07:09:55.671003] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:13:51.331 [2024-11-20 07:09:55.671012] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:13:51.331 [2024-11-20 07:09:55.678675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:13:51.331 [2024-11-20 07:09:55.678686] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Controller adjusted keep alive timeout to 0 ms 00:13:51.331 [2024-11-20 07:09:55.678691] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:13:51.331 [2024-11-20 07:09:55.678698] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set number of queues (timeout 30000 ms) 00:13:51.331 [2024-11-20 07:09:55.678705] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:13:51.331 [2024-11-20 07:09:55.678713] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:13:51.331 [2024-11-20 07:09:55.689953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:13:51.331 [2024-11-20 07:09:55.690009] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify active ns (timeout 30000 ms) 00:13:51.331 [2024-11-20 07:09:55.690016] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:13:51.331 
[2024-11-20 07:09:55.690023] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:13:51.331 [2024-11-20 07:09:55.690028] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:13:51.331 [2024-11-20 07:09:55.690031] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:51.331 [2024-11-20 07:09:55.690037] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:13:51.331 [2024-11-20 07:09:55.697954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:13:51.331 [2024-11-20 07:09:55.697965] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Namespace 1 was added 00:13:51.331 [2024-11-20 07:09:55.697978] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns (timeout 30000 ms) 00:13:51.331 [2024-11-20 07:09:55.697985] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify ns (timeout 30000 ms) 00:13:51.331 [2024-11-20 07:09:55.697991] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:51.331 [2024-11-20 07:09:55.697995] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:51.331 [2024-11-20 07:09:55.697999] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:51.331 [2024-11-20 07:09:55.698004] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:51.331 [2024-11-20 07:09:55.705953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS 
(00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:13:51.331 [2024-11-20 07:09:55.705967] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:13:51.331 [2024-11-20 07:09:55.705975] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:13:51.331 [2024-11-20 07:09:55.705981] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:51.331 [2024-11-20 07:09:55.705985] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:51.331 [2024-11-20 07:09:55.705988] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:51.331 [2024-11-20 07:09:55.705994] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:51.331 [2024-11-20 07:09:55.713953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:13:51.331 [2024-11-20 07:09:55.713962] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:13:51.331 [2024-11-20 07:09:55.713971] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported log pages (timeout 30000 ms) 00:13:51.331 [2024-11-20 07:09:55.713978] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported features (timeout 30000 ms) 00:13:51.331 [2024-11-20 07:09:55.713984] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host behavior 
support feature (timeout 30000 ms) 00:13:51.331 [2024-11-20 07:09:55.713988] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:13:51.331 [2024-11-20 07:09:55.713993] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host ID (timeout 30000 ms) 00:13:51.331 [2024-11-20 07:09:55.713998] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] NVMe-oF transport - not sending Set Features - Host ID 00:13:51.331 [2024-11-20 07:09:55.714002] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to transport ready (timeout 30000 ms) 00:13:51.331 [2024-11-20 07:09:55.714007] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to ready (no timeout) 00:13:51.331 [2024-11-20 07:09:55.714021] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:13:51.331 [2024-11-20 07:09:55.721952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:13:51.331 [2024-11-20 07:09:55.721965] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:13:51.331 [2024-11-20 07:09:55.729953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:13:51.331 [2024-11-20 07:09:55.729965] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:13:51.331 [2024-11-20 07:09:55.737953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:13:51.331 [2024-11-20 
07:09:55.737965] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:13:51.331 [2024-11-20 07:09:55.745951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:13:51.331 [2024-11-20 07:09:55.745966] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:13:51.331 [2024-11-20 07:09:55.745971] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:13:51.331 [2024-11-20 07:09:55.745974] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:13:51.331 [2024-11-20 07:09:55.745977] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:13:51.331 [2024-11-20 07:09:55.745980] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:13:51.331 [2024-11-20 07:09:55.745986] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:13:51.331 [2024-11-20 07:09:55.745993] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:13:51.331 [2024-11-20 07:09:55.745997] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:13:51.331 [2024-11-20 07:09:55.746000] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:51.331 [2024-11-20 07:09:55.746006] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:13:51.331 [2024-11-20 07:09:55.746014] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:13:51.331 [2024-11-20 07:09:55.746018] 
nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:51.331 [2024-11-20 07:09:55.746021] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:51.331 [2024-11-20 07:09:55.746027] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:51.331 [2024-11-20 07:09:55.746033] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:13:51.331 [2024-11-20 07:09:55.746037] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:13:51.331 [2024-11-20 07:09:55.746041] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:51.331 [2024-11-20 07:09:55.746046] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:13:51.331 [2024-11-20 07:09:55.753952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:13:51.331 [2024-11-20 07:09:55.753974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:13:51.331 [2024-11-20 07:09:55.753985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:13:51.331 [2024-11-20 07:09:55.753991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:13:51.331 ===================================================== 00:13:51.331 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:51.331 ===================================================== 00:13:51.332 Controller Capabilities/Features 00:13:51.332 
================================ 00:13:51.332 Vendor ID: 4e58 00:13:51.332 Subsystem Vendor ID: 4e58 00:13:51.332 Serial Number: SPDK2 00:13:51.332 Model Number: SPDK bdev Controller 00:13:51.332 Firmware Version: 25.01 00:13:51.332 Recommended Arb Burst: 6 00:13:51.332 IEEE OUI Identifier: 8d 6b 50 00:13:51.332 Multi-path I/O 00:13:51.332 May have multiple subsystem ports: Yes 00:13:51.332 May have multiple controllers: Yes 00:13:51.332 Associated with SR-IOV VF: No 00:13:51.332 Max Data Transfer Size: 131072 00:13:51.332 Max Number of Namespaces: 32 00:13:51.332 Max Number of I/O Queues: 127 00:13:51.332 NVMe Specification Version (VS): 1.3 00:13:51.332 NVMe Specification Version (Identify): 1.3 00:13:51.332 Maximum Queue Entries: 256 00:13:51.332 Contiguous Queues Required: Yes 00:13:51.332 Arbitration Mechanisms Supported 00:13:51.332 Weighted Round Robin: Not Supported 00:13:51.332 Vendor Specific: Not Supported 00:13:51.332 Reset Timeout: 15000 ms 00:13:51.332 Doorbell Stride: 4 bytes 00:13:51.332 NVM Subsystem Reset: Not Supported 00:13:51.332 Command Sets Supported 00:13:51.332 NVM Command Set: Supported 00:13:51.332 Boot Partition: Not Supported 00:13:51.332 Memory Page Size Minimum: 4096 bytes 00:13:51.332 Memory Page Size Maximum: 4096 bytes 00:13:51.332 Persistent Memory Region: Not Supported 00:13:51.332 Optional Asynchronous Events Supported 00:13:51.332 Namespace Attribute Notices: Supported 00:13:51.332 Firmware Activation Notices: Not Supported 00:13:51.332 ANA Change Notices: Not Supported 00:13:51.332 PLE Aggregate Log Change Notices: Not Supported 00:13:51.332 LBA Status Info Alert Notices: Not Supported 00:13:51.332 EGE Aggregate Log Change Notices: Not Supported 00:13:51.332 Normal NVM Subsystem Shutdown event: Not Supported 00:13:51.332 Zone Descriptor Change Notices: Not Supported 00:13:51.332 Discovery Log Change Notices: Not Supported 00:13:51.332 Controller Attributes 00:13:51.332 128-bit Host Identifier: Supported 00:13:51.332 
Non-Operational Permissive Mode: Not Supported 00:13:51.332 NVM Sets: Not Supported 00:13:51.332 Read Recovery Levels: Not Supported 00:13:51.332 Endurance Groups: Not Supported 00:13:51.332 Predictable Latency Mode: Not Supported 00:13:51.332 Traffic Based Keep ALive: Not Supported 00:13:51.332 Namespace Granularity: Not Supported 00:13:51.332 SQ Associations: Not Supported 00:13:51.332 UUID List: Not Supported 00:13:51.332 Multi-Domain Subsystem: Not Supported 00:13:51.332 Fixed Capacity Management: Not Supported 00:13:51.332 Variable Capacity Management: Not Supported 00:13:51.332 Delete Endurance Group: Not Supported 00:13:51.332 Delete NVM Set: Not Supported 00:13:51.332 Extended LBA Formats Supported: Not Supported 00:13:51.332 Flexible Data Placement Supported: Not Supported 00:13:51.332 00:13:51.332 Controller Memory Buffer Support 00:13:51.332 ================================ 00:13:51.332 Supported: No 00:13:51.332 00:13:51.332 Persistent Memory Region Support 00:13:51.332 ================================ 00:13:51.332 Supported: No 00:13:51.332 00:13:51.332 Admin Command Set Attributes 00:13:51.332 ============================ 00:13:51.332 Security Send/Receive: Not Supported 00:13:51.332 Format NVM: Not Supported 00:13:51.332 Firmware Activate/Download: Not Supported 00:13:51.332 Namespace Management: Not Supported 00:13:51.332 Device Self-Test: Not Supported 00:13:51.332 Directives: Not Supported 00:13:51.332 NVMe-MI: Not Supported 00:13:51.332 Virtualization Management: Not Supported 00:13:51.332 Doorbell Buffer Config: Not Supported 00:13:51.332 Get LBA Status Capability: Not Supported 00:13:51.332 Command & Feature Lockdown Capability: Not Supported 00:13:51.332 Abort Command Limit: 4 00:13:51.332 Async Event Request Limit: 4 00:13:51.332 Number of Firmware Slots: N/A 00:13:51.332 Firmware Slot 1 Read-Only: N/A 00:13:51.332 Firmware Activation Without Reset: N/A 00:13:51.332 Multiple Update Detection Support: N/A 00:13:51.332 Firmware Update 
Granularity: No Information Provided 00:13:51.332 Per-Namespace SMART Log: No 00:13:51.332 Asymmetric Namespace Access Log Page: Not Supported 00:13:51.332 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:13:51.332 Command Effects Log Page: Supported 00:13:51.332 Get Log Page Extended Data: Supported 00:13:51.332 Telemetry Log Pages: Not Supported 00:13:51.332 Persistent Event Log Pages: Not Supported 00:13:51.332 Supported Log Pages Log Page: May Support 00:13:51.332 Commands Supported & Effects Log Page: Not Supported 00:13:51.332 Feature Identifiers & Effects Log Page:May Support 00:13:51.332 NVMe-MI Commands & Effects Log Page: May Support 00:13:51.332 Data Area 4 for Telemetry Log: Not Supported 00:13:51.332 Error Log Page Entries Supported: 128 00:13:51.332 Keep Alive: Supported 00:13:51.332 Keep Alive Granularity: 10000 ms 00:13:51.332 00:13:51.332 NVM Command Set Attributes 00:13:51.332 ========================== 00:13:51.332 Submission Queue Entry Size 00:13:51.332 Max: 64 00:13:51.332 Min: 64 00:13:51.332 Completion Queue Entry Size 00:13:51.332 Max: 16 00:13:51.332 Min: 16 00:13:51.332 Number of Namespaces: 32 00:13:51.332 Compare Command: Supported 00:13:51.332 Write Uncorrectable Command: Not Supported 00:13:51.332 Dataset Management Command: Supported 00:13:51.332 Write Zeroes Command: Supported 00:13:51.332 Set Features Save Field: Not Supported 00:13:51.332 Reservations: Not Supported 00:13:51.332 Timestamp: Not Supported 00:13:51.332 Copy: Supported 00:13:51.332 Volatile Write Cache: Present 00:13:51.332 Atomic Write Unit (Normal): 1 00:13:51.332 Atomic Write Unit (PFail): 1 00:13:51.332 Atomic Compare & Write Unit: 1 00:13:51.332 Fused Compare & Write: Supported 00:13:51.332 Scatter-Gather List 00:13:51.332 SGL Command Set: Supported (Dword aligned) 00:13:51.332 SGL Keyed: Not Supported 00:13:51.332 SGL Bit Bucket Descriptor: Not Supported 00:13:51.332 SGL Metadata Pointer: Not Supported 00:13:51.332 Oversized SGL: Not Supported 00:13:51.332 SGL 
Metadata Address: Not Supported 00:13:51.332 SGL Offset: Not Supported 00:13:51.332 Transport SGL Data Block: Not Supported 00:13:51.332 Replay Protected Memory Block: Not Supported 00:13:51.332 00:13:51.332 Firmware Slot Information 00:13:51.332 ========================= 00:13:51.332 Active slot: 1 00:13:51.332 Slot 1 Firmware Revision: 25.01 00:13:51.332 00:13:51.332 00:13:51.332 Commands Supported and Effects 00:13:51.332 ============================== 00:13:51.332 Admin Commands 00:13:51.332 -------------- 00:13:51.332 Get Log Page (02h): Supported 00:13:51.332 Identify (06h): Supported 00:13:51.332 Abort (08h): Supported 00:13:51.332 Set Features (09h): Supported 00:13:51.332 Get Features (0Ah): Supported 00:13:51.332 Asynchronous Event Request (0Ch): Supported 00:13:51.332 Keep Alive (18h): Supported 00:13:51.332 I/O Commands 00:13:51.332 ------------ 00:13:51.332 Flush (00h): Supported LBA-Change 00:13:51.332 Write (01h): Supported LBA-Change 00:13:51.332 Read (02h): Supported 00:13:51.332 Compare (05h): Supported 00:13:51.332 Write Zeroes (08h): Supported LBA-Change 00:13:51.332 Dataset Management (09h): Supported LBA-Change 00:13:51.332 Copy (19h): Supported LBA-Change 00:13:51.332 00:13:51.332 Error Log 00:13:51.332 ========= 00:13:51.332 00:13:51.332 Arbitration 00:13:51.332 =========== 00:13:51.332 Arbitration Burst: 1 00:13:51.332 00:13:51.332 Power Management 00:13:51.332 ================ 00:13:51.332 Number of Power States: 1 00:13:51.332 Current Power State: Power State #0 00:13:51.332 Power State #0: 00:13:51.332 Max Power: 0.00 W 00:13:51.332 Non-Operational State: Operational 00:13:51.332 Entry Latency: Not Reported 00:13:51.332 Exit Latency: Not Reported 00:13:51.332 Relative Read Throughput: 0 00:13:51.332 Relative Read Latency: 0 00:13:51.332 Relative Write Throughput: 0 00:13:51.332 Relative Write Latency: 0 00:13:51.332 Idle Power: Not Reported 00:13:51.332 Active Power: Not Reported 00:13:51.332 Non-Operational Permissive Mode: Not 
Supported 00:13:51.332 00:13:51.332 Health Information 00:13:51.332 ================== 00:13:51.332 Critical Warnings: 00:13:51.332 Available Spare Space: OK 00:13:51.332 Temperature: OK 00:13:51.332 Device Reliability: OK 00:13:51.332 Read Only: No 00:13:51.332 Volatile Memory Backup: OK 00:13:51.332 Current Temperature: 0 Kelvin (-273 Celsius) 00:13:51.332 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:13:51.332 Available Spare: 0% 00:13:51.333 Available Sp[2024-11-20 07:09:55.754082] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:13:51.333 [2024-11-20 07:09:55.761954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:13:51.333 [2024-11-20 07:09:55.761984] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Prepare to destruct SSD 00:13:51.333 [2024-11-20 07:09:55.761993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:51.333 [2024-11-20 07:09:55.761999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:51.333 [2024-11-20 07:09:55.762005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:51.333 [2024-11-20 07:09:55.762010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:51.333 [2024-11-20 07:09:55.762219] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:13:51.333 [2024-11-20 07:09:55.762229] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:13:51.333 
[2024-11-20 07:09:55.763219] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:51.333 [2024-11-20 07:09:55.763263] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] RTD3E = 0 us 00:13:51.333 [2024-11-20 07:09:55.763269] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown timeout = 10000 ms 00:13:51.333 [2024-11-20 07:09:55.764230] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:13:51.333 [2024-11-20 07:09:55.764242] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown complete in 0 milliseconds 00:13:51.333 [2024-11-20 07:09:55.764287] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:13:51.333 [2024-11-20 07:09:55.766954] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:13:51.333 are Threshold: 0% 00:13:51.333 Life Percentage Used: 0% 00:13:51.333 Data Units Read: 0 00:13:51.333 Data Units Written: 0 00:13:51.333 Host Read Commands: 0 00:13:51.333 Host Write Commands: 0 00:13:51.333 Controller Busy Time: 0 minutes 00:13:51.333 Power Cycles: 0 00:13:51.333 Power On Hours: 0 hours 00:13:51.333 Unsafe Shutdowns: 0 00:13:51.333 Unrecoverable Media Errors: 0 00:13:51.333 Lifetime Error Log Entries: 0 00:13:51.333 Warning Temperature Time: 0 minutes 00:13:51.333 Critical Temperature Time: 0 minutes 00:13:51.333 00:13:51.333 Number of Queues 00:13:51.333 ================ 00:13:51.333 Number of I/O Submission Queues: 127 00:13:51.333 Number of I/O Completion Queues: 127 00:13:51.333 00:13:51.333 Active Namespaces 00:13:51.333 ================= 00:13:51.333 Namespace ID:1 00:13:51.333 Error Recovery Timeout: Unlimited 
00:13:51.333 Command Set Identifier: NVM (00h) 00:13:51.333 Deallocate: Supported 00:13:51.333 Deallocated/Unwritten Error: Not Supported 00:13:51.333 Deallocated Read Value: Unknown 00:13:51.333 Deallocate in Write Zeroes: Not Supported 00:13:51.333 Deallocated Guard Field: 0xFFFF 00:13:51.333 Flush: Supported 00:13:51.333 Reservation: Supported 00:13:51.333 Namespace Sharing Capabilities: Multiple Controllers 00:13:51.333 Size (in LBAs): 131072 (0GiB) 00:13:51.333 Capacity (in LBAs): 131072 (0GiB) 00:13:51.333 Utilization (in LBAs): 131072 (0GiB) 00:13:51.333 NGUID: 8C117A1DA5FE405285523C94AFFB9414 00:13:51.333 UUID: 8c117a1d-a5fe-4052-8552-3c94affb9414 00:13:51.333 Thin Provisioning: Not Supported 00:13:51.333 Per-NS Atomic Units: Yes 00:13:51.333 Atomic Boundary Size (Normal): 0 00:13:51.333 Atomic Boundary Size (PFail): 0 00:13:51.333 Atomic Boundary Offset: 0 00:13:51.333 Maximum Single Source Range Length: 65535 00:13:51.333 Maximum Copy Length: 65535 00:13:51.333 Maximum Source Range Count: 1 00:13:51.333 NGUID/EUI64 Never Reused: No 00:13:51.333 Namespace Write Protected: No 00:13:51.333 Number of LBA Formats: 1 00:13:51.333 Current LBA Format: LBA Format #00 00:13:51.333 LBA Format #00: Data Size: 512 Metadata Size: 0 00:13:51.333 00:13:51.333 07:09:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:13:51.592 [2024-11-20 07:09:56.004512] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:56.865 Initializing NVMe Controllers 00:13:56.865 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:56.865 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 
00:13:56.865 Initialization complete. Launching workers. 00:13:56.865 ======================================================== 00:13:56.865 Latency(us) 00:13:56.865 Device Information : IOPS MiB/s Average min max 00:13:56.865 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39907.67 155.89 3207.21 950.94 7673.12 00:13:56.865 ======================================================== 00:13:56.865 Total : 39907.67 155.89 3207.21 950.94 7673.12 00:13:56.865 00:13:56.865 [2024-11-20 07:10:01.107211] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:56.865 07:10:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:13:56.865 [2024-11-20 07:10:01.342877] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:02.136 Initializing NVMe Controllers 00:14:02.136 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:02.136 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:14:02.136 Initialization complete. Launching workers. 
00:14:02.136 ======================================================== 00:14:02.136 Latency(us) 00:14:02.136 Device Information : IOPS MiB/s Average min max 00:14:02.136 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39953.58 156.07 3203.53 952.83 7130.44 00:14:02.136 ======================================================== 00:14:02.136 Total : 39953.58 156.07 3203.53 952.83 7130.44 00:14:02.136 00:14:02.136 [2024-11-20 07:10:06.360611] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:02.136 07:10:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:14:02.136 [2024-11-20 07:10:06.566004] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:07.411 [2024-11-20 07:10:11.703044] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:07.411 Initializing NVMe Controllers 00:14:07.411 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:07.411 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:07.411 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:14:07.411 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:14:07.411 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:14:07.411 Initialization complete. Launching workers. 
00:14:07.411 Starting thread on core 2 00:14:07.411 Starting thread on core 3 00:14:07.411 Starting thread on core 1 00:14:07.411 07:10:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:14:07.671 [2024-11-20 07:10:11.999365] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:10.962 [2024-11-20 07:10:15.076930] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:10.962 Initializing NVMe Controllers 00:14:10.962 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:14:10.962 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:14:10.962 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:14:10.962 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:14:10.962 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:14:10.962 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:14:10.962 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:14:10.962 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:14:10.962 Initialization complete. Launching workers. 
00:14:10.962 Starting thread on core 1 with urgent priority queue 00:14:10.962 Starting thread on core 2 with urgent priority queue 00:14:10.962 Starting thread on core 3 with urgent priority queue 00:14:10.962 Starting thread on core 0 with urgent priority queue 00:14:10.962 SPDK bdev Controller (SPDK2 ) core 0: 8115.33 IO/s 12.32 secs/100000 ios 00:14:10.962 SPDK bdev Controller (SPDK2 ) core 1: 8712.00 IO/s 11.48 secs/100000 ios 00:14:10.962 SPDK bdev Controller (SPDK2 ) core 2: 7647.00 IO/s 13.08 secs/100000 ios 00:14:10.962 SPDK bdev Controller (SPDK2 ) core 3: 8845.67 IO/s 11.30 secs/100000 ios 00:14:10.962 ======================================================== 00:14:10.962 00:14:10.962 07:10:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:14:10.962 [2024-11-20 07:10:15.366342] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:10.962 Initializing NVMe Controllers 00:14:10.962 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:14:10.962 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:14:10.962 Namespace ID: 1 size: 0GB 00:14:10.962 Initialization complete. 00:14:10.962 INFO: using host memory buffer for IO 00:14:10.962 Hello world! 
00:14:10.962 [2024-11-20 07:10:15.376407] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:10.962 07:10:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:14:11.220 [2024-11-20 07:10:15.659918] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:12.597 Initializing NVMe Controllers 00:14:12.597 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:14:12.597 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:14:12.597 Initialization complete. Launching workers. 00:14:12.597 submit (in ns) avg, min, max = 6621.9, 3220.9, 4000497.4 00:14:12.597 complete (in ns) avg, min, max = 22028.1, 1778.3, 7986699.1 00:14:12.597 00:14:12.597 Submit histogram 00:14:12.597 ================ 00:14:12.597 Range in us Cumulative Count 00:14:12.597 3.214 - 3.228: 0.0061% ( 1) 00:14:12.597 3.228 - 3.242: 0.0243% ( 3) 00:14:12.597 3.242 - 3.256: 0.0364% ( 2) 00:14:12.598 3.256 - 3.270: 0.0728% ( 6) 00:14:12.598 3.270 - 3.283: 0.3337% ( 43) 00:14:12.598 3.283 - 3.297: 1.6322% ( 214) 00:14:12.598 3.297 - 3.311: 4.4961% ( 472) 00:14:12.598 3.311 - 3.325: 7.9243% ( 565) 00:14:12.598 3.325 - 3.339: 11.7044% ( 623) 00:14:12.598 3.339 - 3.353: 17.0499% ( 881) 00:14:12.598 3.353 - 3.367: 22.4926% ( 897) 00:14:12.598 3.367 - 3.381: 27.6500% ( 850) 00:14:12.598 3.381 - 3.395: 33.2261% ( 919) 00:14:12.598 3.395 - 3.409: 38.5171% ( 872) 00:14:12.598 3.409 - 3.423: 42.9646% ( 733) 00:14:12.598 3.423 - 3.437: 47.5214% ( 751) 00:14:12.598 3.437 - 3.450: 53.6921% ( 1017) 00:14:12.598 3.450 - 3.464: 59.6141% ( 976) 00:14:12.598 3.464 - 3.478: 63.5398% ( 647) 00:14:12.598 3.478 - 3.492: 68.2604% ( 778) 00:14:12.598 3.492 - 3.506: 73.7031% ( 897) 
00:14:12.598 3.506 - 3.520: 78.2113% ( 743) 00:14:12.598 3.520 - 3.534: 80.9417% ( 450) 00:14:12.598 3.534 - 3.548: 83.3991% ( 405) 00:14:12.598 3.548 - 3.562: 85.3407% ( 320) 00:14:12.598 3.562 - 3.590: 87.6342% ( 378) 00:14:12.598 3.590 - 3.617: 88.9084% ( 210) 00:14:12.598 3.617 - 3.645: 90.2494% ( 221) 00:14:12.598 3.645 - 3.673: 91.7420% ( 246) 00:14:12.598 3.673 - 3.701: 93.3924% ( 272) 00:14:12.598 3.701 - 3.729: 95.2187% ( 301) 00:14:12.598 3.729 - 3.757: 96.6204% ( 231) 00:14:12.598 3.757 - 3.784: 97.7064% ( 179) 00:14:12.598 3.784 - 3.812: 98.5863% ( 145) 00:14:12.598 3.812 - 3.840: 99.0838% ( 82) 00:14:12.598 3.840 - 3.868: 99.3690% ( 47) 00:14:12.598 3.868 - 3.896: 99.5571% ( 31) 00:14:12.598 3.896 - 3.923: 99.6541% ( 16) 00:14:12.598 3.923 - 3.951: 99.6723% ( 3) 00:14:12.598 5.370 - 5.398: 99.6784% ( 1) 00:14:12.598 5.398 - 5.426: 99.6845% ( 1) 00:14:12.598 5.565 - 5.593: 99.6966% ( 2) 00:14:12.598 6.038 - 6.066: 99.7027% ( 1) 00:14:12.598 6.150 - 6.177: 99.7088% ( 1) 00:14:12.598 6.317 - 6.344: 99.7148% ( 1) 00:14:12.598 6.456 - 6.483: 99.7270% ( 2) 00:14:12.598 6.483 - 6.511: 99.7330% ( 1) 00:14:12.598 6.706 - 6.734: 99.7391% ( 1) 00:14:12.598 6.762 - 6.790: 99.7452% ( 1) 00:14:12.598 6.873 - 6.901: 99.7512% ( 1) 00:14:12.598 6.984 - 7.012: 99.7573% ( 1) 00:14:12.598 7.040 - 7.068: 99.7634% ( 1) 00:14:12.598 7.290 - 7.346: 99.7694% ( 1) 00:14:12.598 7.346 - 7.402: 99.7755% ( 1) 00:14:12.598 7.457 - 7.513: 99.7816% ( 1) 00:14:12.598 7.791 - 7.847: 99.7876% ( 1) 00:14:12.598 7.847 - 7.903: 99.7937% ( 1) 00:14:12.598 7.903 - 7.958: 99.7998% ( 1) 00:14:12.598 8.014 - 8.070: 99.8119% ( 2) 00:14:12.598 8.070 - 8.125: 99.8180% ( 1) 00:14:12.598 8.125 - 8.181: 99.8240% ( 1) 00:14:12.598 8.181 - 8.237: 99.8301% ( 1) 00:14:12.598 8.292 - 8.348: 99.8362% ( 1) 00:14:12.598 8.348 - 8.403: 99.8483% ( 2) 00:14:12.598 8.403 - 8.459: 99.8544% ( 1) 00:14:12.598 8.459 - 8.515: 99.8604% ( 1) 00:14:12.598 8.515 - 8.570: 99.8665% ( 1) 00:14:12.598 8.737 - 8.793: 99.8726% 
( 1) 00:14:12.598 8.960 - 9.016: 99.8786% ( 1) 00:14:12.598 9.016 - 9.071: 99.8847% ( 1) 00:14:12.598 9.127 - 9.183: 99.8908% ( 1) 00:14:12.598 9.517 - 9.572: 99.8969% ( 1) 00:14:12.598 9.906 - 9.962: 99.9029% ( 1) 00:14:12.598 10.407 - 10.463: 99.9090% ( 1) 00:14:12.598 12.243 - 12.299: 99.9151% ( 1) 00:14:12.598 15.360 - 15.471: 99.9211% ( 1) 00:14:12.598 3989.148 - 4017.642: 100.0000% ( 13) 00:14:12.598 00:14:12.598 Complete histogram 00:14:12.598 ================== 00:14:12.598 Range in us Cumulative Count 00:14:12.598 1.774 - [2024-11-20 07:10:16.758041] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:12.598 1.781: 0.0121% ( 2) 00:14:12.598 1.781 - 1.795: 0.0364% ( 4) 00:14:12.598 1.795 - 1.809: 0.0485% ( 2) 00:14:12.598 1.809 - 1.823: 0.0667% ( 3) 00:14:12.598 1.823 - 1.837: 3.0945% ( 499) 00:14:12.598 1.837 - 1.850: 17.5778% ( 2387) 00:14:12.598 1.850 - 1.864: 25.0956% ( 1239) 00:14:12.598 1.864 - 1.878: 27.2738% ( 359) 00:14:12.598 1.878 - 1.892: 39.5607% ( 2025) 00:14:12.598 1.892 - 1.906: 75.6629% ( 5950) 00:14:12.598 1.906 - 1.920: 91.3415% ( 2584) 00:14:12.598 1.920 - 1.934: 95.6313% ( 707) 00:14:12.598 1.934 - 1.948: 97.1118% ( 244) 00:14:12.598 1.948 - 1.962: 98.0038% ( 147) 00:14:12.598 1.962 - 1.976: 98.8229% ( 135) 00:14:12.598 1.976 - 1.990: 99.1141% ( 48) 00:14:12.598 1.990 - 2.003: 99.2294% ( 19) 00:14:12.598 2.003 - 2.017: 99.2355% ( 1) 00:14:12.598 2.017 - 2.031: 99.2598% ( 4) 00:14:12.598 2.031 - 2.045: 99.2658% ( 1) 00:14:12.598 2.045 - 2.059: 99.2719% ( 1) 00:14:12.598 2.059 - 2.073: 99.2840% ( 2) 00:14:12.598 2.073 - 2.087: 99.2962% ( 2) 00:14:12.598 2.087 - 2.101: 99.3022% ( 1) 00:14:12.598 2.115 - 2.129: 99.3083% ( 1) 00:14:12.598 2.129 - 2.143: 99.3204% ( 2) 00:14:12.598 2.157 - 2.170: 99.3265% ( 1) 00:14:12.598 2.184 - 2.198: 99.3326% ( 1) 00:14:12.598 4.174 - 4.202: 99.3386% ( 1) 00:14:12.598 4.313 - 4.341: 99.3447% ( 1) 00:14:12.598 4.397 - 4.424: 99.3508% ( 1) 00:14:12.598 
4.480 - 4.508: 99.3568% ( 1) 00:14:12.598 4.563 - 4.591: 99.3629% ( 1) 00:14:12.599 4.758 - 4.786: 99.3690% ( 1) 00:14:12.599 5.009 - 5.037: 99.3750% ( 1) 00:14:12.599 5.064 - 5.092: 99.3811% ( 1) 00:14:12.599 5.510 - 5.537: 99.3872% ( 1) 00:14:12.599 6.066 - 6.094: 99.3932% ( 1) 00:14:12.599 6.150 - 6.177: 99.3993% ( 1) 00:14:12.599 6.177 - 6.205: 99.4054% ( 1) 00:14:12.599 6.317 - 6.344: 99.4114% ( 1) 00:14:12.599 6.428 - 6.456: 99.4175% ( 1) 00:14:12.599 6.483 - 6.511: 99.4236% ( 1) 00:14:12.599 6.539 - 6.567: 99.4296% ( 1) 00:14:12.599 6.567 - 6.595: 99.4357% ( 1) 00:14:12.599 6.650 - 6.678: 99.4418% ( 1) 00:14:12.599 6.734 - 6.762: 99.4478% ( 1) 00:14:12.599 7.012 - 7.040: 99.4539% ( 1) 00:14:12.599 7.123 - 7.179: 99.4600% ( 1) 00:14:12.599 7.457 - 7.513: 99.4661% ( 1) 00:14:12.599 7.680 - 7.736: 99.4721% ( 1) 00:14:12.599 8.626 - 8.682: 99.4782% ( 1) 00:14:12.599 12.410 - 12.466: 99.4843% ( 1) 00:14:12.599 36.285 - 36.508: 99.4903% ( 1) 00:14:12.599 38.957 - 39.179: 99.4964% ( 1) 00:14:12.599 142.470 - 143.360: 99.5025% ( 1) 00:14:12.599 3989.148 - 4017.642: 99.9939% ( 81) 00:14:12.599 7978.296 - 8035.283: 100.0000% ( 1) 00:14:12.599 00:14:12.599 07:10:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:14:12.599 07:10:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:14:12.599 07:10:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:14:12.599 07:10:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:14:12.599 07:10:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:14:12.599 [ 00:14:12.599 { 00:14:12.599 "nqn": 
"nqn.2014-08.org.nvmexpress.discovery", 00:14:12.599 "subtype": "Discovery", 00:14:12.599 "listen_addresses": [], 00:14:12.599 "allow_any_host": true, 00:14:12.599 "hosts": [] 00:14:12.599 }, 00:14:12.599 { 00:14:12.599 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:14:12.599 "subtype": "NVMe", 00:14:12.599 "listen_addresses": [ 00:14:12.599 { 00:14:12.599 "trtype": "VFIOUSER", 00:14:12.599 "adrfam": "IPv4", 00:14:12.599 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:14:12.599 "trsvcid": "0" 00:14:12.599 } 00:14:12.599 ], 00:14:12.599 "allow_any_host": true, 00:14:12.599 "hosts": [], 00:14:12.599 "serial_number": "SPDK1", 00:14:12.599 "model_number": "SPDK bdev Controller", 00:14:12.599 "max_namespaces": 32, 00:14:12.599 "min_cntlid": 1, 00:14:12.599 "max_cntlid": 65519, 00:14:12.599 "namespaces": [ 00:14:12.599 { 00:14:12.599 "nsid": 1, 00:14:12.599 "bdev_name": "Malloc1", 00:14:12.599 "name": "Malloc1", 00:14:12.599 "nguid": "D4C4EECB654043B6A117CE89139BDECE", 00:14:12.599 "uuid": "d4c4eecb-6540-43b6-a117-ce89139bdece" 00:14:12.599 }, 00:14:12.599 { 00:14:12.599 "nsid": 2, 00:14:12.599 "bdev_name": "Malloc3", 00:14:12.599 "name": "Malloc3", 00:14:12.599 "nguid": "E024DED3B35C442F86DDD6EB08D803FA", 00:14:12.599 "uuid": "e024ded3-b35c-442f-86dd-d6eb08d803fa" 00:14:12.599 } 00:14:12.599 ] 00:14:12.599 }, 00:14:12.599 { 00:14:12.599 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:14:12.599 "subtype": "NVMe", 00:14:12.599 "listen_addresses": [ 00:14:12.599 { 00:14:12.599 "trtype": "VFIOUSER", 00:14:12.599 "adrfam": "IPv4", 00:14:12.599 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:14:12.599 "trsvcid": "0" 00:14:12.599 } 00:14:12.599 ], 00:14:12.599 "allow_any_host": true, 00:14:12.599 "hosts": [], 00:14:12.599 "serial_number": "SPDK2", 00:14:12.599 "model_number": "SPDK bdev Controller", 00:14:12.599 "max_namespaces": 32, 00:14:12.599 "min_cntlid": 1, 00:14:12.599 "max_cntlid": 65519, 00:14:12.599 "namespaces": [ 00:14:12.599 { 00:14:12.599 "nsid": 1, 00:14:12.599 
"bdev_name": "Malloc2", 00:14:12.599 "name": "Malloc2", 00:14:12.599 "nguid": "8C117A1DA5FE405285523C94AFFB9414", 00:14:12.599 "uuid": "8c117a1d-a5fe-4052-8552-3c94affb9414" 00:14:12.599 } 00:14:12.599 ] 00:14:12.599 } 00:14:12.599 ] 00:14:12.599 07:10:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:14:12.599 07:10:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:14:12.599 07:10:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=1160153 00:14:12.599 07:10:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:14:12.599 07:10:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1267 -- # local i=0 00:14:12.599 07:10:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:14:12.599 07:10:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1274 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:14:12.599 07:10:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1278 -- # return 0 00:14:12.599 07:10:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:14:12.599 07:10:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:14:12.859 [2024-11-20 07:10:17.164375] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:12.859 Malloc4 00:14:12.859 07:10:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:14:12.859 [2024-11-20 07:10:17.384066] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:12.859 07:10:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:14:13.118 Asynchronous Event Request test 00:14:13.118 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:14:13.118 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:14:13.118 Registering asynchronous event callbacks... 00:14:13.118 Starting namespace attribute notice tests for all controllers... 00:14:13.118 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:14:13.118 aer_cb - Changed Namespace 00:14:13.118 Cleaning up... 
00:14:13.118 [ 00:14:13.118 { 00:14:13.118 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:13.118 "subtype": "Discovery", 00:14:13.118 "listen_addresses": [], 00:14:13.118 "allow_any_host": true, 00:14:13.118 "hosts": [] 00:14:13.118 }, 00:14:13.118 { 00:14:13.118 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:14:13.118 "subtype": "NVMe", 00:14:13.118 "listen_addresses": [ 00:14:13.118 { 00:14:13.118 "trtype": "VFIOUSER", 00:14:13.118 "adrfam": "IPv4", 00:14:13.118 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:14:13.118 "trsvcid": "0" 00:14:13.118 } 00:14:13.118 ], 00:14:13.118 "allow_any_host": true, 00:14:13.118 "hosts": [], 00:14:13.118 "serial_number": "SPDK1", 00:14:13.118 "model_number": "SPDK bdev Controller", 00:14:13.118 "max_namespaces": 32, 00:14:13.118 "min_cntlid": 1, 00:14:13.118 "max_cntlid": 65519, 00:14:13.118 "namespaces": [ 00:14:13.118 { 00:14:13.118 "nsid": 1, 00:14:13.118 "bdev_name": "Malloc1", 00:14:13.118 "name": "Malloc1", 00:14:13.118 "nguid": "D4C4EECB654043B6A117CE89139BDECE", 00:14:13.118 "uuid": "d4c4eecb-6540-43b6-a117-ce89139bdece" 00:14:13.118 }, 00:14:13.118 { 00:14:13.118 "nsid": 2, 00:14:13.118 "bdev_name": "Malloc3", 00:14:13.118 "name": "Malloc3", 00:14:13.118 "nguid": "E024DED3B35C442F86DDD6EB08D803FA", 00:14:13.118 "uuid": "e024ded3-b35c-442f-86dd-d6eb08d803fa" 00:14:13.118 } 00:14:13.118 ] 00:14:13.118 }, 00:14:13.118 { 00:14:13.118 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:14:13.118 "subtype": "NVMe", 00:14:13.118 "listen_addresses": [ 00:14:13.118 { 00:14:13.118 "trtype": "VFIOUSER", 00:14:13.118 "adrfam": "IPv4", 00:14:13.118 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:14:13.118 "trsvcid": "0" 00:14:13.118 } 00:14:13.118 ], 00:14:13.118 "allow_any_host": true, 00:14:13.118 "hosts": [], 00:14:13.118 "serial_number": "SPDK2", 00:14:13.118 "model_number": "SPDK bdev Controller", 00:14:13.118 "max_namespaces": 32, 00:14:13.118 "min_cntlid": 1, 00:14:13.118 "max_cntlid": 65519, 00:14:13.118 "namespaces": [ 
00:14:13.118 { 00:14:13.118 "nsid": 1, 00:14:13.118 "bdev_name": "Malloc2", 00:14:13.118 "name": "Malloc2", 00:14:13.118 "nguid": "8C117A1DA5FE405285523C94AFFB9414", 00:14:13.118 "uuid": "8c117a1d-a5fe-4052-8552-3c94affb9414" 00:14:13.118 }, 00:14:13.118 { 00:14:13.118 "nsid": 2, 00:14:13.118 "bdev_name": "Malloc4", 00:14:13.118 "name": "Malloc4", 00:14:13.118 "nguid": "5D3284C7F41941598C33A6C59BE205A2", 00:14:13.118 "uuid": "5d3284c7-f419-4159-8c33-a6c59be205a2" 00:14:13.118 } 00:14:13.118 ] 00:14:13.118 } 00:14:13.118 ] 00:14:13.118 07:10:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 1160153 00:14:13.118 07:10:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:14:13.118 07:10:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 1152532 00:14:13.118 07:10:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@952 -- # '[' -z 1152532 ']' 00:14:13.118 07:10:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # kill -0 1152532 00:14:13.118 07:10:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@957 -- # uname 00:14:13.118 07:10:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:13.118 07:10:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1152532 00:14:13.118 07:10:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:13.118 07:10:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:13.118 07:10:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1152532' 00:14:13.118 killing process with pid 1152532 00:14:13.118 07:10:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@971 -- # kill 1152532 00:14:13.118 07:10:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@976 -- # wait 1152532 00:14:13.378 07:10:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:14:13.378 07:10:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:14:13.378 07:10:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:14:13.378 07:10:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:14:13.378 07:10:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:14:13.378 07:10:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1160354 00:14:13.378 07:10:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1160354' 00:14:13.378 Process pid: 1160354 00:14:13.378 07:10:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:14:13.378 07:10:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:13.378 07:10:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1160354 00:14:13.378 07:10:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@833 -- # '[' -z 1160354 ']' 00:14:13.378 07:10:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:13.378 07:10:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:13.378 
07:10:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:13.378 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:13.378 07:10:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:13.378 07:10:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:14:13.660 [2024-11-20 07:10:17.950039] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:14:13.660 [2024-11-20 07:10:17.950924] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 00:14:13.660 [2024-11-20 07:10:17.950965] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:13.660 [2024-11-20 07:10:18.026547] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:13.660 [2024-11-20 07:10:18.066904] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:13.660 [2024-11-20 07:10:18.066942] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:13.660 [2024-11-20 07:10:18.066953] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:13.660 [2024-11-20 07:10:18.066960] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:13.660 [2024-11-20 07:10:18.066965] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:13.660 [2024-11-20 07:10:18.068396] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:13.660 [2024-11-20 07:10:18.068503] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:13.660 [2024-11-20 07:10:18.068612] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:13.660 [2024-11-20 07:10:18.068613] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:13.660 [2024-11-20 07:10:18.137091] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:14:13.660 [2024-11-20 07:10:18.137852] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:14:13.660 [2024-11-20 07:10:18.138003] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:14:13.660 [2024-11-20 07:10:18.138295] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:14:13.660 [2024-11-20 07:10:18.138360] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:14:13.660 07:10:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:13.661 07:10:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@866 -- # return 0 00:14:13.661 07:10:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:14:14.650 07:10:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:14:14.909 07:10:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:14:14.909 07:10:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:14:14.909 07:10:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:14.909 07:10:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:14:14.909 07:10:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:15.168 Malloc1 00:14:15.168 07:10:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:14:15.426 07:10:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:14:15.686 07:10:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 
-s 0 00:14:15.686 07:10:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:15.686 07:10:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:14:15.686 07:10:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:15.945 Malloc2 00:14:15.945 07:10:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:14:16.203 07:10:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:14:16.462 07:10:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:14:16.721 07:10:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:14:16.721 07:10:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 1160354 00:14:16.721 07:10:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@952 -- # '[' -z 1160354 ']' 00:14:16.721 07:10:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # kill -0 1160354 00:14:16.721 07:10:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@957 -- # uname 00:14:16.721 07:10:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:16.721 07:10:21 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1160354 00:14:16.721 07:10:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:16.721 07:10:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:16.721 07:10:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1160354' 00:14:16.721 killing process with pid 1160354 00:14:16.721 07:10:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@971 -- # kill 1160354 00:14:16.721 07:10:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@976 -- # wait 1160354 00:14:16.981 07:10:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:14:16.981 07:10:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:14:16.981 00:14:16.981 real 0m50.907s 00:14:16.981 user 3m16.765s 00:14:16.981 sys 0m3.375s 00:14:16.981 07:10:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:16.981 07:10:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:14:16.981 ************************************ 00:14:16.981 END TEST nvmf_vfio_user 00:14:16.981 ************************************ 00:14:16.981 07:10:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:14:16.981 07:10:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:14:16.981 07:10:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:16.981 07:10:21 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:14:16.981 ************************************ 00:14:16.981 START TEST nvmf_vfio_user_nvme_compliance 00:14:16.981 ************************************ 00:14:16.981 07:10:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:14:16.981 * Looking for test storage... 00:14:16.981 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:14:16.981 07:10:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:14:16.981 07:10:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1691 -- # lcov --version 00:14:16.981 07:10:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:14:17.241 07:10:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:14:17.241 07:10:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:17.241 07:10:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:17.241 07:10:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:17.241 07:10:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:14:17.241 07:10:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:14:17.241 07:10:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:14:17.241 07:10:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:14:17.241 07:10:21 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:14:17.241 07:10:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:14:17.241 07:10:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:14:17.241 07:10:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:17.241 07:10:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:14:17.241 07:10:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:14:17.241 07:10:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:17.241 07:10:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:17.241 07:10:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:14:17.241 07:10:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:14:17.241 07:10:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:17.241 07:10:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:14:17.241 07:10:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:14:17.241 07:10:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:14:17.241 07:10:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:14:17.241 07:10:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:17.241 07:10:21 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:14:17.241 07:10:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:14:17.241 07:10:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:17.241 07:10:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:17.241 07:10:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:14:17.241 07:10:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:17.241 07:10:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:14:17.241 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:17.241 --rc genhtml_branch_coverage=1 00:14:17.241 --rc genhtml_function_coverage=1 00:14:17.241 --rc genhtml_legend=1 00:14:17.241 --rc geninfo_all_blocks=1 00:14:17.241 --rc geninfo_unexecuted_blocks=1 00:14:17.241 00:14:17.241 ' 00:14:17.241 07:10:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:14:17.241 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:17.241 --rc genhtml_branch_coverage=1 00:14:17.241 --rc genhtml_function_coverage=1 00:14:17.241 --rc genhtml_legend=1 00:14:17.241 --rc geninfo_all_blocks=1 00:14:17.241 --rc geninfo_unexecuted_blocks=1 00:14:17.241 00:14:17.241 ' 00:14:17.241 07:10:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:14:17.241 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:17.241 --rc genhtml_branch_coverage=1 00:14:17.241 --rc genhtml_function_coverage=1 00:14:17.241 --rc 
genhtml_legend=1 00:14:17.241 --rc geninfo_all_blocks=1 00:14:17.241 --rc geninfo_unexecuted_blocks=1 00:14:17.241 00:14:17.241 ' 00:14:17.241 07:10:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:14:17.241 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:17.241 --rc genhtml_branch_coverage=1 00:14:17.241 --rc genhtml_function_coverage=1 00:14:17.241 --rc genhtml_legend=1 00:14:17.241 --rc geninfo_all_blocks=1 00:14:17.241 --rc geninfo_unexecuted_blocks=1 00:14:17.241 00:14:17.241 ' 00:14:17.242 07:10:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:17.242 07:10:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:14:17.242 07:10:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:17.242 07:10:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:17.242 07:10:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:17.242 07:10:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:17.242 07:10:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:17.242 07:10:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:17.242 07:10:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:17.242 07:10:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:17.242 07:10:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:17.242 07:10:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:17.242 07:10:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:17.242 07:10:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:14:17.242 07:10:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:17.242 07:10:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:17.242 07:10:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:17.242 07:10:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:17.242 07:10:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:17.242 07:10:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:14:17.242 07:10:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:17.242 07:10:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:17.242 07:10:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:17.242 07:10:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:17.242 07:10:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:17.242 07:10:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:17.242 07:10:21 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:14:17.242 07:10:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:17.242 07:10:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0 00:14:17.242 07:10:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:17.242 07:10:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:17.242 07:10:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:17.242 07:10:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:17.242 07:10:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:17.242 07:10:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:17.242 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:17.242 07:10:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:17.242 07:10:21 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:17.242 07:10:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:17.242 07:10:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:17.242 07:10:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:17.242 07:10:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:14:17.242 07:10:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:14:17.242 07:10:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:14:17.242 07:10:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=1160945 00:14:17.242 07:10:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 1160945' 00:14:17.242 Process pid: 1160945 00:14:17.242 07:10:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:17.242 07:10:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 1160945 00:14:17.242 07:10:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:14:17.242 07:10:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@833 -- # '[' -z 1160945 ']' 00:14:17.242 07:10:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:17.242 07:10:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:17.242 07:10:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:17.242 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:17.242 07:10:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:17.242 07:10:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:17.242 [2024-11-20 07:10:21.639788] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 00:14:17.242 [2024-11-20 07:10:21.639840] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:17.242 [2024-11-20 07:10:21.713722] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:17.242 [2024-11-20 07:10:21.755877] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:17.243 [2024-11-20 07:10:21.755915] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:17.243 [2024-11-20 07:10:21.755923] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:17.243 [2024-11-20 07:10:21.755929] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:17.243 [2024-11-20 07:10:21.755933] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:17.243 [2024-11-20 07:10:21.757255] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:17.243 [2024-11-20 07:10:21.757359] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:17.243 [2024-11-20 07:10:21.757360] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:17.502 07:10:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:17.502 07:10:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@866 -- # return 0 00:14:17.502 07:10:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:14:18.439 07:10:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:14:18.439 07:10:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:14:18.439 07:10:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:14:18.439 07:10:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.439 07:10:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:18.439 07:10:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.439 07:10:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:14:18.439 07:10:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:14:18.439 07:10:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.439 07:10:22 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:18.439 malloc0 00:14:18.439 07:10:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.439 07:10:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:14:18.439 07:10:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.439 07:10:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:18.439 07:10:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.439 07:10:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:14:18.439 07:10:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.439 07:10:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:18.439 07:10:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.439 07:10:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:14:18.439 07:10:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.439 07:10:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:18.439 07:10:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:14:18.439 07:10:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:14:18.699 00:14:18.699 00:14:18.699 CUnit - A unit testing framework for C - Version 2.1-3 00:14:18.699 http://cunit.sourceforge.net/ 00:14:18.699 00:14:18.699 00:14:18.699 Suite: nvme_compliance 00:14:18.699 Test: admin_identify_ctrlr_verify_dptr ...[2024-11-20 07:10:23.090406] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:18.699 [2024-11-20 07:10:23.091738] vfio_user.c: 800:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:14:18.699 [2024-11-20 07:10:23.091754] vfio_user.c:5503:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:14:18.699 [2024-11-20 07:10:23.091760] vfio_user.c:5596:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:14:18.699 [2024-11-20 07:10:23.093423] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:18.699 passed 00:14:18.699 Test: admin_identify_ctrlr_verify_fused ...[2024-11-20 07:10:23.172964] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:18.699 [2024-11-20 07:10:23.175982] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:18.699 passed 00:14:18.958 Test: admin_identify_ns ...[2024-11-20 07:10:23.253976] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:18.958 [2024-11-20 07:10:23.315959] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:14:18.958 [2024-11-20 07:10:23.323957] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:14:18.958 [2024-11-20 07:10:23.345054] vfio_user.c:2794:disable_ctrlr: *NOTICE*: 
/var/run/vfio-user: disabling controller 00:14:18.958 passed 00:14:18.958 Test: admin_get_features_mandatory_features ...[2024-11-20 07:10:23.421959] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:18.958 [2024-11-20 07:10:23.424976] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:18.958 passed 00:14:18.958 Test: admin_get_features_optional_features ...[2024-11-20 07:10:23.502516] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:18.958 [2024-11-20 07:10:23.505539] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:19.218 passed 00:14:19.218 Test: admin_set_features_number_of_queues ...[2024-11-20 07:10:23.584403] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:19.218 [2024-11-20 07:10:23.690034] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:19.218 passed 00:14:19.218 Test: admin_get_log_page_mandatory_logs ...[2024-11-20 07:10:23.763939] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:19.477 [2024-11-20 07:10:23.769975] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:19.477 passed 00:14:19.477 Test: admin_get_log_page_with_lpo ...[2024-11-20 07:10:23.845369] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:19.477 [2024-11-20 07:10:23.912965] ctrlr.c:2697:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:14:19.477 [2024-11-20 07:10:23.926000] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:19.477 passed 00:14:19.477 Test: fabric_property_get ...[2024-11-20 07:10:24.001761] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:19.477 [2024-11-20 07:10:24.002999] vfio_user.c:5596:handle_cmd_req: *ERROR*: 
/var/run/vfio-user: process NVMe command opc 0x7f failed 00:14:19.477 [2024-11-20 07:10:24.007792] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:19.737 passed 00:14:19.737 Test: admin_delete_io_sq_use_admin_qid ...[2024-11-20 07:10:24.085315] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:19.737 [2024-11-20 07:10:24.086541] vfio_user.c:2305:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:14:19.737 [2024-11-20 07:10:24.088333] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:19.737 passed 00:14:19.737 Test: admin_delete_io_sq_delete_sq_twice ...[2024-11-20 07:10:24.166228] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:19.737 [2024-11-20 07:10:24.250951] vfio_user.c:2305:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:14:19.737 [2024-11-20 07:10:24.266953] vfio_user.c:2305:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:14:19.737 [2024-11-20 07:10:24.272026] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:19.996 passed 00:14:19.996 Test: admin_delete_io_cq_use_admin_qid ...[2024-11-20 07:10:24.346065] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:19.996 [2024-11-20 07:10:24.347300] vfio_user.c:2305:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:14:19.996 [2024-11-20 07:10:24.349091] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:19.996 passed 00:14:19.996 Test: admin_delete_io_cq_delete_cq_first ...[2024-11-20 07:10:24.426838] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:19.996 [2024-11-20 07:10:24.501964] vfio_user.c:2315:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:14:19.996 [2024-11-20 
07:10:24.525952] vfio_user.c:2305:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:14:19.996 [2024-11-20 07:10:24.531025] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:20.256 passed 00:14:20.256 Test: admin_create_io_cq_verify_iv_pc ...[2024-11-20 07:10:24.608900] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:20.256 [2024-11-20 07:10:24.610138] vfio_user.c:2154:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:14:20.256 [2024-11-20 07:10:24.610165] vfio_user.c:2148:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:14:20.256 [2024-11-20 07:10:24.613932] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:20.256 passed 00:14:20.256 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-11-20 07:10:24.692321] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:20.256 [2024-11-20 07:10:24.784961] vfio_user.c:2236:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:14:20.256 [2024-11-20 07:10:24.792958] vfio_user.c:2236:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:14:20.256 [2024-11-20 07:10:24.800961] vfio_user.c:2034:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:14:20.514 [2024-11-20 07:10:24.808960] vfio_user.c:2034:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:14:20.514 [2024-11-20 07:10:24.838054] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:20.514 passed 00:14:20.514 Test: admin_create_io_sq_verify_pc ...[2024-11-20 07:10:24.912090] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:20.514 [2024-11-20 07:10:24.930963] vfio_user.c:2047:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:14:20.514 [2024-11-20 07:10:24.948260] 
vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:20.514 passed 00:14:20.514 Test: admin_create_io_qp_max_qps ...[2024-11-20 07:10:25.023774] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:21.890 [2024-11-20 07:10:26.124959] nvme_ctrlr.c:5523:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user, 0] No free I/O queue IDs 00:14:22.149 [2024-11-20 07:10:26.506930] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:22.149 passed 00:14:22.149 Test: admin_create_io_sq_shared_cq ...[2024-11-20 07:10:26.584056] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:22.408 [2024-11-20 07:10:26.717956] vfio_user.c:2315:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:14:22.408 [2024-11-20 07:10:26.755017] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:22.408 passed 00:14:22.408 00:14:22.408 Run Summary: Type Total Ran Passed Failed Inactive 00:14:22.408 suites 1 1 n/a 0 0 00:14:22.408 tests 18 18 18 0 0 00:14:22.408 asserts 360 360 360 0 n/a 00:14:22.408 00:14:22.408 Elapsed time = 1.505 seconds 00:14:22.408 07:10:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 1160945 00:14:22.408 07:10:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@952 -- # '[' -z 1160945 ']' 00:14:22.408 07:10:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # kill -0 1160945 00:14:22.408 07:10:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@957 -- # uname 00:14:22.408 07:10:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:22.408 07:10:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1160945 00:14:22.408 07:10:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:22.408 07:10:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:22.408 07:10:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1160945' 00:14:22.408 killing process with pid 1160945 00:14:22.408 07:10:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@971 -- # kill 1160945 00:14:22.408 07:10:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@976 -- # wait 1160945 00:14:22.668 07:10:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:14:22.668 07:10:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:14:22.668 00:14:22.668 real 0m5.655s 00:14:22.668 user 0m15.811s 00:14:22.668 sys 0m0.520s 00:14:22.668 07:10:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:22.668 07:10:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:22.668 ************************************ 00:14:22.668 END TEST nvmf_vfio_user_nvme_compliance 00:14:22.668 ************************************ 00:14:22.668 07:10:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:14:22.668 07:10:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:14:22.668 07:10:27 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1109 -- # xtrace_disable 00:14:22.668 07:10:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:22.668 ************************************ 00:14:22.668 START TEST nvmf_vfio_user_fuzz 00:14:22.668 ************************************ 00:14:22.668 07:10:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:14:22.668 * Looking for test storage... 00:14:22.668 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:22.668 07:10:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:14:22.668 07:10:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1691 -- # lcov --version 00:14:22.668 07:10:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:14:22.928 07:10:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:14:22.928 07:10:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:22.928 07:10:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:22.928 07:10:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:22.928 07:10:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:14:22.928 07:10:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:14:22.928 07:10:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:14:22.928 07:10:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:14:22.928 07:10:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz 
-- scripts/common.sh@338 -- # local 'op=<' 00:14:22.928 07:10:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:14:22.928 07:10:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:14:22.928 07:10:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:22.928 07:10:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:14:22.928 07:10:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:14:22.928 07:10:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:22.928 07:10:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:22.928 07:10:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:14:22.928 07:10:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:14:22.928 07:10:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:22.928 07:10:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:14:22.928 07:10:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:14:22.928 07:10:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:14:22.928 07:10:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:14:22.928 07:10:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:22.928 07:10:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:14:22.928 07:10:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:14:22.928 07:10:27 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:22.928 07:10:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:22.928 07:10:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:14:22.928 07:10:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:22.928 07:10:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:14:22.928 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:22.928 --rc genhtml_branch_coverage=1 00:14:22.928 --rc genhtml_function_coverage=1 00:14:22.928 --rc genhtml_legend=1 00:14:22.928 --rc geninfo_all_blocks=1 00:14:22.928 --rc geninfo_unexecuted_blocks=1 00:14:22.928 00:14:22.928 ' 00:14:22.928 07:10:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:14:22.928 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:22.928 --rc genhtml_branch_coverage=1 00:14:22.928 --rc genhtml_function_coverage=1 00:14:22.928 --rc genhtml_legend=1 00:14:22.928 --rc geninfo_all_blocks=1 00:14:22.928 --rc geninfo_unexecuted_blocks=1 00:14:22.928 00:14:22.928 ' 00:14:22.928 07:10:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:14:22.928 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:22.928 --rc genhtml_branch_coverage=1 00:14:22.928 --rc genhtml_function_coverage=1 00:14:22.928 --rc genhtml_legend=1 00:14:22.928 --rc geninfo_all_blocks=1 00:14:22.928 --rc geninfo_unexecuted_blocks=1 00:14:22.928 00:14:22.928 ' 00:14:22.928 07:10:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:14:22.928 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:14:22.928 --rc genhtml_branch_coverage=1 00:14:22.928 --rc genhtml_function_coverage=1 00:14:22.928 --rc genhtml_legend=1 00:14:22.928 --rc geninfo_all_blocks=1 00:14:22.928 --rc geninfo_unexecuted_blocks=1 00:14:22.928 00:14:22.928 ' 00:14:22.928 07:10:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:22.928 07:10:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:14:22.928 07:10:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:22.928 07:10:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:22.928 07:10:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:22.928 07:10:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:22.928 07:10:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:22.928 07:10:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:22.928 07:10:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:22.928 07:10:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:22.928 07:10:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:22.928 07:10:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:22.928 07:10:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:22.928 07:10:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # 
NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:14:22.928 07:10:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:22.928 07:10:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:22.928 07:10:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:22.928 07:10:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:22.928 07:10:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:22.929 07:10:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:14:22.929 07:10:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:22.929 07:10:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:22.929 07:10:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:22.929 07:10:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:22.929 07:10:27 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:22.929 07:10:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:22.929 07:10:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:14:22.929 07:10:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:22.929 07:10:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0 00:14:22.929 07:10:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:22.929 07:10:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:22.929 07:10:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:22.929 07:10:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:22.929 07:10:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:22.929 07:10:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:22.929 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:22.929 07:10:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:22.929 07:10:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:22.929 07:10:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:22.929 07:10:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # 
MALLOC_BDEV_SIZE=64 00:14:22.929 07:10:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:14:22.929 07:10:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:14:22.929 07:10:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:14:22.929 07:10:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:14:22.929 07:10:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:14:22.929 07:10:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:14:22.929 07:10:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=1161929 00:14:22.929 07:10:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 1161929' 00:14:22.929 Process pid: 1161929 00:14:22.929 07:10:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:22.929 07:10:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 1161929 00:14:22.929 07:10:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:14:22.929 07:10:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@833 -- # '[' -z 1161929 ']' 00:14:22.929 07:10:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:22.929 07:10:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:22.929 07:10:27 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:22.929 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:22.929 07:10:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:22.929 07:10:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:23.189 07:10:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:23.189 07:10:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@866 -- # return 0 00:14:23.189 07:10:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:14:24.126 07:10:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:14:24.126 07:10:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.126 07:10:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:24.126 07:10:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.126 07:10:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:14:24.126 07:10:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:14:24.126 07:10:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.126 07:10:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:24.126 malloc0 00:14:24.126 07:10:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.126 07:10:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:14:24.126 07:10:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.126 07:10:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:24.126 07:10:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.126 07:10:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:14:24.126 07:10:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.126 07:10:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:24.126 07:10:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.126 07:10:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:14:24.126 07:10:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.126 07:10:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:24.126 07:10:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.126 07:10:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:14:24.126 07:10:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:14:56.208 Fuzzing completed. Shutting down the fuzz application 00:14:56.208 00:14:56.208 Dumping successful admin opcodes: 00:14:56.208 8, 9, 10, 24, 00:14:56.208 Dumping successful io opcodes: 00:14:56.208 0, 00:14:56.208 NS: 0x20000081ef00 I/O qp, Total commands completed: 1035650, total successful commands: 4085, random_seed: 1737690816 00:14:56.208 NS: 0x20000081ef00 admin qp, Total commands completed: 257365, total successful commands: 2076, random_seed: 1546237312 00:14:56.208 07:10:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:14:56.208 07:10:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.208 07:10:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:56.208 07:10:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.208 07:10:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 1161929 00:14:56.208 07:10:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # '[' -z 1161929 ']' 00:14:56.208 07:10:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # kill -0 1161929 00:14:56.208 07:10:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@957 -- # uname 00:14:56.208 07:10:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:56.208 07:10:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1161929 00:14:56.208 07:10:59 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:56.208 07:10:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:56.208 07:10:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1161929' 00:14:56.208 killing process with pid 1161929 00:14:56.208 07:10:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@971 -- # kill 1161929 00:14:56.208 07:10:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@976 -- # wait 1161929 00:14:56.208 07:10:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:14:56.208 07:10:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:14:56.208 00:14:56.209 real 0m32.237s 00:14:56.209 user 0m29.904s 00:14:56.209 sys 0m32.210s 00:14:56.209 07:10:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:56.209 07:10:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:56.209 ************************************ 00:14:56.209 END TEST nvmf_vfio_user_fuzz 00:14:56.209 ************************************ 00:14:56.209 07:10:59 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:14:56.209 07:10:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:14:56.209 07:10:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 
00:14:56.209 07:10:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:56.209 ************************************ 00:14:56.209 START TEST nvmf_auth_target 00:14:56.209 ************************************ 00:14:56.209 07:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:14:56.209 * Looking for test storage... 00:14:56.209 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:56.209 07:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:14:56.209 07:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1691 -- # lcov --version 00:14:56.209 07:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:14:56.209 07:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:14:56.209 07:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:56.209 07:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:56.209 07:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:56.209 07:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:14:56.209 07:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:14:56.209 07:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:14:56.209 07:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:14:56.209 07:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:14:56.209 07:10:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:14:56.209 07:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:14:56.209 07:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:56.209 07:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:14:56.209 07:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:14:56.209 07:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:56.209 07:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:56.209 07:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:14:56.209 07:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:14:56.209 07:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:56.209 07:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:14:56.209 07:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:14:56.209 07:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:14:56.209 07:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:14:56.209 07:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:56.209 07:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:14:56.209 07:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:14:56.209 07:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:56.209 07:10:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:56.209 07:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:14:56.209 07:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:56.209 07:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:14:56.209 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:56.209 --rc genhtml_branch_coverage=1 00:14:56.209 --rc genhtml_function_coverage=1 00:14:56.209 --rc genhtml_legend=1 00:14:56.209 --rc geninfo_all_blocks=1 00:14:56.209 --rc geninfo_unexecuted_blocks=1 00:14:56.209 00:14:56.209 ' 00:14:56.209 07:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:14:56.209 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:56.209 --rc genhtml_branch_coverage=1 00:14:56.209 --rc genhtml_function_coverage=1 00:14:56.209 --rc genhtml_legend=1 00:14:56.209 --rc geninfo_all_blocks=1 00:14:56.209 --rc geninfo_unexecuted_blocks=1 00:14:56.209 00:14:56.209 ' 00:14:56.209 07:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:14:56.209 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:56.209 --rc genhtml_branch_coverage=1 00:14:56.209 --rc genhtml_function_coverage=1 00:14:56.209 --rc genhtml_legend=1 00:14:56.209 --rc geninfo_all_blocks=1 00:14:56.209 --rc geninfo_unexecuted_blocks=1 00:14:56.209 00:14:56.209 ' 00:14:56.209 07:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:14:56.209 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:56.209 --rc genhtml_branch_coverage=1 00:14:56.209 --rc genhtml_function_coverage=1 00:14:56.209 --rc genhtml_legend=1 00:14:56.209 
--rc geninfo_all_blocks=1 00:14:56.209 --rc geninfo_unexecuted_blocks=1 00:14:56.209 00:14:56.209 ' 00:14:56.209 07:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:56.209 07:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:14:56.209 07:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:56.209 07:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:56.209 07:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:56.209 07:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:56.209 07:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:56.209 07:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:56.209 07:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:56.209 07:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:56.209 07:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:56.209 07:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:56.209 07:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:56.209 07:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:14:56.209 07:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:56.209 
07:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:56.209 07:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:56.209 07:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:56.209 07:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:56.209 07:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:14:56.209 07:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:56.209 07:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:56.209 07:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:56.209 07:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:56.209 07:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:56.209 07:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:56.209 07:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:14:56.210 07:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:56.210 07:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:14:56.210 07:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:56.210 07:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:56.210 07:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:56.210 07:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:56.210 07:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:56.210 07:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:56.210 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:56.210 07:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:56.210 07:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:56.210 07:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:56.210 07:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:14:56.210 07:10:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:14:56.210 07:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:14:56.210 07:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:56.210 07:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:14:56.210 07:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:14:56.210 07:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:14:56.210 07:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:14:56.210 07:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:56.210 07:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:56.210 07:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:56.210 07:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:56.210 07:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:56.210 07:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:56.210 07:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:56.210 07:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:56.210 07:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:56.210 07:10:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:56.210 07:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:14:56.210 07:10:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:01.487 07:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:01.487 07:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:15:01.488 07:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:01.488 07:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:01.488 07:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:01.488 07:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:01.488 07:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:01.488 07:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:15:01.488 07:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:01.488 07:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:15:01.488 07:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:15:01.488 07:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:15:01.488 07:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:15:01.488 07:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:15:01.488 07:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:15:01.488 07:11:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:01.488 07:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:01.488 07:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:01.488 07:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:01.488 07:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:01.488 07:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:01.488 07:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:01.488 07:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:01.488 07:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:01.488 07:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:01.488 07:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:01.488 07:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:01.488 07:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:15:01.488 07:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:15:01.488 07:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:15:01.488 07:11:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:15:01.488 07:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:15:01.488 07:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:15:01.488 07:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:01.488 07:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:15:01.488 Found 0000:86:00.0 (0x8086 - 0x159b) 00:15:01.488 07:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:01.488 07:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:01.488 07:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:01.488 07:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:01.488 07:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:01.488 07:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:01.488 07:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:15:01.488 Found 0000:86:00.1 (0x8086 - 0x159b) 00:15:01.488 07:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:01.488 07:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:01.488 07:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:01.488 07:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:01.488 
07:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:01.488 07:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:15:01.488 07:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:15:01.488 07:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:15:01.488 07:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:01.488 07:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:01.488 07:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:01.488 07:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:01.488 07:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:01.488 07:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:01.488 07:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:01.488 07:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:15:01.488 Found net devices under 0000:86:00.0: cvl_0_0 00:15:01.488 07:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:01.488 07:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:01.488 07:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:01.488 07:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:01.488 
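The `nvmf/common.sh@410-429` loop traced above resolves each discovered PCI address to its kernel network interface by globbing sysfs and stripping the directory prefix with the `${arr[@]##*/}` expansion. A minimal self-contained sketch of that lookup, using a mocked sysfs tree under `mktemp` so it runs without real hardware (the real path is `/sys/bus/pci/devices/$pci/net/`; the fake layout is an assumption that mirrors the kernel's):

```shell
#!/usr/bin/env bash
# Sketch of the pci -> net-device lookup from nvmf/common.sh@410-429.
# A fake sysfs tree stands in for /sys/bus/pci/devices so the sketch
# runs anywhere; interface names are the cvl_0_* ones from the log.
set -euo pipefail

sysfs=$(mktemp -d)
mkdir -p "$sysfs/0000:86:00.0/net/cvl_0_0" "$sysfs/0000:86:00.1/net/cvl_0_1"

pci_devs=("0000:86:00.0" "0000:86:00.1")
net_devs=()
for pci in "${pci_devs[@]}"; do
    # Glob the interface directories under the device node, then strip
    # everything up to the last '/' -- the same ${arr[@]##*/} expansion
    # the trace shows at nvmf/common.sh@427.
    pci_net_devs=("$sysfs/$pci/net/"*)
    pci_net_devs=("${pci_net_devs[@]##*/}")
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
    net_devs+=("${pci_net_devs[@]}")
done

rm -rf "$sysfs"
```

The `(( 1 == 0 ))` checks in the trace are the guard for the case where the glob matched nothing (no bound interface under the device).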
07:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:01.488 07:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:01.488 07:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:01.488 07:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:01.488 07:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:15:01.488 Found net devices under 0000:86:00.1: cvl_0_1 00:15:01.488 07:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:01.488 07:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:15:01.488 07:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # is_hw=yes 00:15:01.488 07:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:15:01.488 07:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:15:01.488 07:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:15:01.488 07:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:01.488 07:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:01.488 07:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:01.488 07:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:01.488 07:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:15:01.488 07:11:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:01.488 07:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:01.488 07:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:15:01.488 07:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:15:01.488 07:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:01.488 07:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:01.488 07:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:15:01.488 07:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:15:01.488 07:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:15:01.489 07:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:01.489 07:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:01.489 07:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:01.489 07:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:15:01.489 07:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:01.489 07:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:01.489 07:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:01.489 07:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:15:01.489 07:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:15:01.489 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:01.489 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.456 ms 00:15:01.489 00:15:01.489 --- 10.0.0.2 ping statistics --- 00:15:01.489 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:01.489 rtt min/avg/max/mdev = 0.456/0.456/0.456/0.000 ms 00:15:01.489 07:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:01.489 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:01.489 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.152 ms 00:15:01.489 00:15:01.489 --- 10.0.0.1 ping statistics --- 00:15:01.489 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:01.489 rtt min/avg/max/mdev = 0.152/0.152/0.152/0.000 ms 00:15:01.489 07:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:01.489 07:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # return 0 00:15:01.489 07:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:01.489 07:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:01.489 07:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:01.489 07:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:01.489 07:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:01.489 07:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:01.489 07:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:01.489 07:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:15:01.489 07:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:01.489 07:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:01.489 07:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:01.489 07:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=1170392 00:15:01.489 07:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:15:01.489 07:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 1170392 00:15:01.489 07:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 1170392 ']' 00:15:01.489 07:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:01.489 07:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:01.489 07:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:15:01.489 07:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:01.489 07:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:01.489 07:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:01.489 07:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:15:01.489 07:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:01.489 07:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:01.489 07:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:01.489 07:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:01.489 07:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=1170458 00:15:01.489 07:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:15:01.489 07:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:15:01.489 07:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:15:01.489 07:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:01.489 07:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:01.489 07:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:01.489 07:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
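The `nvmf_tcp_init` sequence traced earlier (nvmf/common.sh@250-291) carves the two ports into a point-to-point rig: one interface moves into a private network namespace as the target side (10.0.0.2), its peer stays in the root namespace as the initiator (10.0.0.1), an iptables rule opens TCP/4420, and a ping in each direction verifies the link. Those commands need root and physical NICs, so this sketch only assembles and prints the command plan; interface names, addresses, and the namespace name are taken from the log:

```shell
# Dry-run sketch of the netns setup performed by nvmf_tcp_init.
# run() echoes instead of executing, because the real commands require
# root privileges and the cvl_0_* interfaces from the test rig.
target_if=cvl_0_0; initiator_if=cvl_0_1
target_ip=10.0.0.2; initiator_ip=10.0.0.1
ns=${target_if}_ns_spdk

run() { echo "+ $*"; }

run ip netns add "$ns"
run ip link set "$target_if" netns "$ns"           # target NIC into the namespace
run ip addr add "$initiator_ip/24" dev "$initiator_if"
run ip netns exec "$ns" ip addr add "$target_ip/24" dev "$target_if"
run ip link set "$initiator_if" up
run ip netns exec "$ns" ip link set "$target_if" up
run ip netns exec "$ns" ip link set lo up
run iptables -I INPUT 1 -i "$initiator_if" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 "$target_ip"                         # initiator -> target
run ip netns exec "$ns" ping -c 1 "$initiator_ip"  # target -> initiator
```

This is also why `NVMF_APP` gets prefixed with `ip netns exec cvl_0_0_ns_spdk` at nvmf/common.sh@293: the `nvmf_tgt` process must live inside the namespace that owns the target interface.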
-- nvmf/common.sh@754 -- # digest=null 00:15:01.489 07:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:15:01.489 07:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:15:01.489 07:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=c0ec0d6c37dea2d483252c55d0441ce6928e6e87eafdaa6c 00:15:01.489 07:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:15:01.489 07:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.qV7 00:15:01.489 07:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key c0ec0d6c37dea2d483252c55d0441ce6928e6e87eafdaa6c 0 00:15:01.489 07:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 c0ec0d6c37dea2d483252c55d0441ce6928e6e87eafdaa6c 0 00:15:01.489 07:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:01.489 07:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:01.489 07:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=c0ec0d6c37dea2d483252c55d0441ce6928e6e87eafdaa6c 00:15:01.489 07:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:15:01.489 07:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:01.489 07:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.qV7 00:15:01.489 07:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.qV7 00:15:01.489 07:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.qV7 00:15:01.489 07:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:15:01.489 07:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:01.489 07:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:01.489 07:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:01.489 07:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:15:01.489 07:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:15:01.489 07:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:15:01.489 07:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=893e8984439e09d9fa6c6d8069daef4d9f23ec1bfbbb4bb1b294dc9d264305e2 00:15:01.489 07:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:15:01.489 07:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.8a4 00:15:01.489 07:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 893e8984439e09d9fa6c6d8069daef4d9f23ec1bfbbb4bb1b294dc9d264305e2 3 00:15:01.489 07:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 893e8984439e09d9fa6c6d8069daef4d9f23ec1bfbbb4bb1b294dc9d264305e2 3 00:15:01.489 07:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:01.489 07:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:01.489 07:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=893e8984439e09d9fa6c6d8069daef4d9f23ec1bfbbb4bb1b294dc9d264305e2 00:15:01.489 07:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@732 -- # digest=3 00:15:01.489 07:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:01.490 07:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.8a4 00:15:01.490 07:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.8a4 00:15:01.490 07:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.8a4 00:15:01.490 07:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:15:01.490 07:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:01.490 07:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:01.490 07:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:01.490 07:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:15:01.490 07:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:15:01.490 07:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:15:01.490 07:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=00d1eeae4f24a4521b37ee2040cad3c9 00:15:01.490 07:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:15:01.490 07:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.79H 00:15:01.490 07:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 00d1eeae4f24a4521b37ee2040cad3c9 1 00:15:01.490 07:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 
00d1eeae4f24a4521b37ee2040cad3c9 1 00:15:01.490 07:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:01.490 07:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:01.490 07:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=00d1eeae4f24a4521b37ee2040cad3c9 00:15:01.490 07:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:15:01.490 07:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:01.490 07:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.79H 00:15:01.490 07:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.79H 00:15:01.490 07:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.79H 00:15:01.490 07:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:15:01.490 07:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:01.490 07:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:01.490 07:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:01.490 07:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:15:01.490 07:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:15:01.490 07:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:15:01.749 07:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=90bc59d9cd91cd401cd588f4d0e164c59c397916ed393798 00:15:01.749 07:11:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:15:01.749 07:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.XYJ 00:15:01.749 07:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 90bc59d9cd91cd401cd588f4d0e164c59c397916ed393798 2 00:15:01.749 07:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 90bc59d9cd91cd401cd588f4d0e164c59c397916ed393798 2 00:15:01.749 07:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:01.749 07:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:01.749 07:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=90bc59d9cd91cd401cd588f4d0e164c59c397916ed393798 00:15:01.749 07:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:15:01.749 07:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:01.749 07:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.XYJ 00:15:01.749 07:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.XYJ 00:15:01.749 07:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.XYJ 00:15:01.749 07:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:15:01.749 07:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:01.749 07:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:01.749 07:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A 
digests 00:15:01.749 07:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:15:01.749 07:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:15:01.749 07:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:15:01.749 07:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=a564e7fe6310c3320be1d147aa84dc01fb242d14b376542e 00:15:01.749 07:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:15:01.749 07:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.owB 00:15:01.749 07:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key a564e7fe6310c3320be1d147aa84dc01fb242d14b376542e 2 00:15:01.749 07:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 a564e7fe6310c3320be1d147aa84dc01fb242d14b376542e 2 00:15:01.749 07:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:01.749 07:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:01.749 07:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=a564e7fe6310c3320be1d147aa84dc01fb242d14b376542e 00:15:01.749 07:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:15:01.749 07:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:01.749 07:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.owB 00:15:01.749 07:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.owB 00:15:01.749 07:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # 
keys[2]=/tmp/spdk.key-sha384.owB 00:15:01.749 07:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:15:01.749 07:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:01.749 07:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:01.750 07:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:01.750 07:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:15:01.750 07:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:15:01.750 07:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:15:01.750 07:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=0fce65e4526086d6de27464bde839e5a 00:15:01.750 07:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:15:01.750 07:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.G6P 00:15:01.750 07:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 0fce65e4526086d6de27464bde839e5a 1 00:15:01.750 07:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 0fce65e4526086d6de27464bde839e5a 1 00:15:01.750 07:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:01.750 07:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:01.750 07:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=0fce65e4526086d6de27464bde839e5a 00:15:01.750 07:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 
00:15:01.750 07:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:01.750 07:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.G6P 00:15:01.750 07:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.G6P 00:15:01.750 07:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.G6P 00:15:01.750 07:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:15:01.750 07:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:01.750 07:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:01.750 07:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:01.750 07:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:15:01.750 07:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:15:01.750 07:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:15:01.750 07:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=8c82d74e6346bd65cecf2cc13de94f71596ec01c1b78643b68064dc246eaf4d9 00:15:01.750 07:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:15:01.750 07:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.HlU 00:15:01.750 07:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 8c82d74e6346bd65cecf2cc13de94f71596ec01c1b78643b68064dc246eaf4d9 3 00:15:01.750 07:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # 
format_key DHHC-1 8c82d74e6346bd65cecf2cc13de94f71596ec01c1b78643b68064dc246eaf4d9 3 00:15:01.750 07:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:01.750 07:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:01.750 07:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=8c82d74e6346bd65cecf2cc13de94f71596ec01c1b78643b68064dc246eaf4d9 00:15:01.750 07:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:15:01.750 07:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:01.750 07:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.HlU 00:15:01.750 07:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.HlU 00:15:01.750 07:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.HlU 00:15:01.750 07:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:15:01.750 07:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 1170392 00:15:01.750 07:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 1170392 ']' 00:15:01.750 07:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:01.750 07:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:01.750 07:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:01.750 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
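Each `gen_dhchap_key <digest> <len>` call traced above draws `len/2` random bytes with `xxd`, wraps them in the `DHHC-1` secret representation via an inline `python -` step, writes the result to a `mktemp` file, and chmods it to 0600 before it is later registered. A sketch of that pipeline follows; the exact encoding of the middle field is an assumption here (base64 of the key bytes followed by a little-endian CRC32, with the two-digit hash id 00=null, 01=sha256, 02=sha384, 03=sha512, per the NVMe DH-HMAC-CHAP secret format), since the trace only shows that `python -` is invoked:

```shell
# Sketch of gen_dhchap_key / format_dhchap_key (nvmf/common.sh@751-760).
# Assumption: the DHHC-1 payload is base64(key || CRC32(key) as 4 LE
# bytes); the log shows the prefix, digest number, and chmod but not
# the encoder body.
digest=1            # sha256, matching the digests=(...) map in the trace
len=32              # hex characters, i.e. 16 key bytes
key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)

secret=$(python3 - "$key" "$digest" <<'PY'
import base64, binascii, struct, sys
key = bytes.fromhex(sys.argv[1])
digest = int(sys.argv[2])
blob = base64.b64encode(key + struct.pack("<I", binascii.crc32(key))).decode()
print(f"DHHC-1:{digest:02x}:{blob}:")
PY
)

file=$(mktemp -t spdk.key-sha256.XXX)
printf '%s\n' "$secret" > "$file"
chmod 0600 "$file"  # the trace chmods every key file before registering it
```

The suite generates four key/ckey pairs this way (null/48, sha256/32, sha384/48 twice, sha512/64) so that every digest and key length combination gets exercised during the auth tests.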
00:15:01.750 07:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:01.750 07:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:02.009 07:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:02.009 07:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:15:02.009 07:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 1170458 /var/tmp/host.sock 00:15:02.009 07:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 1170458 ']' 00:15:02.009 07:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/host.sock 00:15:02.009 07:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:02.009 07:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:15:02.009 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
00:15:02.009 07:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:02.009 07:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:02.268 07:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:02.268 07:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:15:02.268 07:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:15:02.268 07:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.268 07:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:02.268 07:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.268 07:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:15:02.268 07:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.qV7 00:15:02.268 07:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.268 07:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:02.268 07:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.268 07:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.qV7 00:15:02.268 07:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.qV7 00:15:02.527 07:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n 
/tmp/spdk.key-sha512.8a4 ]] 00:15:02.527 07:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.8a4 00:15:02.527 07:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.527 07:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:02.527 07:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.527 07:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.8a4 00:15:02.527 07:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.8a4 00:15:02.786 07:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:15:02.786 07:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.79H 00:15:02.786 07:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.786 07:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:02.786 07:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.786 07:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.79H 00:15:02.786 07:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.79H 00:15:02.786 07:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # 
[[ -n /tmp/spdk.key-sha384.XYJ ]] 00:15:02.786 07:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.XYJ 00:15:02.786 07:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.786 07:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:02.786 07:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.786 07:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.XYJ 00:15:02.786 07:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.XYJ 00:15:03.045 07:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:15:03.045 07:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.owB 00:15:03.045 07:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.045 07:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:03.045 07:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.045 07:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.owB 00:15:03.045 07:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.owB 00:15:03.303 07:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.G6P ]] 00:15:03.303 07:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.G6P 00:15:03.303 07:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.303 07:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:03.303 07:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.304 07:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.G6P 00:15:03.304 07:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.G6P 00:15:03.563 07:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:15:03.563 07:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.HlU 00:15:03.563 07:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.563 07:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:03.563 07:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.563 07:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.HlU 00:15:03.563 07:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.HlU 00:15:03.563 07:11:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:15:03.563 07:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:15:03.822 07:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:03.822 07:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:03.822 07:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:03.822 07:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:03.822 07:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:15:03.822 07:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:03.822 07:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:03.822 07:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:03.822 07:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:03.822 07:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:03.822 07:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:03.822 07:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.822 07:11:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:03.822 07:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.822 07:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:03.822 07:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:03.822 07:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:04.081 00:15:04.081 07:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:04.081 07:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:04.081 07:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:04.340 07:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:04.340 07:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:04.340 07:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.340 07:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:15:04.340 07:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.340 07:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:04.340 { 00:15:04.340 "cntlid": 1, 00:15:04.340 "qid": 0, 00:15:04.340 "state": "enabled", 00:15:04.340 "thread": "nvmf_tgt_poll_group_000", 00:15:04.340 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:04.340 "listen_address": { 00:15:04.340 "trtype": "TCP", 00:15:04.340 "adrfam": "IPv4", 00:15:04.340 "traddr": "10.0.0.2", 00:15:04.340 "trsvcid": "4420" 00:15:04.340 }, 00:15:04.340 "peer_address": { 00:15:04.340 "trtype": "TCP", 00:15:04.340 "adrfam": "IPv4", 00:15:04.340 "traddr": "10.0.0.1", 00:15:04.340 "trsvcid": "52222" 00:15:04.340 }, 00:15:04.340 "auth": { 00:15:04.340 "state": "completed", 00:15:04.340 "digest": "sha256", 00:15:04.340 "dhgroup": "null" 00:15:04.340 } 00:15:04.340 } 00:15:04.340 ]' 00:15:04.340 07:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:04.340 07:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:04.340 07:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:04.340 07:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:04.340 07:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:04.599 07:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:04.599 07:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:04.599 07:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:04.599 07:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YzBlYzBkNmMzN2RlYTJkNDgzMjUyYzU1ZDA0NDFjZTY5MjhlNmU4N2VhZmRhYTZjlv9xVQ==: --dhchap-ctrl-secret DHHC-1:03:ODkzZTg5ODQ0MzllMDlkOWZhNmM2ZDgwNjlkYWVmNGQ5ZjIzZWMxYmZiYmI0YmIxYjI5NGRjOWQyNjQzMDVlMmvFeQQ=: 00:15:04.599 07:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YzBlYzBkNmMzN2RlYTJkNDgzMjUyYzU1ZDA0NDFjZTY5MjhlNmU4N2VhZmRhYTZjlv9xVQ==: --dhchap-ctrl-secret DHHC-1:03:ODkzZTg5ODQ0MzllMDlkOWZhNmM2ZDgwNjlkYWVmNGQ5ZjIzZWMxYmZiYmI0YmIxYjI5NGRjOWQyNjQzMDVlMmvFeQQ=: 00:15:05.165 07:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:05.165 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:05.165 07:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:05.165 07:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.165 07:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:05.165 07:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.166 07:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:05.166 07:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha256 --dhchap-dhgroups null 00:15:05.166 07:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:05.424 07:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:15:05.424 07:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:05.424 07:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:05.424 07:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:05.424 07:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:05.424 07:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:05.424 07:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:05.424 07:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.424 07:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:05.424 07:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.424 07:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:05.424 07:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:05.425 07:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:05.682 00:15:05.682 07:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:05.682 07:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:05.682 07:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:05.940 07:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:05.940 07:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:05.940 07:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.940 07:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:05.940 07:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.940 07:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:05.940 { 00:15:05.940 "cntlid": 3, 00:15:05.940 "qid": 0, 00:15:05.940 "state": "enabled", 00:15:05.940 "thread": "nvmf_tgt_poll_group_000", 00:15:05.940 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:05.940 "listen_address": { 00:15:05.940 "trtype": "TCP", 00:15:05.940 "adrfam": "IPv4", 00:15:05.940 
"traddr": "10.0.0.2", 00:15:05.940 "trsvcid": "4420" 00:15:05.940 }, 00:15:05.940 "peer_address": { 00:15:05.940 "trtype": "TCP", 00:15:05.940 "adrfam": "IPv4", 00:15:05.940 "traddr": "10.0.0.1", 00:15:05.940 "trsvcid": "52250" 00:15:05.940 }, 00:15:05.940 "auth": { 00:15:05.940 "state": "completed", 00:15:05.940 "digest": "sha256", 00:15:05.941 "dhgroup": "null" 00:15:05.941 } 00:15:05.941 } 00:15:05.941 ]' 00:15:05.941 07:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:05.941 07:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:05.941 07:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:05.941 07:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:05.941 07:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:06.199 07:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:06.199 07:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:06.199 07:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:06.199 07:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MDBkMWVlYWU0ZjI0YTQ1MjFiMzdlZTIwNDBjYWQzYznoBVG8: --dhchap-ctrl-secret DHHC-1:02:OTBiYzU5ZDljZDkxY2Q0MDFjZDU4OGY0ZDBlMTY0YzU5YzM5NzkxNmVkMzkzNzk4tQX+Ug==: 00:15:06.199 07:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 
--hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MDBkMWVlYWU0ZjI0YTQ1MjFiMzdlZTIwNDBjYWQzYznoBVG8: --dhchap-ctrl-secret DHHC-1:02:OTBiYzU5ZDljZDkxY2Q0MDFjZDU4OGY0ZDBlMTY0YzU5YzM5NzkxNmVkMzkzNzk4tQX+Ug==: 00:15:06.766 07:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:06.766 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:06.766 07:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:06.766 07:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.766 07:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:06.766 07:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.766 07:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:06.766 07:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:06.766 07:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:07.025 07:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:15:07.025 07:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:07.025 07:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:07.025 07:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 
-- # dhgroup=null 00:15:07.025 07:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:07.025 07:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:07.025 07:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:07.025 07:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:07.025 07:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:07.025 07:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:07.025 07:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:07.025 07:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:07.025 07:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:07.283 00:15:07.283 07:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:07.283 07:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:07.283 
07:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:07.542 07:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:07.542 07:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:07.542 07:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:07.542 07:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:07.542 07:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:07.542 07:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:07.542 { 00:15:07.542 "cntlid": 5, 00:15:07.542 "qid": 0, 00:15:07.542 "state": "enabled", 00:15:07.542 "thread": "nvmf_tgt_poll_group_000", 00:15:07.542 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:07.542 "listen_address": { 00:15:07.542 "trtype": "TCP", 00:15:07.542 "adrfam": "IPv4", 00:15:07.542 "traddr": "10.0.0.2", 00:15:07.542 "trsvcid": "4420" 00:15:07.542 }, 00:15:07.542 "peer_address": { 00:15:07.542 "trtype": "TCP", 00:15:07.542 "adrfam": "IPv4", 00:15:07.542 "traddr": "10.0.0.1", 00:15:07.542 "trsvcid": "52280" 00:15:07.542 }, 00:15:07.542 "auth": { 00:15:07.542 "state": "completed", 00:15:07.542 "digest": "sha256", 00:15:07.542 "dhgroup": "null" 00:15:07.542 } 00:15:07.542 } 00:15:07.542 ]' 00:15:07.542 07:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:07.542 07:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:07.542 07:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 
-- # jq -r '.[0].auth.dhgroup' 00:15:07.542 07:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:07.542 07:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:07.542 07:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:07.542 07:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:07.542 07:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:07.801 07:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTU2NGU3ZmU2MzEwYzMzMjBiZTFkMTQ3YWE4NGRjMDFmYjI0MmQxNGIzNzY1NDJl8zg9sg==: --dhchap-ctrl-secret DHHC-1:01:MGZjZTY1ZTQ1MjYwODZkNmRlMjc0NjRiZGU4MzllNWGLuOVs: 00:15:07.801 07:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YTU2NGU3ZmU2MzEwYzMzMjBiZTFkMTQ3YWE4NGRjMDFmYjI0MmQxNGIzNzY1NDJl8zg9sg==: --dhchap-ctrl-secret DHHC-1:01:MGZjZTY1ZTQ1MjYwODZkNmRlMjc0NjRiZGU4MzllNWGLuOVs: 00:15:08.368 07:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:08.368 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:08.368 07:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:08.368 07:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.368 07:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:08.368 07:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.368 07:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:08.368 07:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:08.368 07:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:08.627 07:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:15:08.627 07:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:08.627 07:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:08.627 07:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:08.627 07:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:08.627 07:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:08.627 07:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:15:08.627 07:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.627 07:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:15:08.627 07:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.627 07:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:08.627 07:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:08.627 07:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:08.885 00:15:08.885 07:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:08.885 07:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:08.885 07:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:09.144 07:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:09.145 07:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:09.145 07:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.145 07:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:09.145 07:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.145 
07:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:09.145 { 00:15:09.145 "cntlid": 7, 00:15:09.145 "qid": 0, 00:15:09.145 "state": "enabled", 00:15:09.145 "thread": "nvmf_tgt_poll_group_000", 00:15:09.145 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:09.145 "listen_address": { 00:15:09.145 "trtype": "TCP", 00:15:09.145 "adrfam": "IPv4", 00:15:09.145 "traddr": "10.0.0.2", 00:15:09.145 "trsvcid": "4420" 00:15:09.145 }, 00:15:09.145 "peer_address": { 00:15:09.145 "trtype": "TCP", 00:15:09.145 "adrfam": "IPv4", 00:15:09.145 "traddr": "10.0.0.1", 00:15:09.145 "trsvcid": "52308" 00:15:09.145 }, 00:15:09.145 "auth": { 00:15:09.145 "state": "completed", 00:15:09.145 "digest": "sha256", 00:15:09.145 "dhgroup": "null" 00:15:09.145 } 00:15:09.145 } 00:15:09.145 ]' 00:15:09.145 07:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:09.145 07:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:09.145 07:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:09.145 07:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:09.145 07:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:09.145 07:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:09.145 07:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:09.145 07:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:09.403 07:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OGM4MmQ3NGU2MzQ2YmQ2NWNlY2YyY2MxM2RlOTRmNzE1OTZlYzAxYzFiNzg2NDNiNjgwNjRkYzI0NmVhZjRkOULdqeA=: 00:15:09.403 07:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:OGM4MmQ3NGU2MzQ2YmQ2NWNlY2YyY2MxM2RlOTRmNzE1OTZlYzAxYzFiNzg2NDNiNjgwNjRkYzI0NmVhZjRkOULdqeA=: 00:15:09.971 07:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:09.971 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:09.971 07:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:09.971 07:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.971 07:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:09.971 07:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.971 07:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:09.971 07:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:09.971 07:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:09.971 07:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe2048 00:15:10.230 07:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:15:10.230 07:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:10.230 07:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:10.230 07:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:10.230 07:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:10.230 07:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:10.230 07:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:10.230 07:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.230 07:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:10.230 07:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.230 07:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:10.230 07:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:10.230 07:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:10.489 00:15:10.489 07:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:10.489 07:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:10.489 07:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:10.747 07:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:10.747 07:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:10.747 07:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.747 07:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:10.748 07:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.748 07:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:10.748 { 00:15:10.748 "cntlid": 9, 00:15:10.748 "qid": 0, 00:15:10.748 "state": "enabled", 00:15:10.748 "thread": "nvmf_tgt_poll_group_000", 00:15:10.748 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:10.748 "listen_address": { 00:15:10.748 "trtype": "TCP", 00:15:10.748 "adrfam": "IPv4", 00:15:10.748 "traddr": "10.0.0.2", 00:15:10.748 "trsvcid": "4420" 00:15:10.748 }, 00:15:10.748 "peer_address": { 00:15:10.748 "trtype": "TCP", 00:15:10.748 "adrfam": "IPv4", 00:15:10.748 "traddr": "10.0.0.1", 00:15:10.748 "trsvcid": "36542" 00:15:10.748 
}, 00:15:10.748 "auth": { 00:15:10.748 "state": "completed", 00:15:10.748 "digest": "sha256", 00:15:10.748 "dhgroup": "ffdhe2048" 00:15:10.748 } 00:15:10.748 } 00:15:10.748 ]' 00:15:10.748 07:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:10.748 07:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:10.748 07:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:10.748 07:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:10.748 07:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:10.748 07:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:10.748 07:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:10.748 07:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:11.006 07:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YzBlYzBkNmMzN2RlYTJkNDgzMjUyYzU1ZDA0NDFjZTY5MjhlNmU4N2VhZmRhYTZjlv9xVQ==: --dhchap-ctrl-secret DHHC-1:03:ODkzZTg5ODQ0MzllMDlkOWZhNmM2ZDgwNjlkYWVmNGQ5ZjIzZWMxYmZiYmI0YmIxYjI5NGRjOWQyNjQzMDVlMmvFeQQ=: 00:15:11.006 07:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YzBlYzBkNmMzN2RlYTJkNDgzMjUyYzU1ZDA0NDFjZTY5MjhlNmU4N2VhZmRhYTZjlv9xVQ==: --dhchap-ctrl-secret 
DHHC-1:03:ODkzZTg5ODQ0MzllMDlkOWZhNmM2ZDgwNjlkYWVmNGQ5ZjIzZWMxYmZiYmI0YmIxYjI5NGRjOWQyNjQzMDVlMmvFeQQ=: 00:15:11.573 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:11.573 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:11.573 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:11.573 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.573 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:11.573 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.573 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:11.573 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:11.573 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:11.831 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:15:11.831 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:11.831 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:11.831 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:11.831 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- 
# key=key1 00:15:11.831 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:11.831 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:11.831 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.831 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:11.831 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.831 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:11.831 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:11.831 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:12.089 00:15:12.089 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:12.089 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:12.089 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:12.348 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:12.348 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:12.348 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.348 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:12.348 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.348 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:12.348 { 00:15:12.348 "cntlid": 11, 00:15:12.348 "qid": 0, 00:15:12.348 "state": "enabled", 00:15:12.348 "thread": "nvmf_tgt_poll_group_000", 00:15:12.348 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:12.348 "listen_address": { 00:15:12.348 "trtype": "TCP", 00:15:12.348 "adrfam": "IPv4", 00:15:12.348 "traddr": "10.0.0.2", 00:15:12.348 "trsvcid": "4420" 00:15:12.348 }, 00:15:12.348 "peer_address": { 00:15:12.348 "trtype": "TCP", 00:15:12.348 "adrfam": "IPv4", 00:15:12.348 "traddr": "10.0.0.1", 00:15:12.348 "trsvcid": "36570" 00:15:12.348 }, 00:15:12.348 "auth": { 00:15:12.348 "state": "completed", 00:15:12.348 "digest": "sha256", 00:15:12.348 "dhgroup": "ffdhe2048" 00:15:12.348 } 00:15:12.348 } 00:15:12.348 ]' 00:15:12.348 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:12.348 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:12.348 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:12.348 07:11:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:12.348 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:12.348 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:12.348 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:12.348 07:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:12.606 07:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MDBkMWVlYWU0ZjI0YTQ1MjFiMzdlZTIwNDBjYWQzYznoBVG8: --dhchap-ctrl-secret DHHC-1:02:OTBiYzU5ZDljZDkxY2Q0MDFjZDU4OGY0ZDBlMTY0YzU5YzM5NzkxNmVkMzkzNzk4tQX+Ug==: 00:15:12.606 07:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MDBkMWVlYWU0ZjI0YTQ1MjFiMzdlZTIwNDBjYWQzYznoBVG8: --dhchap-ctrl-secret DHHC-1:02:OTBiYzU5ZDljZDkxY2Q0MDFjZDU4OGY0ZDBlMTY0YzU5YzM5NzkxNmVkMzkzNzk4tQX+Ug==: 00:15:13.172 07:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:13.172 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:13.172 07:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:13.172 07:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:15:13.172 07:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:13.172 07:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.172 07:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:13.172 07:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:13.172 07:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:13.430 07:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:15:13.430 07:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:13.430 07:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:13.430 07:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:13.430 07:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:13.430 07:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:13.430 07:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:13.430 07:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.430 07:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:15:13.430 07:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.430 07:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:13.430 07:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:13.430 07:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:13.688 00:15:13.688 07:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:13.688 07:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:13.688 07:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:13.947 07:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:13.947 07:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:13.947 07:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.947 07:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:13.947 07:11:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.947 07:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:13.947 { 00:15:13.947 "cntlid": 13, 00:15:13.947 "qid": 0, 00:15:13.947 "state": "enabled", 00:15:13.947 "thread": "nvmf_tgt_poll_group_000", 00:15:13.947 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:13.947 "listen_address": { 00:15:13.947 "trtype": "TCP", 00:15:13.947 "adrfam": "IPv4", 00:15:13.947 "traddr": "10.0.0.2", 00:15:13.947 "trsvcid": "4420" 00:15:13.947 }, 00:15:13.947 "peer_address": { 00:15:13.947 "trtype": "TCP", 00:15:13.947 "adrfam": "IPv4", 00:15:13.947 "traddr": "10.0.0.1", 00:15:13.947 "trsvcid": "36596" 00:15:13.947 }, 00:15:13.947 "auth": { 00:15:13.947 "state": "completed", 00:15:13.947 "digest": "sha256", 00:15:13.947 "dhgroup": "ffdhe2048" 00:15:13.947 } 00:15:13.947 } 00:15:13.947 ]' 00:15:13.947 07:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:13.947 07:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:13.947 07:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:13.947 07:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:13.947 07:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:13.947 07:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:13.947 07:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:13.947 07:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:14.207 07:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTU2NGU3ZmU2MzEwYzMzMjBiZTFkMTQ3YWE4NGRjMDFmYjI0MmQxNGIzNzY1NDJl8zg9sg==: --dhchap-ctrl-secret DHHC-1:01:MGZjZTY1ZTQ1MjYwODZkNmRlMjc0NjRiZGU4MzllNWGLuOVs: 00:15:14.207 07:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YTU2NGU3ZmU2MzEwYzMzMjBiZTFkMTQ3YWE4NGRjMDFmYjI0MmQxNGIzNzY1NDJl8zg9sg==: --dhchap-ctrl-secret DHHC-1:01:MGZjZTY1ZTQ1MjYwODZkNmRlMjc0NjRiZGU4MzllNWGLuOVs: 00:15:14.775 07:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:14.775 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:14.775 07:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:14.775 07:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.775 07:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:14.775 07:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.775 07:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:14.775 07:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:14.775 07:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:15.032 07:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:15:15.032 07:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:15.032 07:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:15.032 07:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:15.032 07:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:15.032 07:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:15.032 07:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:15:15.032 07:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.032 07:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:15.032 07:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.032 07:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:15.032 07:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:15.032 07:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:15.291 00:15:15.291 07:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:15.291 07:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:15.291 07:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:15.550 07:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:15.550 07:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:15.550 07:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.550 07:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:15.550 07:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.550 07:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:15.550 { 00:15:15.550 "cntlid": 15, 00:15:15.550 "qid": 0, 00:15:15.550 "state": "enabled", 00:15:15.550 "thread": "nvmf_tgt_poll_group_000", 00:15:15.550 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:15.550 "listen_address": { 00:15:15.550 "trtype": "TCP", 00:15:15.550 "adrfam": "IPv4", 00:15:15.550 "traddr": "10.0.0.2", 00:15:15.550 "trsvcid": "4420" 00:15:15.550 }, 00:15:15.550 "peer_address": { 00:15:15.550 "trtype": "TCP", 00:15:15.550 "adrfam": "IPv4", 00:15:15.550 "traddr": "10.0.0.1", 
00:15:15.550 "trsvcid": "36624" 00:15:15.550 }, 00:15:15.550 "auth": { 00:15:15.550 "state": "completed", 00:15:15.550 "digest": "sha256", 00:15:15.550 "dhgroup": "ffdhe2048" 00:15:15.550 } 00:15:15.550 } 00:15:15.550 ]' 00:15:15.550 07:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:15.550 07:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:15.550 07:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:15.550 07:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:15.550 07:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:15.550 07:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:15.550 07:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:15.550 07:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:15.808 07:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OGM4MmQ3NGU2MzQ2YmQ2NWNlY2YyY2MxM2RlOTRmNzE1OTZlYzAxYzFiNzg2NDNiNjgwNjRkYzI0NmVhZjRkOULdqeA=: 00:15:15.808 07:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:OGM4MmQ3NGU2MzQ2YmQ2NWNlY2YyY2MxM2RlOTRmNzE1OTZlYzAxYzFiNzg2NDNiNjgwNjRkYzI0NmVhZjRkOULdqeA=: 00:15:16.377 07:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:16.377 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:16.377 07:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:16.377 07:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.377 07:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:16.377 07:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.377 07:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:16.377 07:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:16.377 07:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:16.377 07:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:16.636 07:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:15:16.636 07:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:16.636 07:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:16.636 07:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:16.636 07:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:16.636 07:11:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:16.636 07:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:16.636 07:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.636 07:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:16.636 07:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.636 07:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:16.636 07:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:16.636 07:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:16.894 00:15:16.894 07:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:16.894 07:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:16.894 07:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:17.153 07:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:17.153 07:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:17.153 07:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.153 07:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:17.153 07:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.153 07:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:17.153 { 00:15:17.153 "cntlid": 17, 00:15:17.153 "qid": 0, 00:15:17.153 "state": "enabled", 00:15:17.153 "thread": "nvmf_tgt_poll_group_000", 00:15:17.153 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:17.153 "listen_address": { 00:15:17.153 "trtype": "TCP", 00:15:17.153 "adrfam": "IPv4", 00:15:17.153 "traddr": "10.0.0.2", 00:15:17.153 "trsvcid": "4420" 00:15:17.153 }, 00:15:17.153 "peer_address": { 00:15:17.153 "trtype": "TCP", 00:15:17.153 "adrfam": "IPv4", 00:15:17.153 "traddr": "10.0.0.1", 00:15:17.153 "trsvcid": "36640" 00:15:17.153 }, 00:15:17.153 "auth": { 00:15:17.153 "state": "completed", 00:15:17.153 "digest": "sha256", 00:15:17.153 "dhgroup": "ffdhe3072" 00:15:17.153 } 00:15:17.153 } 00:15:17.153 ]' 00:15:17.153 07:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:17.153 07:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:17.153 07:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:17.153 07:11:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:17.153 07:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:17.153 07:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:17.153 07:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:17.153 07:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:17.410 07:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YzBlYzBkNmMzN2RlYTJkNDgzMjUyYzU1ZDA0NDFjZTY5MjhlNmU4N2VhZmRhYTZjlv9xVQ==: --dhchap-ctrl-secret DHHC-1:03:ODkzZTg5ODQ0MzllMDlkOWZhNmM2ZDgwNjlkYWVmNGQ5ZjIzZWMxYmZiYmI0YmIxYjI5NGRjOWQyNjQzMDVlMmvFeQQ=: 00:15:17.410 07:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YzBlYzBkNmMzN2RlYTJkNDgzMjUyYzU1ZDA0NDFjZTY5MjhlNmU4N2VhZmRhYTZjlv9xVQ==: --dhchap-ctrl-secret DHHC-1:03:ODkzZTg5ODQ0MzllMDlkOWZhNmM2ZDgwNjlkYWVmNGQ5ZjIzZWMxYmZiYmI0YmIxYjI5NGRjOWQyNjQzMDVlMmvFeQQ=: 00:15:17.976 07:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:17.976 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:17.976 07:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:17.976 07:11:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.976 07:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:17.976 07:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.976 07:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:17.976 07:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:17.976 07:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:18.235 07:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:15:18.235 07:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:18.235 07:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:18.236 07:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:18.236 07:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:18.236 07:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:18.236 07:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:18.236 07:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.236 07:11:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:18.236 07:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.236 07:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:18.236 07:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:18.236 07:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:18.494 00:15:18.494 07:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:18.494 07:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:18.494 07:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:18.752 07:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:18.752 07:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:18.752 07:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.752 07:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:15:18.752 07:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.752 07:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:18.752 { 00:15:18.752 "cntlid": 19, 00:15:18.752 "qid": 0, 00:15:18.752 "state": "enabled", 00:15:18.752 "thread": "nvmf_tgt_poll_group_000", 00:15:18.752 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:18.752 "listen_address": { 00:15:18.752 "trtype": "TCP", 00:15:18.752 "adrfam": "IPv4", 00:15:18.752 "traddr": "10.0.0.2", 00:15:18.752 "trsvcid": "4420" 00:15:18.752 }, 00:15:18.752 "peer_address": { 00:15:18.752 "trtype": "TCP", 00:15:18.752 "adrfam": "IPv4", 00:15:18.752 "traddr": "10.0.0.1", 00:15:18.752 "trsvcid": "36664" 00:15:18.752 }, 00:15:18.752 "auth": { 00:15:18.752 "state": "completed", 00:15:18.752 "digest": "sha256", 00:15:18.752 "dhgroup": "ffdhe3072" 00:15:18.752 } 00:15:18.752 } 00:15:18.752 ]' 00:15:18.752 07:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:18.752 07:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:18.752 07:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:18.752 07:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:18.752 07:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:18.752 07:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:18.752 07:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:18.752 07:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:19.011 07:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MDBkMWVlYWU0ZjI0YTQ1MjFiMzdlZTIwNDBjYWQzYznoBVG8: --dhchap-ctrl-secret DHHC-1:02:OTBiYzU5ZDljZDkxY2Q0MDFjZDU4OGY0ZDBlMTY0YzU5YzM5NzkxNmVkMzkzNzk4tQX+Ug==: 00:15:19.011 07:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MDBkMWVlYWU0ZjI0YTQ1MjFiMzdlZTIwNDBjYWQzYznoBVG8: --dhchap-ctrl-secret DHHC-1:02:OTBiYzU5ZDljZDkxY2Q0MDFjZDU4OGY0ZDBlMTY0YzU5YzM5NzkxNmVkMzkzNzk4tQX+Ug==: 00:15:19.578 07:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:19.578 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:19.578 07:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:19.578 07:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.578 07:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:19.578 07:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.578 07:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:19.578 07:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:19.578 07:11:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:19.837 07:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:15:19.837 07:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:19.837 07:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:19.837 07:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:19.837 07:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:19.837 07:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:19.837 07:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:19.837 07:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.837 07:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:19.837 07:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.837 07:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:19.837 07:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:19.837 07:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:20.096 00:15:20.096 07:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:20.096 07:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:20.096 07:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:20.353 07:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:20.353 07:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:20.353 07:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.353 07:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:20.353 07:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.353 07:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:20.353 { 00:15:20.353 "cntlid": 21, 00:15:20.353 "qid": 0, 00:15:20.353 "state": "enabled", 00:15:20.353 "thread": "nvmf_tgt_poll_group_000", 00:15:20.353 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:20.353 "listen_address": { 00:15:20.353 "trtype": "TCP", 00:15:20.353 "adrfam": "IPv4", 00:15:20.353 "traddr": "10.0.0.2", 00:15:20.353 
"trsvcid": "4420" 00:15:20.353 }, 00:15:20.353 "peer_address": { 00:15:20.353 "trtype": "TCP", 00:15:20.353 "adrfam": "IPv4", 00:15:20.353 "traddr": "10.0.0.1", 00:15:20.353 "trsvcid": "34602" 00:15:20.353 }, 00:15:20.353 "auth": { 00:15:20.353 "state": "completed", 00:15:20.353 "digest": "sha256", 00:15:20.353 "dhgroup": "ffdhe3072" 00:15:20.353 } 00:15:20.353 } 00:15:20.353 ]' 00:15:20.353 07:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:20.353 07:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:20.353 07:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:20.353 07:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:20.353 07:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:20.353 07:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:20.353 07:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:20.354 07:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:20.614 07:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTU2NGU3ZmU2MzEwYzMzMjBiZTFkMTQ3YWE4NGRjMDFmYjI0MmQxNGIzNzY1NDJl8zg9sg==: --dhchap-ctrl-secret DHHC-1:01:MGZjZTY1ZTQ1MjYwODZkNmRlMjc0NjRiZGU4MzllNWGLuOVs: 00:15:20.614 07:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 
80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YTU2NGU3ZmU2MzEwYzMzMjBiZTFkMTQ3YWE4NGRjMDFmYjI0MmQxNGIzNzY1NDJl8zg9sg==: --dhchap-ctrl-secret DHHC-1:01:MGZjZTY1ZTQ1MjYwODZkNmRlMjc0NjRiZGU4MzllNWGLuOVs: 00:15:21.276 07:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:21.276 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:21.276 07:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:21.276 07:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.276 07:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:21.276 07:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.276 07:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:21.276 07:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:21.276 07:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:21.534 07:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:15:21.534 07:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:21.534 07:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:21.534 07:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:21.534 07:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:21.534 07:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:21.534 07:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:15:21.534 07:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.534 07:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:21.534 07:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.534 07:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:21.534 07:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:21.534 07:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:21.534 00:15:21.794 07:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:21.794 07:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:21.794 07:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:21.794 07:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:21.794 07:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:21.794 07:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.794 07:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:21.794 07:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.794 07:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:21.794 { 00:15:21.794 "cntlid": 23, 00:15:21.794 "qid": 0, 00:15:21.794 "state": "enabled", 00:15:21.794 "thread": "nvmf_tgt_poll_group_000", 00:15:21.794 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:21.794 "listen_address": { 00:15:21.794 "trtype": "TCP", 00:15:21.794 "adrfam": "IPv4", 00:15:21.794 "traddr": "10.0.0.2", 00:15:21.794 "trsvcid": "4420" 00:15:21.794 }, 00:15:21.794 "peer_address": { 00:15:21.794 "trtype": "TCP", 00:15:21.794 "adrfam": "IPv4", 00:15:21.794 "traddr": "10.0.0.1", 00:15:21.794 "trsvcid": "34644" 00:15:21.794 }, 00:15:21.794 "auth": { 00:15:21.794 "state": "completed", 00:15:21.794 "digest": "sha256", 00:15:21.794 "dhgroup": "ffdhe3072" 00:15:21.794 } 00:15:21.794 } 00:15:21.794 ]' 00:15:21.794 07:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:22.053 07:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:22.053 07:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:22.053 07:11:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:22.053 07:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:22.053 07:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:22.053 07:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:22.053 07:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:22.312 07:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OGM4MmQ3NGU2MzQ2YmQ2NWNlY2YyY2MxM2RlOTRmNzE1OTZlYzAxYzFiNzg2NDNiNjgwNjRkYzI0NmVhZjRkOULdqeA=: 00:15:22.312 07:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:OGM4MmQ3NGU2MzQ2YmQ2NWNlY2YyY2MxM2RlOTRmNzE1OTZlYzAxYzFiNzg2NDNiNjgwNjRkYzI0NmVhZjRkOULdqeA=: 00:15:22.880 07:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:22.880 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:22.880 07:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:22.880 07:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.880 07:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:15:22.880 07:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.880 07:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:22.880 07:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:22.880 07:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:22.880 07:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:22.880 07:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:15:22.880 07:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:22.880 07:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:22.880 07:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:22.880 07:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:22.880 07:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:22.880 07:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:22.880 07:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.880 07:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:15:22.880 07:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.880 07:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:22.880 07:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:22.880 07:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:23.448 00:15:23.448 07:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:23.448 07:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:23.448 07:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:23.448 07:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:23.448 07:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:23.448 07:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.448 07:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:23.448 07:11:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.448 07:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:23.448 { 00:15:23.448 "cntlid": 25, 00:15:23.448 "qid": 0, 00:15:23.448 "state": "enabled", 00:15:23.448 "thread": "nvmf_tgt_poll_group_000", 00:15:23.448 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:23.448 "listen_address": { 00:15:23.448 "trtype": "TCP", 00:15:23.448 "adrfam": "IPv4", 00:15:23.448 "traddr": "10.0.0.2", 00:15:23.448 "trsvcid": "4420" 00:15:23.448 }, 00:15:23.448 "peer_address": { 00:15:23.448 "trtype": "TCP", 00:15:23.448 "adrfam": "IPv4", 00:15:23.448 "traddr": "10.0.0.1", 00:15:23.448 "trsvcid": "34664" 00:15:23.448 }, 00:15:23.448 "auth": { 00:15:23.448 "state": "completed", 00:15:23.448 "digest": "sha256", 00:15:23.448 "dhgroup": "ffdhe4096" 00:15:23.448 } 00:15:23.448 } 00:15:23.448 ]' 00:15:23.448 07:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:23.448 07:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:23.448 07:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:23.707 07:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:23.707 07:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:23.707 07:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:23.707 07:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:23.707 07:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:23.707 07:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YzBlYzBkNmMzN2RlYTJkNDgzMjUyYzU1ZDA0NDFjZTY5MjhlNmU4N2VhZmRhYTZjlv9xVQ==: --dhchap-ctrl-secret DHHC-1:03:ODkzZTg5ODQ0MzllMDlkOWZhNmM2ZDgwNjlkYWVmNGQ5ZjIzZWMxYmZiYmI0YmIxYjI5NGRjOWQyNjQzMDVlMmvFeQQ=: 00:15:23.707 07:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YzBlYzBkNmMzN2RlYTJkNDgzMjUyYzU1ZDA0NDFjZTY5MjhlNmU4N2VhZmRhYTZjlv9xVQ==: --dhchap-ctrl-secret DHHC-1:03:ODkzZTg5ODQ0MzllMDlkOWZhNmM2ZDgwNjlkYWVmNGQ5ZjIzZWMxYmZiYmI0YmIxYjI5NGRjOWQyNjQzMDVlMmvFeQQ=: 00:15:24.275 07:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:24.275 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:24.275 07:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:24.275 07:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.275 07:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:24.534 07:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.534 07:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:24.534 07:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:24.534 07:11:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:24.534 07:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:15:24.534 07:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:24.534 07:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:24.534 07:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:24.534 07:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:24.534 07:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:24.534 07:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:24.534 07:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.534 07:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:24.534 07:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.534 07:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:24.534 07:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:24.534 07:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:24.793 00:15:25.051 07:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:25.051 07:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:25.051 07:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:25.051 07:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:25.052 07:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:25.052 07:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.052 07:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:25.052 07:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.052 07:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:25.052 { 00:15:25.052 "cntlid": 27, 00:15:25.052 "qid": 0, 00:15:25.052 "state": "enabled", 00:15:25.052 "thread": "nvmf_tgt_poll_group_000", 00:15:25.052 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:25.052 "listen_address": { 00:15:25.052 "trtype": "TCP", 00:15:25.052 "adrfam": "IPv4", 00:15:25.052 "traddr": "10.0.0.2", 00:15:25.052 
"trsvcid": "4420" 00:15:25.052 }, 00:15:25.052 "peer_address": { 00:15:25.052 "trtype": "TCP", 00:15:25.052 "adrfam": "IPv4", 00:15:25.052 "traddr": "10.0.0.1", 00:15:25.052 "trsvcid": "34694" 00:15:25.052 }, 00:15:25.052 "auth": { 00:15:25.052 "state": "completed", 00:15:25.052 "digest": "sha256", 00:15:25.052 "dhgroup": "ffdhe4096" 00:15:25.052 } 00:15:25.052 } 00:15:25.052 ]' 00:15:25.052 07:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:25.052 07:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:25.052 07:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:25.310 07:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:25.311 07:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:25.311 07:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:25.311 07:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:25.311 07:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:25.569 07:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MDBkMWVlYWU0ZjI0YTQ1MjFiMzdlZTIwNDBjYWQzYznoBVG8: --dhchap-ctrl-secret DHHC-1:02:OTBiYzU5ZDljZDkxY2Q0MDFjZDU4OGY0ZDBlMTY0YzU5YzM5NzkxNmVkMzkzNzk4tQX+Ug==: 00:15:25.570 07:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 
80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MDBkMWVlYWU0ZjI0YTQ1MjFiMzdlZTIwNDBjYWQzYznoBVG8: --dhchap-ctrl-secret DHHC-1:02:OTBiYzU5ZDljZDkxY2Q0MDFjZDU4OGY0ZDBlMTY0YzU5YzM5NzkxNmVkMzkzNzk4tQX+Ug==: 00:15:26.137 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:26.137 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:26.137 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:26.137 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.137 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:26.137 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.137 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:26.137 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:26.137 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:26.137 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:15:26.137 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:26.137 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:26.137 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:26.137 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:26.137 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:26.137 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:26.137 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.137 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:26.396 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.396 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:26.396 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:26.396 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:26.655 00:15:26.655 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:26.655 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r 
'.[].name' 00:15:26.655 07:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:26.655 07:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:26.655 07:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:26.655 07:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.655 07:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:26.655 07:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.915 07:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:26.915 { 00:15:26.915 "cntlid": 29, 00:15:26.915 "qid": 0, 00:15:26.915 "state": "enabled", 00:15:26.915 "thread": "nvmf_tgt_poll_group_000", 00:15:26.915 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:26.915 "listen_address": { 00:15:26.915 "trtype": "TCP", 00:15:26.915 "adrfam": "IPv4", 00:15:26.915 "traddr": "10.0.0.2", 00:15:26.915 "trsvcid": "4420" 00:15:26.915 }, 00:15:26.915 "peer_address": { 00:15:26.915 "trtype": "TCP", 00:15:26.915 "adrfam": "IPv4", 00:15:26.915 "traddr": "10.0.0.1", 00:15:26.915 "trsvcid": "34726" 00:15:26.915 }, 00:15:26.915 "auth": { 00:15:26.915 "state": "completed", 00:15:26.915 "digest": "sha256", 00:15:26.915 "dhgroup": "ffdhe4096" 00:15:26.915 } 00:15:26.915 } 00:15:26.915 ]' 00:15:26.915 07:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:26.915 07:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:26.915 07:11:31 
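
Each `connect_authenticate` pass above ends with the same three checks against the `nvmf_subsystem_get_qpairs` output (target/auth.sh@75-77, done with `jq` in the script). A minimal standalone sketch of that verification, using only `grep`/`cut` so it runs without `jq` or a live target; the JSON fragment is trimmed from the log above.

```shell
# Sketch of the qpair auth verification done at target/auth.sh@75-77.
# The JSON is a trimmed copy of the nvmf_subsystem_get_qpairs output above.
qpairs='[ { "cntlid": 29, "qid": 0, "state": "enabled",
  "auth": { "state": "completed", "digest": "sha256", "dhgroup": "ffdhe4096" } } ]'

auth_field() {  # print the first "key": "value" pair named $1
  printf '%s' "$qpairs" | grep -o "\"$1\": \"[^\"]*\"" | head -n1 | cut -d'"' -f4
}

[[ $(auth_field digest)  == sha256    ]] && echo "digest ok"
[[ $(auth_field dhgroup) == ffdhe4096 ]] && echo "dhgroup ok"
# "state" appears twice (qpair state, then auth state); take the last match.
[[ $(printf '%s' "$qpairs" | grep -o '"state": "[^"]*"' | tail -n1 | cut -d'"' -f4) == completed ]] \
  && echo "auth state ok"
```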
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:26.915 07:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:26.915 07:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:26.915 07:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:26.915 07:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:26.915 07:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:27.174 07:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTU2NGU3ZmU2MzEwYzMzMjBiZTFkMTQ3YWE4NGRjMDFmYjI0MmQxNGIzNzY1NDJl8zg9sg==: --dhchap-ctrl-secret DHHC-1:01:MGZjZTY1ZTQ1MjYwODZkNmRlMjc0NjRiZGU4MzllNWGLuOVs: 00:15:27.174 07:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YTU2NGU3ZmU2MzEwYzMzMjBiZTFkMTQ3YWE4NGRjMDFmYjI0MmQxNGIzNzY1NDJl8zg9sg==: --dhchap-ctrl-secret DHHC-1:01:MGZjZTY1ZTQ1MjYwODZkNmRlMjc0NjRiZGU4MzllNWGLuOVs: 00:15:27.742 07:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:27.742 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:27.742 07:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:27.742 07:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.742 07:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:27.742 07:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.742 07:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:27.742 07:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:27.742 07:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:28.002 07:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:15:28.002 07:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:28.002 07:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:28.002 07:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:28.002 07:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:28.002 07:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:28.002 07:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:15:28.002 07:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.002 07:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:28.002 07:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.002 07:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:28.002 07:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:28.002 07:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:28.262 00:15:28.262 07:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:28.262 07:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:28.262 07:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:28.262 07:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:28.262 07:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:28.262 07:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.262 07:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:15:28.521 07:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.521 07:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:28.521 { 00:15:28.521 "cntlid": 31, 00:15:28.521 "qid": 0, 00:15:28.521 "state": "enabled", 00:15:28.521 "thread": "nvmf_tgt_poll_group_000", 00:15:28.521 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:28.521 "listen_address": { 00:15:28.521 "trtype": "TCP", 00:15:28.521 "adrfam": "IPv4", 00:15:28.521 "traddr": "10.0.0.2", 00:15:28.521 "trsvcid": "4420" 00:15:28.521 }, 00:15:28.521 "peer_address": { 00:15:28.521 "trtype": "TCP", 00:15:28.521 "adrfam": "IPv4", 00:15:28.521 "traddr": "10.0.0.1", 00:15:28.521 "trsvcid": "34762" 00:15:28.521 }, 00:15:28.521 "auth": { 00:15:28.521 "state": "completed", 00:15:28.521 "digest": "sha256", 00:15:28.521 "dhgroup": "ffdhe4096" 00:15:28.521 } 00:15:28.521 } 00:15:28.521 ]' 00:15:28.521 07:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:28.521 07:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:28.521 07:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:28.521 07:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:28.522 07:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:28.522 07:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:28.522 07:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:28.522 07:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:28.781 07:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OGM4MmQ3NGU2MzQ2YmQ2NWNlY2YyY2MxM2RlOTRmNzE1OTZlYzAxYzFiNzg2NDNiNjgwNjRkYzI0NmVhZjRkOULdqeA=: 00:15:28.781 07:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:OGM4MmQ3NGU2MzQ2YmQ2NWNlY2YyY2MxM2RlOTRmNzE1OTZlYzAxYzFiNzg2NDNiNjgwNjRkYzI0NmVhZjRkOULdqeA=: 00:15:29.349 07:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:29.349 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:29.349 07:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:29.349 07:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.349 07:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:29.349 07:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.349 07:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:29.349 07:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:29.349 07:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:29.349 07:11:33 
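
The output above is produced by a loop over dhgroups and key ids (target/auth.sh@119-123). The following is a condensed dry-run sketch of one iteration, not the script itself: the two `rpc.py` entry points are stubbed with `echo` so the RPC sequence prints without a running SPDK target. The socket path, addresses, and NQNs match this log.

```shell
# Condensed dry-run of one dhgroup/key iteration of target/auth.sh.
tgtrpc()  { echo "rpc.py $*"; }                        # target-side rpc_cmd
hostrpc() { echo "rpc.py -s /var/tmp/host.sock $*"; }  # host-side hostrpc

subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
keyid=0

hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
tgtrpc  nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
        --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"
hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$hostnqn" -n "$subnqn" -b nvme0 \
        --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"
hostrpc bdev_nvme_get_controllers            # expect name == nvme0
tgtrpc  nvmf_subsystem_get_qpairs "$subnqn"  # expect auth.state == "completed"
hostrpc bdev_nvme_detach_controller nvme0
tgtrpc  nvmf_subsystem_remove_host "$subnqn" "$hostnqn"
```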
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:29.608 07:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:15:29.608 07:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:29.608 07:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:29.608 07:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:29.608 07:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:29.608 07:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:29.608 07:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:29.608 07:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.608 07:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:29.609 07:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.609 07:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:29.609 07:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:29.609 07:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:29.868 00:15:29.868 07:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:29.868 07:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:29.868 07:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:30.128 07:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:30.128 07:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:30.128 07:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.128 07:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:30.128 07:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.128 07:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:30.128 { 00:15:30.128 "cntlid": 33, 00:15:30.128 "qid": 0, 00:15:30.128 "state": "enabled", 00:15:30.128 "thread": "nvmf_tgt_poll_group_000", 00:15:30.128 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:30.129 "listen_address": { 00:15:30.129 "trtype": "TCP", 00:15:30.129 "adrfam": "IPv4", 00:15:30.129 "traddr": "10.0.0.2", 00:15:30.129 
"trsvcid": "4420" 00:15:30.129 }, 00:15:30.129 "peer_address": { 00:15:30.129 "trtype": "TCP", 00:15:30.129 "adrfam": "IPv4", 00:15:30.129 "traddr": "10.0.0.1", 00:15:30.129 "trsvcid": "55208" 00:15:30.129 }, 00:15:30.129 "auth": { 00:15:30.129 "state": "completed", 00:15:30.129 "digest": "sha256", 00:15:30.129 "dhgroup": "ffdhe6144" 00:15:30.129 } 00:15:30.129 } 00:15:30.129 ]' 00:15:30.129 07:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:30.129 07:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:30.129 07:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:30.129 07:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:30.129 07:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:30.129 07:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:30.129 07:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:30.129 07:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:30.388 07:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YzBlYzBkNmMzN2RlYTJkNDgzMjUyYzU1ZDA0NDFjZTY5MjhlNmU4N2VhZmRhYTZjlv9xVQ==: --dhchap-ctrl-secret DHHC-1:03:ODkzZTg5ODQ0MzllMDlkOWZhNmM2ZDgwNjlkYWVmNGQ5ZjIzZWMxYmZiYmI0YmIxYjI5NGRjOWQyNjQzMDVlMmvFeQQ=: 00:15:30.388 07:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YzBlYzBkNmMzN2RlYTJkNDgzMjUyYzU1ZDA0NDFjZTY5MjhlNmU4N2VhZmRhYTZjlv9xVQ==: --dhchap-ctrl-secret DHHC-1:03:ODkzZTg5ODQ0MzllMDlkOWZhNmM2ZDgwNjlkYWVmNGQ5ZjIzZWMxYmZiYmI0YmIxYjI5NGRjOWQyNjQzMDVlMmvFeQQ=: 00:15:30.956 07:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:30.956 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:30.956 07:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:30.956 07:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.956 07:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:30.956 07:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.956 07:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:30.956 07:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:30.956 07:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:31.215 07:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:15:31.215 07:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:31.215 07:11:35 
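
A note on the `--dhchap-secret` strings exchanged above: per the NVMe DH-HMAC-CHAP secret representation, they have the shape `DHHC-1:<t>:<base64>:`, where `<t>` names the optional hash transformation (00 means the secret is used as-is) and the base64 payload is, to my understanding, the secret followed by a 4-byte CRC, so a 48-byte secret decodes to 52 bytes. A quick structural sanity check on the key-0 secret from this log (an illustrative sketch, not part of the test):

```shell
# Structural check on a DHHC-1 secret copied from the log above.
# Assumed format: DHHC-1:<hash-id>:<base64(secret || 4-byte CRC)>:
# so the decoded payload length is secret length + 4 (e.g. 36, 52 or 68).
secret='DHHC-1:00:YzBlYzBkNmMzN2RlYTJkNDgzMjUyYzU1ZDA0NDFjZTY5MjhlNmU4N2VhZmRhYTZjlv9xVQ==:'

IFS=: read -r magic hashid b64 _ <<< "$secret"
[[ $magic  == DHHC-1 ]] && echo "magic ok"
[[ $hashid == 00     ]] && echo "hash id 00: secret used as-is"

nbytes=$(printf '%s' "$b64" | base64 -d | wc -c)
echo "decoded payload: $nbytes bytes ($((nbytes - 4))-byte secret + 4-byte CRC)"
```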
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:31.215 07:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:31.215 07:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:31.215 07:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:31.215 07:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:31.216 07:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.216 07:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:31.216 07:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.216 07:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:31.216 07:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:31.216 07:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:31.475 00:15:31.475 07:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:31.475 07:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:31.475 07:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:31.734 07:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:31.734 07:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:31.734 07:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.734 07:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:31.734 07:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.734 07:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:31.734 { 00:15:31.734 "cntlid": 35, 00:15:31.734 "qid": 0, 00:15:31.734 "state": "enabled", 00:15:31.734 "thread": "nvmf_tgt_poll_group_000", 00:15:31.734 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:31.734 "listen_address": { 00:15:31.734 "trtype": "TCP", 00:15:31.734 "adrfam": "IPv4", 00:15:31.734 "traddr": "10.0.0.2", 00:15:31.734 "trsvcid": "4420" 00:15:31.734 }, 00:15:31.734 "peer_address": { 00:15:31.734 "trtype": "TCP", 00:15:31.734 "adrfam": "IPv4", 00:15:31.734 "traddr": "10.0.0.1", 00:15:31.734 "trsvcid": "55228" 00:15:31.734 }, 00:15:31.734 "auth": { 00:15:31.734 "state": "completed", 00:15:31.734 "digest": "sha256", 00:15:31.734 "dhgroup": "ffdhe6144" 00:15:31.734 } 00:15:31.734 } 00:15:31.734 ]' 00:15:31.734 07:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:31.734 07:11:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:31.734 07:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:31.993 07:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:31.993 07:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:31.993 07:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:31.993 07:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:31.993 07:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:31.993 07:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MDBkMWVlYWU0ZjI0YTQ1MjFiMzdlZTIwNDBjYWQzYznoBVG8: --dhchap-ctrl-secret DHHC-1:02:OTBiYzU5ZDljZDkxY2Q0MDFjZDU4OGY0ZDBlMTY0YzU5YzM5NzkxNmVkMzkzNzk4tQX+Ug==: 00:15:31.993 07:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MDBkMWVlYWU0ZjI0YTQ1MjFiMzdlZTIwNDBjYWQzYznoBVG8: --dhchap-ctrl-secret DHHC-1:02:OTBiYzU5ZDljZDkxY2Q0MDFjZDU4OGY0ZDBlMTY0YzU5YzM5NzkxNmVkMzkzNzk4tQX+Ug==: 00:15:32.560 07:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:32.819 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:32.819 07:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:32.819 07:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.819 07:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:32.819 07:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.819 07:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:32.819 07:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:32.819 07:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:32.819 07:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:15:32.819 07:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:32.819 07:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:32.819 07:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:32.819 07:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:32.819 07:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:32.819 07:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:15:32.819 07:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.819 07:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:32.819 07:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.819 07:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:32.819 07:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:32.819 07:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:33.391 00:15:33.391 07:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:33.391 07:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:33.391 07:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:33.391 07:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:33.391 07:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:33.391 07:11:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.391 07:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:33.391 07:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.391 07:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:33.391 { 00:15:33.391 "cntlid": 37, 00:15:33.391 "qid": 0, 00:15:33.391 "state": "enabled", 00:15:33.391 "thread": "nvmf_tgt_poll_group_000", 00:15:33.391 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:33.391 "listen_address": { 00:15:33.391 "trtype": "TCP", 00:15:33.391 "adrfam": "IPv4", 00:15:33.391 "traddr": "10.0.0.2", 00:15:33.391 "trsvcid": "4420" 00:15:33.391 }, 00:15:33.391 "peer_address": { 00:15:33.391 "trtype": "TCP", 00:15:33.391 "adrfam": "IPv4", 00:15:33.391 "traddr": "10.0.0.1", 00:15:33.391 "trsvcid": "55258" 00:15:33.391 }, 00:15:33.391 "auth": { 00:15:33.391 "state": "completed", 00:15:33.391 "digest": "sha256", 00:15:33.391 "dhgroup": "ffdhe6144" 00:15:33.391 } 00:15:33.391 } 00:15:33.391 ]' 00:15:33.391 07:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:33.650 07:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:33.650 07:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:33.650 07:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:33.650 07:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:33.650 07:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:33.650 07:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:33.650 07:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:33.909 07:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTU2NGU3ZmU2MzEwYzMzMjBiZTFkMTQ3YWE4NGRjMDFmYjI0MmQxNGIzNzY1NDJl8zg9sg==: --dhchap-ctrl-secret DHHC-1:01:MGZjZTY1ZTQ1MjYwODZkNmRlMjc0NjRiZGU4MzllNWGLuOVs: 00:15:33.909 07:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YTU2NGU3ZmU2MzEwYzMzMjBiZTFkMTQ3YWE4NGRjMDFmYjI0MmQxNGIzNzY1NDJl8zg9sg==: --dhchap-ctrl-secret DHHC-1:01:MGZjZTY1ZTQ1MjYwODZkNmRlMjc0NjRiZGU4MzllNWGLuOVs: 00:15:34.477 07:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:34.477 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:34.477 07:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:34.477 07:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.477 07:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:34.477 07:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.477 07:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:34.477 07:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:34.477 07:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:34.477 07:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:15:34.477 07:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:34.477 07:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:34.477 07:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:34.477 07:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:34.477 07:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:34.477 07:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:15:34.477 07:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.477 07:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:34.736 07:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.736 07:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:34.736 07:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:34.736 07:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:34.994 00:15:34.994 07:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:34.994 07:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:34.994 07:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:35.268 07:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:35.268 07:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:35.268 07:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.268 07:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:35.268 07:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.268 07:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:35.268 { 00:15:35.268 "cntlid": 39, 00:15:35.268 "qid": 0, 00:15:35.268 "state": "enabled", 00:15:35.268 "thread": "nvmf_tgt_poll_group_000", 00:15:35.268 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:35.268 "listen_address": { 00:15:35.268 "trtype": "TCP", 00:15:35.268 "adrfam": 
"IPv4", 00:15:35.268 "traddr": "10.0.0.2", 00:15:35.268 "trsvcid": "4420" 00:15:35.268 }, 00:15:35.268 "peer_address": { 00:15:35.268 "trtype": "TCP", 00:15:35.268 "adrfam": "IPv4", 00:15:35.268 "traddr": "10.0.0.1", 00:15:35.268 "trsvcid": "55274" 00:15:35.268 }, 00:15:35.268 "auth": { 00:15:35.268 "state": "completed", 00:15:35.268 "digest": "sha256", 00:15:35.268 "dhgroup": "ffdhe6144" 00:15:35.268 } 00:15:35.268 } 00:15:35.268 ]' 00:15:35.268 07:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:35.268 07:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:35.268 07:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:35.268 07:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:35.268 07:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:35.268 07:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:35.268 07:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:35.268 07:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:35.527 07:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OGM4MmQ3NGU2MzQ2YmQ2NWNlY2YyY2MxM2RlOTRmNzE1OTZlYzAxYzFiNzg2NDNiNjgwNjRkYzI0NmVhZjRkOULdqeA=: 00:15:35.527 07:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 
80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:OGM4MmQ3NGU2MzQ2YmQ2NWNlY2YyY2MxM2RlOTRmNzE1OTZlYzAxYzFiNzg2NDNiNjgwNjRkYzI0NmVhZjRkOULdqeA=: 00:15:36.095 07:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:36.095 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:36.095 07:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:36.095 07:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.095 07:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:36.095 07:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.095 07:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:36.095 07:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:36.095 07:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:36.095 07:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:36.353 07:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:15:36.353 07:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:36.353 07:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:36.353 
07:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:36.353 07:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:36.353 07:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:36.353 07:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:36.353 07:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.353 07:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:36.353 07:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.353 07:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:36.353 07:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:36.353 07:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:36.920 00:15:36.920 07:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:36.920 07:11:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:36.920 07:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:36.920 07:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:36.920 07:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:36.920 07:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.920 07:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:36.920 07:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.920 07:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:36.920 { 00:15:36.920 "cntlid": 41, 00:15:36.920 "qid": 0, 00:15:36.920 "state": "enabled", 00:15:36.920 "thread": "nvmf_tgt_poll_group_000", 00:15:36.920 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:36.920 "listen_address": { 00:15:36.920 "trtype": "TCP", 00:15:36.920 "adrfam": "IPv4", 00:15:36.920 "traddr": "10.0.0.2", 00:15:36.920 "trsvcid": "4420" 00:15:36.920 }, 00:15:36.920 "peer_address": { 00:15:36.920 "trtype": "TCP", 00:15:36.920 "adrfam": "IPv4", 00:15:36.920 "traddr": "10.0.0.1", 00:15:36.920 "trsvcid": "55302" 00:15:36.920 }, 00:15:36.920 "auth": { 00:15:36.920 "state": "completed", 00:15:36.920 "digest": "sha256", 00:15:36.920 "dhgroup": "ffdhe8192" 00:15:36.920 } 00:15:36.920 } 00:15:36.920 ]' 00:15:36.920 07:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:37.178 07:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 
== \s\h\a\2\5\6 ]] 00:15:37.178 07:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:37.178 07:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:37.178 07:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:37.178 07:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:37.178 07:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:37.178 07:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:37.437 07:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YzBlYzBkNmMzN2RlYTJkNDgzMjUyYzU1ZDA0NDFjZTY5MjhlNmU4N2VhZmRhYTZjlv9xVQ==: --dhchap-ctrl-secret DHHC-1:03:ODkzZTg5ODQ0MzllMDlkOWZhNmM2ZDgwNjlkYWVmNGQ5ZjIzZWMxYmZiYmI0YmIxYjI5NGRjOWQyNjQzMDVlMmvFeQQ=: 00:15:37.437 07:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YzBlYzBkNmMzN2RlYTJkNDgzMjUyYzU1ZDA0NDFjZTY5MjhlNmU4N2VhZmRhYTZjlv9xVQ==: --dhchap-ctrl-secret DHHC-1:03:ODkzZTg5ODQ0MzllMDlkOWZhNmM2ZDgwNjlkYWVmNGQ5ZjIzZWMxYmZiYmI0YmIxYjI5NGRjOWQyNjQzMDVlMmvFeQQ=: 00:15:38.004 07:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:38.004 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:38.004 07:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:38.004 07:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.004 07:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:38.004 07:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.004 07:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:38.004 07:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:38.004 07:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:38.263 07:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:15:38.263 07:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:38.263 07:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:38.263 07:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:38.263 07:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:38.263 07:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:38.263 07:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:15:38.263 07:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.263 07:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:38.263 07:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.263 07:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:38.263 07:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:38.263 07:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:38.521 00:15:38.781 07:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:38.781 07:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:38.781 07:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:38.781 07:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:38.781 07:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:38.781 07:11:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.781 07:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:38.781 07:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.781 07:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:38.781 { 00:15:38.781 "cntlid": 43, 00:15:38.781 "qid": 0, 00:15:38.781 "state": "enabled", 00:15:38.781 "thread": "nvmf_tgt_poll_group_000", 00:15:38.781 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:38.781 "listen_address": { 00:15:38.781 "trtype": "TCP", 00:15:38.781 "adrfam": "IPv4", 00:15:38.781 "traddr": "10.0.0.2", 00:15:38.781 "trsvcid": "4420" 00:15:38.781 }, 00:15:38.781 "peer_address": { 00:15:38.781 "trtype": "TCP", 00:15:38.781 "adrfam": "IPv4", 00:15:38.781 "traddr": "10.0.0.1", 00:15:38.781 "trsvcid": "55340" 00:15:38.781 }, 00:15:38.781 "auth": { 00:15:38.781 "state": "completed", 00:15:38.781 "digest": "sha256", 00:15:38.781 "dhgroup": "ffdhe8192" 00:15:38.781 } 00:15:38.781 } 00:15:38.781 ]' 00:15:38.781 07:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:38.781 07:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:38.781 07:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:39.041 07:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:39.041 07:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:39.041 07:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:39.041 07:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:39.041 07:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:39.041 07:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MDBkMWVlYWU0ZjI0YTQ1MjFiMzdlZTIwNDBjYWQzYznoBVG8: --dhchap-ctrl-secret DHHC-1:02:OTBiYzU5ZDljZDkxY2Q0MDFjZDU4OGY0ZDBlMTY0YzU5YzM5NzkxNmVkMzkzNzk4tQX+Ug==: 00:15:39.041 07:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MDBkMWVlYWU0ZjI0YTQ1MjFiMzdlZTIwNDBjYWQzYznoBVG8: --dhchap-ctrl-secret DHHC-1:02:OTBiYzU5ZDljZDkxY2Q0MDFjZDU4OGY0ZDBlMTY0YzU5YzM5NzkxNmVkMzkzNzk4tQX+Ug==: 00:15:39.608 07:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:39.608 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:39.608 07:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:39.608 07:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.608 07:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:39.867 07:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.867 07:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:39.867 07:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:39.867 07:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:39.867 07:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:15:39.867 07:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:39.867 07:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:39.867 07:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:39.867 07:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:39.867 07:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:39.867 07:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:39.867 07:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.867 07:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:39.867 07:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.867 07:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:39.867 07:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:39.867 07:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:40.435 00:15:40.435 07:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:40.435 07:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:40.435 07:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:40.694 07:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:40.694 07:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:40.694 07:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.694 07:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:40.694 07:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.694 07:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:40.694 { 00:15:40.694 "cntlid": 45, 00:15:40.694 "qid": 0, 00:15:40.694 "state": "enabled", 00:15:40.694 "thread": "nvmf_tgt_poll_group_000", 00:15:40.694 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:40.694 
"listen_address": { 00:15:40.694 "trtype": "TCP", 00:15:40.694 "adrfam": "IPv4", 00:15:40.694 "traddr": "10.0.0.2", 00:15:40.694 "trsvcid": "4420" 00:15:40.694 }, 00:15:40.694 "peer_address": { 00:15:40.694 "trtype": "TCP", 00:15:40.694 "adrfam": "IPv4", 00:15:40.694 "traddr": "10.0.0.1", 00:15:40.694 "trsvcid": "41948" 00:15:40.694 }, 00:15:40.694 "auth": { 00:15:40.694 "state": "completed", 00:15:40.694 "digest": "sha256", 00:15:40.694 "dhgroup": "ffdhe8192" 00:15:40.694 } 00:15:40.694 } 00:15:40.694 ]' 00:15:40.694 07:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:40.694 07:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:40.694 07:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:40.694 07:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:40.694 07:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:40.694 07:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:40.694 07:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:40.694 07:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:40.953 07:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTU2NGU3ZmU2MzEwYzMzMjBiZTFkMTQ3YWE4NGRjMDFmYjI0MmQxNGIzNzY1NDJl8zg9sg==: --dhchap-ctrl-secret DHHC-1:01:MGZjZTY1ZTQ1MjYwODZkNmRlMjc0NjRiZGU4MzllNWGLuOVs: 00:15:40.953 07:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YTU2NGU3ZmU2MzEwYzMzMjBiZTFkMTQ3YWE4NGRjMDFmYjI0MmQxNGIzNzY1NDJl8zg9sg==: --dhchap-ctrl-secret DHHC-1:01:MGZjZTY1ZTQ1MjYwODZkNmRlMjc0NjRiZGU4MzllNWGLuOVs: 00:15:41.520 07:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:41.520 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:41.520 07:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:41.520 07:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.520 07:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:41.520 07:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.520 07:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:41.520 07:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:41.520 07:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:41.779 07:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:15:41.779 07:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:41.779 07:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha256 00:15:41.779 07:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:41.779 07:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:41.779 07:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:41.779 07:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:15:41.779 07:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.779 07:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:41.779 07:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.779 07:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:41.779 07:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:41.779 07:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:42.346 00:15:42.346 07:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:42.346 07:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
jq -r '.[].name' 00:15:42.346 07:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:42.605 07:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:42.605 07:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:42.605 07:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.605 07:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.605 07:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.605 07:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:42.605 { 00:15:42.605 "cntlid": 47, 00:15:42.605 "qid": 0, 00:15:42.605 "state": "enabled", 00:15:42.605 "thread": "nvmf_tgt_poll_group_000", 00:15:42.605 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:42.605 "listen_address": { 00:15:42.605 "trtype": "TCP", 00:15:42.605 "adrfam": "IPv4", 00:15:42.605 "traddr": "10.0.0.2", 00:15:42.605 "trsvcid": "4420" 00:15:42.605 }, 00:15:42.605 "peer_address": { 00:15:42.605 "trtype": "TCP", 00:15:42.605 "adrfam": "IPv4", 00:15:42.605 "traddr": "10.0.0.1", 00:15:42.605 "trsvcid": "41972" 00:15:42.605 }, 00:15:42.605 "auth": { 00:15:42.605 "state": "completed", 00:15:42.605 "digest": "sha256", 00:15:42.605 "dhgroup": "ffdhe8192" 00:15:42.605 } 00:15:42.605 } 00:15:42.605 ]' 00:15:42.605 07:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:42.605 07:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:42.605 07:11:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:42.605 07:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:42.605 07:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:42.605 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:42.605 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:42.605 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:42.864 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OGM4MmQ3NGU2MzQ2YmQ2NWNlY2YyY2MxM2RlOTRmNzE1OTZlYzAxYzFiNzg2NDNiNjgwNjRkYzI0NmVhZjRkOULdqeA=: 00:15:42.864 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:OGM4MmQ3NGU2MzQ2YmQ2NWNlY2YyY2MxM2RlOTRmNzE1OTZlYzAxYzFiNzg2NDNiNjgwNjRkYzI0NmVhZjRkOULdqeA=: 00:15:43.432 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:43.432 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:43.432 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:43.432 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 
00:15:43.432 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.432 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.432 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:15:43.432 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:43.432 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:43.432 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:43.432 07:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:43.691 07:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:15:43.691 07:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:43.691 07:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:43.691 07:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:43.691 07:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:43.691 07:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:43.691 07:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:43.691 
07:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.691 07:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.691 07:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.691 07:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:43.691 07:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:43.691 07:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:43.949 00:15:43.949 07:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:43.949 07:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:43.949 07:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:43.949 07:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:43.949 07:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:43.949 07:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.949 07:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:44.208 07:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.208 07:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:44.208 { 00:15:44.208 "cntlid": 49, 00:15:44.208 "qid": 0, 00:15:44.208 "state": "enabled", 00:15:44.208 "thread": "nvmf_tgt_poll_group_000", 00:15:44.208 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:44.208 "listen_address": { 00:15:44.208 "trtype": "TCP", 00:15:44.208 "adrfam": "IPv4", 00:15:44.208 "traddr": "10.0.0.2", 00:15:44.208 "trsvcid": "4420" 00:15:44.208 }, 00:15:44.208 "peer_address": { 00:15:44.208 "trtype": "TCP", 00:15:44.208 "adrfam": "IPv4", 00:15:44.208 "traddr": "10.0.0.1", 00:15:44.208 "trsvcid": "42002" 00:15:44.208 }, 00:15:44.208 "auth": { 00:15:44.208 "state": "completed", 00:15:44.208 "digest": "sha384", 00:15:44.208 "dhgroup": "null" 00:15:44.208 } 00:15:44.208 } 00:15:44.208 ]' 00:15:44.208 07:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:44.208 07:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:44.208 07:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:44.208 07:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:44.208 07:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:44.208 07:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:44.208 07:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 
00:15:44.208 07:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:44.466 07:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YzBlYzBkNmMzN2RlYTJkNDgzMjUyYzU1ZDA0NDFjZTY5MjhlNmU4N2VhZmRhYTZjlv9xVQ==: --dhchap-ctrl-secret DHHC-1:03:ODkzZTg5ODQ0MzllMDlkOWZhNmM2ZDgwNjlkYWVmNGQ5ZjIzZWMxYmZiYmI0YmIxYjI5NGRjOWQyNjQzMDVlMmvFeQQ=: 00:15:44.466 07:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YzBlYzBkNmMzN2RlYTJkNDgzMjUyYzU1ZDA0NDFjZTY5MjhlNmU4N2VhZmRhYTZjlv9xVQ==: --dhchap-ctrl-secret DHHC-1:03:ODkzZTg5ODQ0MzllMDlkOWZhNmM2ZDgwNjlkYWVmNGQ5ZjIzZWMxYmZiYmI0YmIxYjI5NGRjOWQyNjQzMDVlMmvFeQQ=: 00:15:45.034 07:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:45.034 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:45.034 07:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:45.034 07:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.034 07:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.034 07:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.034 07:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:45.034 07:11:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:45.034 07:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:45.292 07:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:15:45.292 07:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:45.292 07:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:45.292 07:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:45.292 07:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:45.292 07:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:45.292 07:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:45.292 07:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.292 07:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.292 07:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.292 07:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:45.292 07:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:45.292 07:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:45.550 00:15:45.550 07:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:45.550 07:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:45.550 07:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:45.809 07:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:45.809 07:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:45.809 07:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.809 07:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.809 07:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.809 07:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:45.809 { 00:15:45.809 "cntlid": 51, 00:15:45.809 "qid": 0, 00:15:45.809 "state": "enabled", 00:15:45.809 "thread": "nvmf_tgt_poll_group_000", 00:15:45.809 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:45.809 "listen_address": { 00:15:45.809 "trtype": "TCP", 00:15:45.809 "adrfam": "IPv4", 00:15:45.809 "traddr": "10.0.0.2", 00:15:45.809 "trsvcid": "4420" 00:15:45.809 }, 00:15:45.809 "peer_address": { 00:15:45.809 "trtype": "TCP", 00:15:45.809 "adrfam": "IPv4", 00:15:45.809 "traddr": "10.0.0.1", 00:15:45.809 "trsvcid": "42044" 00:15:45.809 }, 00:15:45.809 "auth": { 00:15:45.809 "state": "completed", 00:15:45.809 "digest": "sha384", 00:15:45.809 "dhgroup": "null" 00:15:45.809 } 00:15:45.809 } 00:15:45.809 ]' 00:15:45.809 07:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:45.809 07:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:45.809 07:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:45.809 07:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:45.809 07:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:45.809 07:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:45.809 07:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:45.809 07:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:46.068 07:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MDBkMWVlYWU0ZjI0YTQ1MjFiMzdlZTIwNDBjYWQzYznoBVG8: --dhchap-ctrl-secret DHHC-1:02:OTBiYzU5ZDljZDkxY2Q0MDFjZDU4OGY0ZDBlMTY0YzU5YzM5NzkxNmVkMzkzNzk4tQX+Ug==: 00:15:46.068 07:11:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MDBkMWVlYWU0ZjI0YTQ1MjFiMzdlZTIwNDBjYWQzYznoBVG8: --dhchap-ctrl-secret DHHC-1:02:OTBiYzU5ZDljZDkxY2Q0MDFjZDU4OGY0ZDBlMTY0YzU5YzM5NzkxNmVkMzkzNzk4tQX+Ug==: 00:15:46.635 07:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:46.635 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:46.635 07:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:46.635 07:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.635 07:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:46.635 07:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.635 07:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:46.635 07:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:46.635 07:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:46.893 07:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:15:46.893 07:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup 
key ckey qpairs 00:15:46.893 07:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:46.893 07:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:46.893 07:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:46.893 07:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:46.893 07:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:46.893 07:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.893 07:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:46.893 07:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.893 07:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:46.893 07:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:46.893 07:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:47.152 00:15:47.152 07:11:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:47.152 07:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:47.152 07:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:47.152 07:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:47.152 07:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:47.411 07:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.411 07:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:47.411 07:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.411 07:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:47.411 { 00:15:47.411 "cntlid": 53, 00:15:47.411 "qid": 0, 00:15:47.411 "state": "enabled", 00:15:47.411 "thread": "nvmf_tgt_poll_group_000", 00:15:47.411 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:47.411 "listen_address": { 00:15:47.411 "trtype": "TCP", 00:15:47.411 "adrfam": "IPv4", 00:15:47.411 "traddr": "10.0.0.2", 00:15:47.411 "trsvcid": "4420" 00:15:47.411 }, 00:15:47.411 "peer_address": { 00:15:47.411 "trtype": "TCP", 00:15:47.411 "adrfam": "IPv4", 00:15:47.411 "traddr": "10.0.0.1", 00:15:47.411 "trsvcid": "42074" 00:15:47.411 }, 00:15:47.411 "auth": { 00:15:47.411 "state": "completed", 00:15:47.411 "digest": "sha384", 00:15:47.411 "dhgroup": "null" 00:15:47.411 } 00:15:47.411 } 00:15:47.411 ]' 00:15:47.411 07:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r 
'.[0].auth.digest' 00:15:47.411 07:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:47.411 07:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:47.411 07:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:47.411 07:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:47.411 07:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:47.411 07:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:47.411 07:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:47.669 07:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTU2NGU3ZmU2MzEwYzMzMjBiZTFkMTQ3YWE4NGRjMDFmYjI0MmQxNGIzNzY1NDJl8zg9sg==: --dhchap-ctrl-secret DHHC-1:01:MGZjZTY1ZTQ1MjYwODZkNmRlMjc0NjRiZGU4MzllNWGLuOVs: 00:15:47.669 07:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YTU2NGU3ZmU2MzEwYzMzMjBiZTFkMTQ3YWE4NGRjMDFmYjI0MmQxNGIzNzY1NDJl8zg9sg==: --dhchap-ctrl-secret DHHC-1:01:MGZjZTY1ZTQ1MjYwODZkNmRlMjc0NjRiZGU4MzllNWGLuOVs: 00:15:48.236 07:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:48.236 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:48.236 07:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:48.236 07:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.236 07:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:48.236 07:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.236 07:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:48.236 07:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:48.236 07:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:48.495 07:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:15:48.495 07:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:48.495 07:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:48.495 07:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:48.495 07:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:48.495 07:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:48.495 07:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:15:48.495 
07:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.495 07:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:48.495 07:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.495 07:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:48.495 07:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:48.495 07:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:48.754 00:15:48.754 07:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:48.754 07:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:48.754 07:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:48.754 07:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:48.754 07:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:48.754 07:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.754 07:11:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:48.754 07:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.754 07:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:48.754 { 00:15:48.754 "cntlid": 55, 00:15:48.754 "qid": 0, 00:15:48.754 "state": "enabled", 00:15:48.754 "thread": "nvmf_tgt_poll_group_000", 00:15:48.754 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:48.754 "listen_address": { 00:15:48.754 "trtype": "TCP", 00:15:48.754 "adrfam": "IPv4", 00:15:48.754 "traddr": "10.0.0.2", 00:15:48.754 "trsvcid": "4420" 00:15:48.754 }, 00:15:48.754 "peer_address": { 00:15:48.754 "trtype": "TCP", 00:15:48.754 "adrfam": "IPv4", 00:15:48.754 "traddr": "10.0.0.1", 00:15:48.754 "trsvcid": "42100" 00:15:48.754 }, 00:15:48.754 "auth": { 00:15:48.754 "state": "completed", 00:15:48.754 "digest": "sha384", 00:15:48.754 "dhgroup": "null" 00:15:48.754 } 00:15:48.754 } 00:15:48.754 ]' 00:15:48.754 07:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:49.013 07:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:49.013 07:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:49.013 07:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:49.013 07:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:49.013 07:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:49.013 07:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:49.013 07:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:49.271 07:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OGM4MmQ3NGU2MzQ2YmQ2NWNlY2YyY2MxM2RlOTRmNzE1OTZlYzAxYzFiNzg2NDNiNjgwNjRkYzI0NmVhZjRkOULdqeA=: 00:15:49.271 07:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:OGM4MmQ3NGU2MzQ2YmQ2NWNlY2YyY2MxM2RlOTRmNzE1OTZlYzAxYzFiNzg2NDNiNjgwNjRkYzI0NmVhZjRkOULdqeA=: 00:15:49.838 07:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:49.838 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:49.838 07:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:49.838 07:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.838 07:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:49.838 07:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.838 07:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:49.838 07:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:49.838 07:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:49.838 07:11:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:49.839 07:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:15:49.839 07:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:49.839 07:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:49.839 07:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:49.839 07:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:49.839 07:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:49.839 07:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:49.839 07:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.839 07:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.097 07:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.097 07:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:50.097 07:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:50.097 07:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:50.097 00:15:50.356 07:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:50.356 07:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:50.356 07:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:50.356 07:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:50.356 07:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:50.356 07:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.356 07:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.356 07:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.356 07:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:50.356 { 00:15:50.356 "cntlid": 57, 00:15:50.356 "qid": 0, 00:15:50.356 "state": "enabled", 00:15:50.356 "thread": "nvmf_tgt_poll_group_000", 00:15:50.356 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:50.356 "listen_address": { 00:15:50.356 "trtype": "TCP", 00:15:50.356 "adrfam": "IPv4", 00:15:50.356 "traddr": "10.0.0.2", 00:15:50.356 
"trsvcid": "4420" 00:15:50.356 }, 00:15:50.356 "peer_address": { 00:15:50.356 "trtype": "TCP", 00:15:50.356 "adrfam": "IPv4", 00:15:50.356 "traddr": "10.0.0.1", 00:15:50.356 "trsvcid": "43590" 00:15:50.356 }, 00:15:50.356 "auth": { 00:15:50.356 "state": "completed", 00:15:50.356 "digest": "sha384", 00:15:50.356 "dhgroup": "ffdhe2048" 00:15:50.356 } 00:15:50.356 } 00:15:50.356 ]' 00:15:50.356 07:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:50.356 07:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:50.615 07:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:50.615 07:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:50.615 07:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:50.615 07:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:50.615 07:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:50.615 07:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:50.873 07:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YzBlYzBkNmMzN2RlYTJkNDgzMjUyYzU1ZDA0NDFjZTY5MjhlNmU4N2VhZmRhYTZjlv9xVQ==: --dhchap-ctrl-secret DHHC-1:03:ODkzZTg5ODQ0MzllMDlkOWZhNmM2ZDgwNjlkYWVmNGQ5ZjIzZWMxYmZiYmI0YmIxYjI5NGRjOWQyNjQzMDVlMmvFeQQ=: 00:15:50.873 07:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YzBlYzBkNmMzN2RlYTJkNDgzMjUyYzU1ZDA0NDFjZTY5MjhlNmU4N2VhZmRhYTZjlv9xVQ==: --dhchap-ctrl-secret DHHC-1:03:ODkzZTg5ODQ0MzllMDlkOWZhNmM2ZDgwNjlkYWVmNGQ5ZjIzZWMxYmZiYmI0YmIxYjI5NGRjOWQyNjQzMDVlMmvFeQQ=: 00:15:51.441 07:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:51.441 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:51.441 07:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:51.441 07:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.441 07:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.441 07:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.441 07:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:51.441 07:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:51.441 07:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:51.441 07:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:15:51.441 07:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:51.441 07:11:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:51.441 07:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:51.441 07:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:51.441 07:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:51.441 07:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:51.441 07:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.441 07:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.441 07:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.441 07:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:51.441 07:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:51.441 07:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:51.700 00:15:51.700 07:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:51.700 07:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:51.700 07:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:51.959 07:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:51.959 07:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:51.959 07:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.959 07:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.959 07:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.959 07:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:51.959 { 00:15:51.959 "cntlid": 59, 00:15:51.959 "qid": 0, 00:15:51.959 "state": "enabled", 00:15:51.959 "thread": "nvmf_tgt_poll_group_000", 00:15:51.959 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:51.959 "listen_address": { 00:15:51.959 "trtype": "TCP", 00:15:51.959 "adrfam": "IPv4", 00:15:51.959 "traddr": "10.0.0.2", 00:15:51.959 "trsvcid": "4420" 00:15:51.959 }, 00:15:51.959 "peer_address": { 00:15:51.959 "trtype": "TCP", 00:15:51.959 "adrfam": "IPv4", 00:15:51.959 "traddr": "10.0.0.1", 00:15:51.959 "trsvcid": "43608" 00:15:51.959 }, 00:15:51.959 "auth": { 00:15:51.959 "state": "completed", 00:15:51.959 "digest": "sha384", 00:15:51.959 "dhgroup": "ffdhe2048" 00:15:51.959 } 00:15:51.959 } 00:15:51.959 ]' 00:15:51.959 07:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:51.959 07:11:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:51.959 07:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:52.217 07:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:52.217 07:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:52.217 07:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:52.217 07:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:52.217 07:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:52.476 07:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MDBkMWVlYWU0ZjI0YTQ1MjFiMzdlZTIwNDBjYWQzYznoBVG8: --dhchap-ctrl-secret DHHC-1:02:OTBiYzU5ZDljZDkxY2Q0MDFjZDU4OGY0ZDBlMTY0YzU5YzM5NzkxNmVkMzkzNzk4tQX+Ug==: 00:15:52.476 07:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MDBkMWVlYWU0ZjI0YTQ1MjFiMzdlZTIwNDBjYWQzYznoBVG8: --dhchap-ctrl-secret DHHC-1:02:OTBiYzU5ZDljZDkxY2Q0MDFjZDU4OGY0ZDBlMTY0YzU5YzM5NzkxNmVkMzkzNzk4tQX+Ug==: 00:15:53.044 07:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:53.044 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:53.044 07:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:53.044 07:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.044 07:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.044 07:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.044 07:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:53.044 07:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:53.044 07:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:53.044 07:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:15:53.044 07:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:53.044 07:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:53.044 07:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:53.044 07:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:53.044 07:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:53.044 07:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:15:53.044 07:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.044 07:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.044 07:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.044 07:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:53.044 07:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:53.044 07:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:53.302 00:15:53.302 07:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:53.302 07:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:53.302 07:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:53.561 07:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:53.561 07:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:53.561 07:11:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.561 07:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.561 07:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.561 07:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:53.562 { 00:15:53.562 "cntlid": 61, 00:15:53.562 "qid": 0, 00:15:53.562 "state": "enabled", 00:15:53.562 "thread": "nvmf_tgt_poll_group_000", 00:15:53.562 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:53.562 "listen_address": { 00:15:53.562 "trtype": "TCP", 00:15:53.562 "adrfam": "IPv4", 00:15:53.562 "traddr": "10.0.0.2", 00:15:53.562 "trsvcid": "4420" 00:15:53.562 }, 00:15:53.562 "peer_address": { 00:15:53.562 "trtype": "TCP", 00:15:53.562 "adrfam": "IPv4", 00:15:53.562 "traddr": "10.0.0.1", 00:15:53.562 "trsvcid": "43626" 00:15:53.562 }, 00:15:53.562 "auth": { 00:15:53.562 "state": "completed", 00:15:53.562 "digest": "sha384", 00:15:53.562 "dhgroup": "ffdhe2048" 00:15:53.562 } 00:15:53.562 } 00:15:53.562 ]' 00:15:53.562 07:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:53.562 07:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:53.562 07:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:53.822 07:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:53.822 07:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:53.822 07:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:53.822 07:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:53.822 07:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:53.822 07:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTU2NGU3ZmU2MzEwYzMzMjBiZTFkMTQ3YWE4NGRjMDFmYjI0MmQxNGIzNzY1NDJl8zg9sg==: --dhchap-ctrl-secret DHHC-1:01:MGZjZTY1ZTQ1MjYwODZkNmRlMjc0NjRiZGU4MzllNWGLuOVs: 00:15:53.822 07:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YTU2NGU3ZmU2MzEwYzMzMjBiZTFkMTQ3YWE4NGRjMDFmYjI0MmQxNGIzNzY1NDJl8zg9sg==: --dhchap-ctrl-secret DHHC-1:01:MGZjZTY1ZTQ1MjYwODZkNmRlMjc0NjRiZGU4MzllNWGLuOVs: 00:15:54.389 07:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:54.648 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:54.648 07:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:54.648 07:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.648 07:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:54.648 07:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.648 07:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:54.648 07:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:54.648 07:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:54.648 07:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:15:54.648 07:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:54.648 07:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:54.648 07:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:54.648 07:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:54.648 07:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:54.648 07:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:15:54.648 07:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.648 07:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:54.648 07:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.648 07:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:54.648 07:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:54.648 07:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:54.906 00:15:54.906 07:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:54.906 07:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:54.906 07:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:55.164 07:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:55.164 07:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:55.164 07:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.164 07:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.164 07:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.164 07:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:55.164 { 00:15:55.164 "cntlid": 63, 00:15:55.164 "qid": 0, 00:15:55.164 "state": "enabled", 00:15:55.164 "thread": "nvmf_tgt_poll_group_000", 00:15:55.164 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:55.164 "listen_address": { 00:15:55.164 "trtype": "TCP", 00:15:55.164 "adrfam": 
"IPv4",
00:15:55.164 "traddr": "10.0.0.2",
00:15:55.164 "trsvcid": "4420"
00:15:55.164 },
00:15:55.164 "peer_address": {
00:15:55.164 "trtype": "TCP",
00:15:55.164 "adrfam": "IPv4",
00:15:55.164 "traddr": "10.0.0.1",
00:15:55.164 "trsvcid": "43662"
00:15:55.164 },
00:15:55.164 "auth": {
00:15:55.164 "state": "completed",
00:15:55.164 "digest": "sha384",
00:15:55.164 "dhgroup": "ffdhe2048"
00:15:55.164 }
00:15:55.164 }
00:15:55.164 ]'
00:15:55.164 07:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:15:55.164 07:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:15:55.164 07:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:15:55.423 07:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:15:55.423 07:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:15:55.423 07:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:15:55.423 07:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:15:55.423 07:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:15:55.423 07:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OGM4MmQ3NGU2MzQ2YmQ2NWNlY2YyY2MxM2RlOTRmNzE1OTZlYzAxYzFiNzg2NDNiNjgwNjRkYzI0NmVhZjRkOULdqeA=:
00:15:55.423 07:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid
80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:OGM4MmQ3NGU2MzQ2YmQ2NWNlY2YyY2MxM2RlOTRmNzE1OTZlYzAxYzFiNzg2NDNiNjgwNjRkYzI0NmVhZjRkOULdqeA=:
00:15:55.990 07:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:15:55.990 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:15:55.990 07:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:15:55.990 07:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:55.990 07:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:56.249 07:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:56.249 07:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:15:56.249 07:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:15:56.249 07:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:15:56.249 07:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:15:56.249 07:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0
00:15:56.249 07:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:15:56.249 07:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:15:56.249
07:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072
00:15:56.249 07:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:15:56.249 07:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:15:56.249 07:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:15:56.249 07:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:56.249 07:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:56.249 07:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:56.249 07:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:15:56.249 07:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:15:56.249 07:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:15:56.508
00:15:56.508 07:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:15:56.508 07:12:01
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:15:56.508 07:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:15:56.766 07:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:15:56.766 07:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:15:56.766 07:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:56.766 07:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:56.766 07:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:56.766 07:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:15:56.766 {
00:15:56.766 "cntlid": 65,
00:15:56.766 "qid": 0,
00:15:56.766 "state": "enabled",
00:15:56.766 "thread": "nvmf_tgt_poll_group_000",
00:15:56.766 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562",
00:15:56.766 "listen_address": {
00:15:56.766 "trtype": "TCP",
00:15:56.766 "adrfam": "IPv4",
00:15:56.766 "traddr": "10.0.0.2",
00:15:56.766 "trsvcid": "4420"
00:15:56.766 },
00:15:56.766 "peer_address": {
00:15:56.766 "trtype": "TCP",
00:15:56.766 "adrfam": "IPv4",
00:15:56.766 "traddr": "10.0.0.1",
00:15:56.766 "trsvcid": "43686"
00:15:56.766 },
00:15:56.766 "auth": {
00:15:56.766 "state": "completed",
00:15:56.766 "digest": "sha384",
00:15:56.766 "dhgroup": "ffdhe3072"
00:15:56.766 }
00:15:56.766 }
00:15:56.766 ]'
00:15:56.766 07:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:15:56.766 07:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384
== \s\h\a\3\8\4 ]]
00:15:56.766 07:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:15:57.024 07:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:15:57.024 07:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:15:57.024 07:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:15:57.024 07:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:15:57.024 07:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:15:57.024 07:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YzBlYzBkNmMzN2RlYTJkNDgzMjUyYzU1ZDA0NDFjZTY5MjhlNmU4N2VhZmRhYTZjlv9xVQ==: --dhchap-ctrl-secret DHHC-1:03:ODkzZTg5ODQ0MzllMDlkOWZhNmM2ZDgwNjlkYWVmNGQ5ZjIzZWMxYmZiYmI0YmIxYjI5NGRjOWQyNjQzMDVlMmvFeQQ=:
00:15:57.024 07:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YzBlYzBkNmMzN2RlYTJkNDgzMjUyYzU1ZDA0NDFjZTY5MjhlNmU4N2VhZmRhYTZjlv9xVQ==: --dhchap-ctrl-secret DHHC-1:03:ODkzZTg5ODQ0MzllMDlkOWZhNmM2ZDgwNjlkYWVmNGQ5ZjIzZWMxYmZiYmI0YmIxYjI5NGRjOWQyNjQzMDVlMmvFeQQ=:
00:15:57.591 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:15:57.591 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:15:57.591 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- #
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:15:57.591 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:57.591 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:57.591 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:57.591 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:15:57.591 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:15:57.591 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:15:57.850 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1
00:15:57.850 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:15:57.850 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:15:57.850 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072
00:15:57.850 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:15:57.850 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:15:57.850 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1
--dhchap-ctrlr-key ckey1
00:15:57.850 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:57.850 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:57.850 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:57.850 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:15:57.850 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:15:57.850 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:15:58.131
00:15:58.131 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:15:58.131 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:15:58.131 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:15:58.462 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:15:58.463 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:15:58.463 07:12:02
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:58.463 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:58.463 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:58.463 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:15:58.463 {
00:15:58.463 "cntlid": 67,
00:15:58.463 "qid": 0,
00:15:58.463 "state": "enabled",
00:15:58.463 "thread": "nvmf_tgt_poll_group_000",
00:15:58.463 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562",
00:15:58.463 "listen_address": {
00:15:58.463 "trtype": "TCP",
00:15:58.463 "adrfam": "IPv4",
00:15:58.463 "traddr": "10.0.0.2",
00:15:58.463 "trsvcid": "4420"
00:15:58.463 },
00:15:58.463 "peer_address": {
00:15:58.463 "trtype": "TCP",
00:15:58.463 "adrfam": "IPv4",
00:15:58.463 "traddr": "10.0.0.1",
00:15:58.463 "trsvcid": "43710"
00:15:58.463 },
00:15:58.463 "auth": {
00:15:58.463 "state": "completed",
00:15:58.463 "digest": "sha384",
00:15:58.463 "dhgroup": "ffdhe3072"
00:15:58.463 }
00:15:58.463 }
00:15:58.463 ]'
00:15:58.463 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:15:58.463 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:15:58.463 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:15:58.463 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:15:58.463 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:15:58.463 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:15:58.463 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target --
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:15:58.463 07:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:15:58.821 07:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MDBkMWVlYWU0ZjI0YTQ1MjFiMzdlZTIwNDBjYWQzYznoBVG8: --dhchap-ctrl-secret DHHC-1:02:OTBiYzU5ZDljZDkxY2Q0MDFjZDU4OGY0ZDBlMTY0YzU5YzM5NzkxNmVkMzkzNzk4tQX+Ug==:
00:15:58.821 07:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MDBkMWVlYWU0ZjI0YTQ1MjFiMzdlZTIwNDBjYWQzYznoBVG8: --dhchap-ctrl-secret DHHC-1:02:OTBiYzU5ZDljZDkxY2Q0MDFjZDU4OGY0ZDBlMTY0YzU5YzM5NzkxNmVkMzkzNzk4tQX+Ug==:
00:15:59.399 07:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:15:59.399 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:15:59.399 07:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:15:59.399 07:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:59.399 07:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:59.399 07:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:59.399 07:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:15:59.399 07:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target --
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:15:59.399 07:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:15:59.658 07:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2
00:15:59.658 07:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:15:59.658 07:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:15:59.658 07:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072
00:15:59.658 07:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:15:59.658 07:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:15:59.658 07:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:15:59.658 07:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:59.658 07:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:59.658 07:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:59.658 07:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:15:59.658 07:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:15:59.658 07:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:15:59.917
00:15:59.917 07:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:15:59.917 07:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:15:59.917 07:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:00.175 07:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:00.175 07:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:00.175 07:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:00.175 07:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:00.175 07:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:00.175 07:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:16:00.175 {
00:16:00.175 "cntlid": 69,
00:16:00.175 "qid": 0,
00:16:00.175 "state": "enabled",
00:16:00.175 "thread": "nvmf_tgt_poll_group_000",
00:16:00.175 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562",
00:16:00.175
"listen_address": {
00:16:00.175 "trtype": "TCP",
00:16:00.175 "adrfam": "IPv4",
00:16:00.175 "traddr": "10.0.0.2",
00:16:00.175 "trsvcid": "4420"
00:16:00.175 },
00:16:00.175 "peer_address": {
00:16:00.175 "trtype": "TCP",
00:16:00.175 "adrfam": "IPv4",
00:16:00.175 "traddr": "10.0.0.1",
00:16:00.175 "trsvcid": "37236"
00:16:00.175 },
00:16:00.175 "auth": {
00:16:00.175 "state": "completed",
00:16:00.175 "digest": "sha384",
00:16:00.175 "dhgroup": "ffdhe3072"
00:16:00.175 }
00:16:00.175 }
00:16:00.175 ]'
00:16:00.175 07:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:16:00.175 07:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:16:00.175 07:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:16:00.175 07:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:16:00.175 07:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:16:00.175 07:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:00.175 07:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:00.175 07:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:00.433 07:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTU2NGU3ZmU2MzEwYzMzMjBiZTFkMTQ3YWE4NGRjMDFmYjI0MmQxNGIzNzY1NDJl8zg9sg==: --dhchap-ctrl-secret DHHC-1:01:MGZjZTY1ZTQ1MjYwODZkNmRlMjc0NjRiZGU4MzllNWGLuOVs:
00:16:00.433 07:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YTU2NGU3ZmU2MzEwYzMzMjBiZTFkMTQ3YWE4NGRjMDFmYjI0MmQxNGIzNzY1NDJl8zg9sg==: --dhchap-ctrl-secret DHHC-1:01:MGZjZTY1ZTQ1MjYwODZkNmRlMjc0NjRiZGU4MzllNWGLuOVs:
00:16:01.000 07:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:01.000 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:01.000 07:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:16:01.000 07:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:01.000 07:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:01.000 07:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:01.000 07:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:16:01.000 07:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:16:01.000 07:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:16:01.259 07:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3
00:16:01.259 07:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:16:01.259 07:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target --
target/auth.sh@67 -- # digest=sha384
00:16:01.259 07:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072
00:16:01.259 07:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:16:01.259 07:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:01.259 07:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3
00:16:01.259 07:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:01.259 07:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:01.259 07:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:01.259 07:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:16:01.259 07:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:16:01.259 07:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:16:01.517
00:16:01.517 07:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:16:01.517 07:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- #
jq -r '.[].name'
00:16:01.517 07:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:01.776 07:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:01.776 07:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:01.776 07:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:01.776 07:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:01.776 07:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:01.776 07:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:16:01.776 {
00:16:01.776 "cntlid": 71,
00:16:01.776 "qid": 0,
00:16:01.776 "state": "enabled",
00:16:01.776 "thread": "nvmf_tgt_poll_group_000",
00:16:01.776 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562",
00:16:01.776 "listen_address": {
00:16:01.776 "trtype": "TCP",
00:16:01.776 "adrfam": "IPv4",
00:16:01.776 "traddr": "10.0.0.2",
00:16:01.776 "trsvcid": "4420"
00:16:01.776 },
00:16:01.776 "peer_address": {
00:16:01.776 "trtype": "TCP",
00:16:01.776 "adrfam": "IPv4",
00:16:01.776 "traddr": "10.0.0.1",
00:16:01.776 "trsvcid": "37262"
00:16:01.776 },
00:16:01.776 "auth": {
00:16:01.776 "state": "completed",
00:16:01.776 "digest": "sha384",
00:16:01.776 "dhgroup": "ffdhe3072"
00:16:01.776 }
00:16:01.776 }
00:16:01.776 ]'
00:16:01.776 07:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:16:01.776 07:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:16:01.776 07:12:06
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:16:01.776 07:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:16:01.776 07:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:16:01.776 07:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:01.776 07:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:01.776 07:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:02.034 07:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OGM4MmQ3NGU2MzQ2YmQ2NWNlY2YyY2MxM2RlOTRmNzE1OTZlYzAxYzFiNzg2NDNiNjgwNjRkYzI0NmVhZjRkOULdqeA=:
00:16:02.035 07:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:OGM4MmQ3NGU2MzQ2YmQ2NWNlY2YyY2MxM2RlOTRmNzE1OTZlYzAxYzFiNzg2NDNiNjgwNjRkYzI0NmVhZjRkOULdqeA=:
00:16:02.603 07:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:02.603 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:02.603 07:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:16:02.603 07:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:02.603 07:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:02.603 07:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:02.603 07:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:16:02.603 07:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:16:02.603 07:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:16:02.603 07:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:16:02.862 07:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0
00:16:02.862 07:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:16:02.862 07:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:16:02.862 07:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096
00:16:02.862 07:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:16:02.862 07:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:02.862 07:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:02.862 07:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- #
xtrace_disable 00:16:02.862 07:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.862 07:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.862 07:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:02.862 07:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:02.862 07:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:03.120 00:16:03.120 07:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:03.120 07:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:03.120 07:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:03.378 07:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:03.378 07:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:03.378 07:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.378 07:12:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:03.378 07:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.378 07:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:03.378 { 00:16:03.378 "cntlid": 73, 00:16:03.378 "qid": 0, 00:16:03.378 "state": "enabled", 00:16:03.378 "thread": "nvmf_tgt_poll_group_000", 00:16:03.378 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:03.378 "listen_address": { 00:16:03.378 "trtype": "TCP", 00:16:03.378 "adrfam": "IPv4", 00:16:03.378 "traddr": "10.0.0.2", 00:16:03.378 "trsvcid": "4420" 00:16:03.378 }, 00:16:03.378 "peer_address": { 00:16:03.378 "trtype": "TCP", 00:16:03.378 "adrfam": "IPv4", 00:16:03.378 "traddr": "10.0.0.1", 00:16:03.378 "trsvcid": "37296" 00:16:03.378 }, 00:16:03.378 "auth": { 00:16:03.378 "state": "completed", 00:16:03.378 "digest": "sha384", 00:16:03.378 "dhgroup": "ffdhe4096" 00:16:03.378 } 00:16:03.378 } 00:16:03.378 ]' 00:16:03.378 07:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:03.378 07:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:03.378 07:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:03.378 07:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:03.378 07:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:03.378 07:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:03.378 07:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:03.378 07:12:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:03.636 07:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YzBlYzBkNmMzN2RlYTJkNDgzMjUyYzU1ZDA0NDFjZTY5MjhlNmU4N2VhZmRhYTZjlv9xVQ==: --dhchap-ctrl-secret DHHC-1:03:ODkzZTg5ODQ0MzllMDlkOWZhNmM2ZDgwNjlkYWVmNGQ5ZjIzZWMxYmZiYmI0YmIxYjI5NGRjOWQyNjQzMDVlMmvFeQQ=: 00:16:03.636 07:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YzBlYzBkNmMzN2RlYTJkNDgzMjUyYzU1ZDA0NDFjZTY5MjhlNmU4N2VhZmRhYTZjlv9xVQ==: --dhchap-ctrl-secret DHHC-1:03:ODkzZTg5ODQ0MzllMDlkOWZhNmM2ZDgwNjlkYWVmNGQ5ZjIzZWMxYmZiYmI0YmIxYjI5NGRjOWQyNjQzMDVlMmvFeQQ=: 00:16:04.203 07:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:04.203 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:04.203 07:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:04.203 07:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.203 07:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.203 07:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.203 07:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:04.203 07:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:04.203 07:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:04.462 07:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:16:04.462 07:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:04.462 07:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:04.462 07:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:04.462 07:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:04.462 07:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:04.462 07:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:04.462 07:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.462 07:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.462 07:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.462 07:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:04.462 07:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:04.462 07:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:04.721 00:16:04.721 07:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:04.721 07:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:04.721 07:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:04.979 07:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:04.979 07:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:04.979 07:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.979 07:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.979 07:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.979 07:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:04.979 { 00:16:04.979 "cntlid": 75, 00:16:04.979 "qid": 0, 00:16:04.979 "state": "enabled", 00:16:04.979 "thread": "nvmf_tgt_poll_group_000", 00:16:04.979 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:04.979 
"listen_address": { 00:16:04.979 "trtype": "TCP", 00:16:04.979 "adrfam": "IPv4", 00:16:04.979 "traddr": "10.0.0.2", 00:16:04.979 "trsvcid": "4420" 00:16:04.979 }, 00:16:04.979 "peer_address": { 00:16:04.979 "trtype": "TCP", 00:16:04.979 "adrfam": "IPv4", 00:16:04.979 "traddr": "10.0.0.1", 00:16:04.979 "trsvcid": "37334" 00:16:04.979 }, 00:16:04.979 "auth": { 00:16:04.979 "state": "completed", 00:16:04.979 "digest": "sha384", 00:16:04.979 "dhgroup": "ffdhe4096" 00:16:04.979 } 00:16:04.979 } 00:16:04.979 ]' 00:16:04.979 07:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:04.979 07:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:04.979 07:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:04.979 07:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:04.979 07:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:05.238 07:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:05.238 07:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:05.238 07:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:05.238 07:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MDBkMWVlYWU0ZjI0YTQ1MjFiMzdlZTIwNDBjYWQzYznoBVG8: --dhchap-ctrl-secret DHHC-1:02:OTBiYzU5ZDljZDkxY2Q0MDFjZDU4OGY0ZDBlMTY0YzU5YzM5NzkxNmVkMzkzNzk4tQX+Ug==: 00:16:05.238 07:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MDBkMWVlYWU0ZjI0YTQ1MjFiMzdlZTIwNDBjYWQzYznoBVG8: --dhchap-ctrl-secret DHHC-1:02:OTBiYzU5ZDljZDkxY2Q0MDFjZDU4OGY0ZDBlMTY0YzU5YzM5NzkxNmVkMzkzNzk4tQX+Ug==: 00:16:05.804 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:06.062 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:06.062 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:06.062 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.062 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.062 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.062 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:06.062 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:06.062 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:06.062 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:16:06.062 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:06.062 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha384 00:16:06.062 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:06.062 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:06.062 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:06.062 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:06.062 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.062 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.062 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.062 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:06.062 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:06.062 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:06.320 00:16:06.320 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 
00:16:06.320 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:06.320 07:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:06.578 07:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:06.578 07:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:06.578 07:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.578 07:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.578 07:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.578 07:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:06.578 { 00:16:06.578 "cntlid": 77, 00:16:06.578 "qid": 0, 00:16:06.578 "state": "enabled", 00:16:06.578 "thread": "nvmf_tgt_poll_group_000", 00:16:06.578 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:06.578 "listen_address": { 00:16:06.578 "trtype": "TCP", 00:16:06.578 "adrfam": "IPv4", 00:16:06.578 "traddr": "10.0.0.2", 00:16:06.578 "trsvcid": "4420" 00:16:06.578 }, 00:16:06.578 "peer_address": { 00:16:06.578 "trtype": "TCP", 00:16:06.578 "adrfam": "IPv4", 00:16:06.578 "traddr": "10.0.0.1", 00:16:06.578 "trsvcid": "37346" 00:16:06.578 }, 00:16:06.578 "auth": { 00:16:06.578 "state": "completed", 00:16:06.578 "digest": "sha384", 00:16:06.578 "dhgroup": "ffdhe4096" 00:16:06.578 } 00:16:06.578 } 00:16:06.578 ]' 00:16:06.578 07:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:06.578 07:12:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:06.578 07:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:06.836 07:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:06.836 07:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:06.836 07:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:06.836 07:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:06.836 07:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:07.094 07:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTU2NGU3ZmU2MzEwYzMzMjBiZTFkMTQ3YWE4NGRjMDFmYjI0MmQxNGIzNzY1NDJl8zg9sg==: --dhchap-ctrl-secret DHHC-1:01:MGZjZTY1ZTQ1MjYwODZkNmRlMjc0NjRiZGU4MzllNWGLuOVs: 00:16:07.094 07:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YTU2NGU3ZmU2MzEwYzMzMjBiZTFkMTQ3YWE4NGRjMDFmYjI0MmQxNGIzNzY1NDJl8zg9sg==: --dhchap-ctrl-secret DHHC-1:01:MGZjZTY1ZTQ1MjYwODZkNmRlMjc0NjRiZGU4MzllNWGLuOVs: 00:16:07.661 07:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:07.661 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:07.661 07:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:07.661 07:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.661 07:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.661 07:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.661 07:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:07.661 07:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:07.661 07:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:07.661 07:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:16:07.661 07:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:07.661 07:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:07.661 07:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:07.661 07:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:07.661 07:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:07.661 07:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:07.661 07:12:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.661 07:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.919 07:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.919 07:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:07.919 07:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:07.919 07:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:08.177 00:16:08.178 07:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:08.178 07:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:08.178 07:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:08.178 07:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:08.178 07:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:08.178 07:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.178 07:12:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.178 07:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.178 07:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:08.178 { 00:16:08.178 "cntlid": 79, 00:16:08.178 "qid": 0, 00:16:08.178 "state": "enabled", 00:16:08.178 "thread": "nvmf_tgt_poll_group_000", 00:16:08.178 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:08.178 "listen_address": { 00:16:08.178 "trtype": "TCP", 00:16:08.178 "adrfam": "IPv4", 00:16:08.178 "traddr": "10.0.0.2", 00:16:08.178 "trsvcid": "4420" 00:16:08.178 }, 00:16:08.178 "peer_address": { 00:16:08.178 "trtype": "TCP", 00:16:08.178 "adrfam": "IPv4", 00:16:08.178 "traddr": "10.0.0.1", 00:16:08.178 "trsvcid": "37376" 00:16:08.178 }, 00:16:08.178 "auth": { 00:16:08.178 "state": "completed", 00:16:08.178 "digest": "sha384", 00:16:08.178 "dhgroup": "ffdhe4096" 00:16:08.178 } 00:16:08.178 } 00:16:08.178 ]' 00:16:08.178 07:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:08.435 07:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:08.435 07:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:08.435 07:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:08.435 07:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:08.435 07:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:08.435 07:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:08.436 07:12:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:08.694 07:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OGM4MmQ3NGU2MzQ2YmQ2NWNlY2YyY2MxM2RlOTRmNzE1OTZlYzAxYzFiNzg2NDNiNjgwNjRkYzI0NmVhZjRkOULdqeA=: 00:16:08.694 07:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:OGM4MmQ3NGU2MzQ2YmQ2NWNlY2YyY2MxM2RlOTRmNzE1OTZlYzAxYzFiNzg2NDNiNjgwNjRkYzI0NmVhZjRkOULdqeA=: 00:16:09.260 07:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:09.260 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:09.260 07:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:09.261 07:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.261 07:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.261 07:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.261 07:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:09.261 07:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:09.261 07:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe6144 00:16:09.261 07:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:09.519 07:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:16:09.519 07:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:09.519 07:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:09.519 07:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:09.519 07:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:09.519 07:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:09.519 07:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:09.519 07:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.519 07:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.519 07:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.519 07:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:09.519 07:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:09.520 07:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:09.778 00:16:09.778 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:09.778 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:09.778 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:10.036 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:10.036 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:10.036 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.036 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.036 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.036 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:10.036 { 00:16:10.036 "cntlid": 81, 00:16:10.036 "qid": 0, 00:16:10.036 "state": "enabled", 00:16:10.036 "thread": "nvmf_tgt_poll_group_000", 00:16:10.036 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:10.036 "listen_address": { 
00:16:10.036 "trtype": "TCP", 00:16:10.036 "adrfam": "IPv4", 00:16:10.036 "traddr": "10.0.0.2", 00:16:10.036 "trsvcid": "4420" 00:16:10.036 }, 00:16:10.036 "peer_address": { 00:16:10.036 "trtype": "TCP", 00:16:10.036 "adrfam": "IPv4", 00:16:10.036 "traddr": "10.0.0.1", 00:16:10.036 "trsvcid": "55974" 00:16:10.036 }, 00:16:10.036 "auth": { 00:16:10.036 "state": "completed", 00:16:10.036 "digest": "sha384", 00:16:10.036 "dhgroup": "ffdhe6144" 00:16:10.036 } 00:16:10.036 } 00:16:10.036 ]' 00:16:10.036 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:10.036 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:10.036 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:10.036 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:10.036 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:10.295 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:10.295 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:10.295 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:10.295 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YzBlYzBkNmMzN2RlYTJkNDgzMjUyYzU1ZDA0NDFjZTY5MjhlNmU4N2VhZmRhYTZjlv9xVQ==: --dhchap-ctrl-secret DHHC-1:03:ODkzZTg5ODQ0MzllMDlkOWZhNmM2ZDgwNjlkYWVmNGQ5ZjIzZWMxYmZiYmI0YmIxYjI5NGRjOWQyNjQzMDVlMmvFeQQ=: 00:16:10.295 07:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YzBlYzBkNmMzN2RlYTJkNDgzMjUyYzU1ZDA0NDFjZTY5MjhlNmU4N2VhZmRhYTZjlv9xVQ==: --dhchap-ctrl-secret DHHC-1:03:ODkzZTg5ODQ0MzllMDlkOWZhNmM2ZDgwNjlkYWVmNGQ5ZjIzZWMxYmZiYmI0YmIxYjI5NGRjOWQyNjQzMDVlMmvFeQQ=: 00:16:10.861 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:10.861 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:10.861 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:10.861 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.861 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.861 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.861 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:10.861 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:10.861 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:11.148 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:16:11.148 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 
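The `connect_authenticate` helper being traced here repeats a fixed RPC sequence for every digest/dhgroup/key combination. A minimal sketch of that sequence follows; the socket path, addresses, and NQNs are taken from this log, but `RPC` is stubbed with `echo` (an assumption for illustration) so the sketch runs without a live SPDK target — in the real test it is `scripts/rpc.py -s /var/tmp/host.sock`.

```shell
#!/bin/sh
# Stubbed RPC wrapper: prints the command instead of invoking scripts/rpc.py.
RPC="echo rpc.py -s /var/tmp/host.sock"

SUBNQN=nqn.2024-03.io.spdk:cnode0
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
keyid=1

# 1. Restrict the host to a single digest/dhgroup pair.
$RPC bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144

# 2. Attach a controller, authenticating with key$keyid (ckey$keyid enables
#    bidirectional authentication of the controller as well).
$RPC bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q "$HOSTNQN" -n "$SUBNQN" -b nvme0 \
    --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"

# 3. Inspect the qpairs on the target side to verify the negotiated
#    digest, dhgroup, and auth state.
$RPC nvmf_subsystem_get_qpairs "$SUBNQN"

# 4. Tear the controller down again before the next combination.
$RPC bdev_nvme_detach_controller nvme0
```

The same four steps recur throughout this log, with only the key index and dhgroup changing between iterations.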
00:16:11.148 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:11.148 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:11.148 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:11.148 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:11.148 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:11.148 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.148 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.148 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.148 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:11.148 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:11.148 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:11.407 00:16:11.407 07:12:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:11.407 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:11.407 07:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:11.665 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:11.665 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:11.665 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.665 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.665 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.665 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:11.665 { 00:16:11.665 "cntlid": 83, 00:16:11.665 "qid": 0, 00:16:11.665 "state": "enabled", 00:16:11.665 "thread": "nvmf_tgt_poll_group_000", 00:16:11.665 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:11.665 "listen_address": { 00:16:11.665 "trtype": "TCP", 00:16:11.665 "adrfam": "IPv4", 00:16:11.665 "traddr": "10.0.0.2", 00:16:11.665 "trsvcid": "4420" 00:16:11.665 }, 00:16:11.665 "peer_address": { 00:16:11.665 "trtype": "TCP", 00:16:11.665 "adrfam": "IPv4", 00:16:11.665 "traddr": "10.0.0.1", 00:16:11.665 "trsvcid": "55996" 00:16:11.665 }, 00:16:11.665 "auth": { 00:16:11.665 "state": "completed", 00:16:11.665 "digest": "sha384", 00:16:11.665 "dhgroup": "ffdhe6144" 00:16:11.665 } 00:16:11.665 } 00:16:11.665 ]' 00:16:11.666 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq 
-r '.[0].auth.digest' 00:16:11.666 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:11.666 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:11.666 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:11.666 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:11.924 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:11.924 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:11.924 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:11.924 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MDBkMWVlYWU0ZjI0YTQ1MjFiMzdlZTIwNDBjYWQzYznoBVG8: --dhchap-ctrl-secret DHHC-1:02:OTBiYzU5ZDljZDkxY2Q0MDFjZDU4OGY0ZDBlMTY0YzU5YzM5NzkxNmVkMzkzNzk4tQX+Ug==: 00:16:11.924 07:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MDBkMWVlYWU0ZjI0YTQ1MjFiMzdlZTIwNDBjYWQzYznoBVG8: --dhchap-ctrl-secret DHHC-1:02:OTBiYzU5ZDljZDkxY2Q0MDFjZDU4OGY0ZDBlMTY0YzU5YzM5NzkxNmVkMzkzNzk4tQX+Ug==: 00:16:12.490 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:12.490 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:12.490 07:12:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:12.490 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.490 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.748 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.748 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:12.748 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:12.748 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:12.748 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:16:12.748 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:12.748 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:12.748 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:12.748 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:12.749 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:12.749 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:12.749 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.749 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.749 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.749 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:12.749 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:12.749 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:13.316 00:16:13.316 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:13.316 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:13.316 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:13.316 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:13.316 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:13.316 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.316 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.316 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.316 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:13.316 { 00:16:13.316 "cntlid": 85, 00:16:13.316 "qid": 0, 00:16:13.316 "state": "enabled", 00:16:13.316 "thread": "nvmf_tgt_poll_group_000", 00:16:13.316 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:13.316 "listen_address": { 00:16:13.316 "trtype": "TCP", 00:16:13.316 "adrfam": "IPv4", 00:16:13.316 "traddr": "10.0.0.2", 00:16:13.316 "trsvcid": "4420" 00:16:13.316 }, 00:16:13.316 "peer_address": { 00:16:13.316 "trtype": "TCP", 00:16:13.316 "adrfam": "IPv4", 00:16:13.316 "traddr": "10.0.0.1", 00:16:13.316 "trsvcid": "56018" 00:16:13.316 }, 00:16:13.316 "auth": { 00:16:13.316 "state": "completed", 00:16:13.316 "digest": "sha384", 00:16:13.316 "dhgroup": "ffdhe6144" 00:16:13.316 } 00:16:13.316 } 00:16:13.316 ]' 00:16:13.316 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:13.573 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:13.573 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:13.573 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:13.573 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:13.573 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:16:13.573 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:13.573 07:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:13.832 07:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTU2NGU3ZmU2MzEwYzMzMjBiZTFkMTQ3YWE4NGRjMDFmYjI0MmQxNGIzNzY1NDJl8zg9sg==: --dhchap-ctrl-secret DHHC-1:01:MGZjZTY1ZTQ1MjYwODZkNmRlMjc0NjRiZGU4MzllNWGLuOVs: 00:16:13.832 07:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YTU2NGU3ZmU2MzEwYzMzMjBiZTFkMTQ3YWE4NGRjMDFmYjI0MmQxNGIzNzY1NDJl8zg9sg==: --dhchap-ctrl-secret DHHC-1:01:MGZjZTY1ZTQ1MjYwODZkNmRlMjc0NjRiZGU4MzllNWGLuOVs: 00:16:14.398 07:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:14.398 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:14.398 07:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:14.398 07:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.398 07:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.398 07:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.398 07:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
00:16:14.398 07:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:14.398 07:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:14.657 07:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:16:14.657 07:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:14.657 07:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:14.657 07:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:14.657 07:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:14.657 07:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:14.657 07:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:14.657 07:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.657 07:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.657 07:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.657 07:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:14.657 07:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp 
-f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:14.657 07:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:14.914 00:16:14.914 07:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:14.914 07:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:14.914 07:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:15.172 07:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:15.172 07:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:15.172 07:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.172 07:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.172 07:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.172 07:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:15.172 { 00:16:15.172 "cntlid": 87, 00:16:15.172 "qid": 0, 00:16:15.172 "state": "enabled", 00:16:15.172 "thread": "nvmf_tgt_poll_group_000", 00:16:15.172 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:15.172 "listen_address": { 00:16:15.172 "trtype": 
"TCP", 00:16:15.172 "adrfam": "IPv4", 00:16:15.172 "traddr": "10.0.0.2", 00:16:15.172 "trsvcid": "4420" 00:16:15.172 }, 00:16:15.172 "peer_address": { 00:16:15.172 "trtype": "TCP", 00:16:15.172 "adrfam": "IPv4", 00:16:15.172 "traddr": "10.0.0.1", 00:16:15.172 "trsvcid": "56034" 00:16:15.172 }, 00:16:15.172 "auth": { 00:16:15.172 "state": "completed", 00:16:15.172 "digest": "sha384", 00:16:15.172 "dhgroup": "ffdhe6144" 00:16:15.172 } 00:16:15.172 } 00:16:15.172 ]' 00:16:15.172 07:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:15.172 07:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:15.172 07:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:15.172 07:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:15.172 07:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:15.172 07:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:15.172 07:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:15.172 07:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:15.431 07:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OGM4MmQ3NGU2MzQ2YmQ2NWNlY2YyY2MxM2RlOTRmNzE1OTZlYzAxYzFiNzg2NDNiNjgwNjRkYzI0NmVhZjRkOULdqeA=: 00:16:15.431 07:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:OGM4MmQ3NGU2MzQ2YmQ2NWNlY2YyY2MxM2RlOTRmNzE1OTZlYzAxYzFiNzg2NDNiNjgwNjRkYzI0NmVhZjRkOULdqeA=: 00:16:15.997 07:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:15.997 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:15.997 07:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:15.997 07:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.997 07:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.997 07:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.997 07:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:15.997 07:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:15.997 07:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:15.997 07:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:16.255 07:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:16:16.255 07:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:16.255 07:12:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:16.255 07:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:16.255 07:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:16.255 07:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:16.255 07:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:16.255 07:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.255 07:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.255 07:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.255 07:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:16.255 07:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:16.255 07:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:16.821 00:16:16.821 07:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:16.822 07:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:16.822 07:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:16.822 07:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:16.822 07:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:16.822 07:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.822 07:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.080 07:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.080 07:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:17.080 { 00:16:17.080 "cntlid": 89, 00:16:17.080 "qid": 0, 00:16:17.080 "state": "enabled", 00:16:17.080 "thread": "nvmf_tgt_poll_group_000", 00:16:17.080 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:17.080 "listen_address": { 00:16:17.080 "trtype": "TCP", 00:16:17.080 "adrfam": "IPv4", 00:16:17.080 "traddr": "10.0.0.2", 00:16:17.080 "trsvcid": "4420" 00:16:17.080 }, 00:16:17.080 "peer_address": { 00:16:17.080 "trtype": "TCP", 00:16:17.080 "adrfam": "IPv4", 00:16:17.080 "traddr": "10.0.0.1", 00:16:17.080 "trsvcid": "56058" 00:16:17.080 }, 00:16:17.080 "auth": { 00:16:17.080 "state": "completed", 00:16:17.080 "digest": "sha384", 00:16:17.080 "dhgroup": "ffdhe8192" 00:16:17.080 } 00:16:17.080 } 00:16:17.080 ]' 00:16:17.080 07:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:17.080 07:12:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:17.080 07:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:17.080 07:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:17.080 07:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:17.080 07:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:17.080 07:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:17.080 07:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:17.338 07:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YzBlYzBkNmMzN2RlYTJkNDgzMjUyYzU1ZDA0NDFjZTY5MjhlNmU4N2VhZmRhYTZjlv9xVQ==: --dhchap-ctrl-secret DHHC-1:03:ODkzZTg5ODQ0MzllMDlkOWZhNmM2ZDgwNjlkYWVmNGQ5ZjIzZWMxYmZiYmI0YmIxYjI5NGRjOWQyNjQzMDVlMmvFeQQ=: 00:16:17.338 07:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YzBlYzBkNmMzN2RlYTJkNDgzMjUyYzU1ZDA0NDFjZTY5MjhlNmU4N2VhZmRhYTZjlv9xVQ==: --dhchap-ctrl-secret DHHC-1:03:ODkzZTg5ODQ0MzllMDlkOWZhNmM2ZDgwNjlkYWVmNGQ5ZjIzZWMxYmZiYmI0YmIxYjI5NGRjOWQyNjQzMDVlMmvFeQQ=: 00:16:17.903 07:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:17.903 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
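Stepping back, the excerpt walks a (dhgroup x key) test matrix for the sha384 digest: for each DH group the host options are reset, then each of keys 0-3 is exercised end to end. The sketch below enumerates that matrix; the full group list is an assumption based on the groups SPDK advertises (this excerpt only reaches ffdhe6144 and ffdhe8192).

```shell
#!/bin/sh
# Enumerate the per-digest test matrix driven by the loops in target/auth.sh.
# Group list is assumed; the log excerpt shows only ffdhe6144 and ffdhe8192.
dhgroups="ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192"
cases=0
for dhgroup in $dhgroups; do
    for keyid in 0 1 2 3; do
        cases=$((cases + 1))
        echo "case $cases: sha384 $dhgroup key$keyid"
        # Per iteration the test: restricts the host to $dhgroup
        # (bdev_nvme_set_options), registers the host NQN with key$keyid
        # (nvmf_subsystem_add_host), attaches and detaches a bdev
        # controller, connects and disconnects with nvme-cli, and finally
        # removes the host again (nvmf_subsystem_remove_host).
    done
done
```

This matrix shape explains the repetitive structure of the log: each `nvme disconnect` line marks the end of one cell before the next key or group begins.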
00:16:17.903 07:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:17.903 07:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.903 07:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.903 07:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.903 07:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:17.903 07:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:17.904 07:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:18.162 07:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:16:18.162 07:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:18.162 07:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:18.162 07:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:18.162 07:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:18.162 07:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:18.162 07:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:18.162 07:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.162 07:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.162 07:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.162 07:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:18.162 07:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:18.162 07:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:18.730 00:16:18.730 07:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:18.730 07:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:18.730 07:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:18.730 07:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:18.730 07:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:18.730 07:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.730 07:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.730 07:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.730 07:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:18.730 { 00:16:18.730 "cntlid": 91, 00:16:18.730 "qid": 0, 00:16:18.730 "state": "enabled", 00:16:18.730 "thread": "nvmf_tgt_poll_group_000", 00:16:18.730 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:18.730 "listen_address": { 00:16:18.730 "trtype": "TCP", 00:16:18.730 "adrfam": "IPv4", 00:16:18.730 "traddr": "10.0.0.2", 00:16:18.730 "trsvcid": "4420" 00:16:18.730 }, 00:16:18.730 "peer_address": { 00:16:18.730 "trtype": "TCP", 00:16:18.730 "adrfam": "IPv4", 00:16:18.730 "traddr": "10.0.0.1", 00:16:18.730 "trsvcid": "56098" 00:16:18.730 }, 00:16:18.730 "auth": { 00:16:18.730 "state": "completed", 00:16:18.730 "digest": "sha384", 00:16:18.730 "dhgroup": "ffdhe8192" 00:16:18.730 } 00:16:18.730 } 00:16:18.730 ]' 00:16:18.730 07:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:18.730 07:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:18.988 07:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:18.988 07:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:18.988 07:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:18.988 07:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:16:18.988 07:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:18.988 07:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:19.247 07:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MDBkMWVlYWU0ZjI0YTQ1MjFiMzdlZTIwNDBjYWQzYznoBVG8: --dhchap-ctrl-secret DHHC-1:02:OTBiYzU5ZDljZDkxY2Q0MDFjZDU4OGY0ZDBlMTY0YzU5YzM5NzkxNmVkMzkzNzk4tQX+Ug==: 00:16:19.247 07:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MDBkMWVlYWU0ZjI0YTQ1MjFiMzdlZTIwNDBjYWQzYznoBVG8: --dhchap-ctrl-secret DHHC-1:02:OTBiYzU5ZDljZDkxY2Q0MDFjZDU4OGY0ZDBlMTY0YzU5YzM5NzkxNmVkMzkzNzk4tQX+Ug==: 00:16:19.814 07:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:19.814 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:19.814 07:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:19.814 07:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.814 07:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.814 07:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.814 07:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
00:16:19.814 07:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:19.814 07:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:19.814 07:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:16:19.814 07:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:19.814 07:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:19.814 07:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:19.814 07:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:19.814 07:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:19.814 07:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:19.814 07:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.814 07:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.814 07:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.814 07:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:19.814 07:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:19.814 07:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:20.380 00:16:20.380 07:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:20.380 07:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:20.380 07:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:20.637 07:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:20.638 07:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:20.638 07:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.638 07:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.638 07:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.638 07:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:20.638 { 00:16:20.638 "cntlid": 93, 00:16:20.638 "qid": 0, 00:16:20.638 "state": "enabled", 00:16:20.638 "thread": "nvmf_tgt_poll_group_000", 00:16:20.638 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:20.638 "listen_address": { 00:16:20.638 "trtype": "TCP", 00:16:20.638 "adrfam": "IPv4", 00:16:20.638 "traddr": "10.0.0.2", 00:16:20.638 "trsvcid": "4420" 00:16:20.638 }, 00:16:20.638 "peer_address": { 00:16:20.638 "trtype": "TCP", 00:16:20.638 "adrfam": "IPv4", 00:16:20.638 "traddr": "10.0.0.1", 00:16:20.638 "trsvcid": "56074" 00:16:20.638 }, 00:16:20.638 "auth": { 00:16:20.638 "state": "completed", 00:16:20.638 "digest": "sha384", 00:16:20.638 "dhgroup": "ffdhe8192" 00:16:20.638 } 00:16:20.638 } 00:16:20.638 ]' 00:16:20.638 07:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:20.638 07:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:20.638 07:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:20.638 07:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:20.638 07:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:20.638 07:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:20.638 07:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:20.638 07:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:20.896 07:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTU2NGU3ZmU2MzEwYzMzMjBiZTFkMTQ3YWE4NGRjMDFmYjI0MmQxNGIzNzY1NDJl8zg9sg==: --dhchap-ctrl-secret DHHC-1:01:MGZjZTY1ZTQ1MjYwODZkNmRlMjc0NjRiZGU4MzllNWGLuOVs: 00:16:20.896 07:12:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YTU2NGU3ZmU2MzEwYzMzMjBiZTFkMTQ3YWE4NGRjMDFmYjI0MmQxNGIzNzY1NDJl8zg9sg==: --dhchap-ctrl-secret DHHC-1:01:MGZjZTY1ZTQ1MjYwODZkNmRlMjc0NjRiZGU4MzllNWGLuOVs: 00:16:21.462 07:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:21.462 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:21.462 07:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:21.462 07:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.462 07:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.462 07:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.462 07:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:21.462 07:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:21.462 07:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:21.721 07:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:16:21.721 07:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local 
digest dhgroup key ckey qpairs 00:16:21.721 07:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:21.721 07:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:21.721 07:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:21.721 07:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:21.721 07:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:21.721 07:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.721 07:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.721 07:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.721 07:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:21.721 07:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:21.721 07:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:22.288 00:16:22.288 07:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:16:22.288 07:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:22.288 07:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:22.547 07:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:22.547 07:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:22.547 07:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.547 07:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.547 07:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.547 07:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:22.547 { 00:16:22.547 "cntlid": 95, 00:16:22.547 "qid": 0, 00:16:22.547 "state": "enabled", 00:16:22.547 "thread": "nvmf_tgt_poll_group_000", 00:16:22.547 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:22.547 "listen_address": { 00:16:22.547 "trtype": "TCP", 00:16:22.547 "adrfam": "IPv4", 00:16:22.547 "traddr": "10.0.0.2", 00:16:22.547 "trsvcid": "4420" 00:16:22.547 }, 00:16:22.547 "peer_address": { 00:16:22.547 "trtype": "TCP", 00:16:22.547 "adrfam": "IPv4", 00:16:22.547 "traddr": "10.0.0.1", 00:16:22.547 "trsvcid": "56118" 00:16:22.547 }, 00:16:22.547 "auth": { 00:16:22.547 "state": "completed", 00:16:22.547 "digest": "sha384", 00:16:22.547 "dhgroup": "ffdhe8192" 00:16:22.547 } 00:16:22.547 } 00:16:22.547 ]' 00:16:22.547 07:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:22.547 07:12:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:22.547 07:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:22.547 07:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:22.547 07:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:22.547 07:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:22.547 07:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:22.547 07:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:22.805 07:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OGM4MmQ3NGU2MzQ2YmQ2NWNlY2YyY2MxM2RlOTRmNzE1OTZlYzAxYzFiNzg2NDNiNjgwNjRkYzI0NmVhZjRkOULdqeA=: 00:16:22.805 07:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:OGM4MmQ3NGU2MzQ2YmQ2NWNlY2YyY2MxM2RlOTRmNzE1OTZlYzAxYzFiNzg2NDNiNjgwNjRkYzI0NmVhZjRkOULdqeA=: 00:16:23.373 07:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:23.373 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:23.373 07:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:23.373 07:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.373 07:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.373 07:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.373 07:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:16:23.373 07:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:23.373 07:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:23.373 07:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:23.373 07:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:23.631 07:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:16:23.631 07:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:23.631 07:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:23.631 07:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:23.631 07:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:23.631 07:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:23.631 07:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 
-- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:23.631 07:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.631 07:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.631 07:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.631 07:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:23.631 07:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:23.631 07:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:23.976 00:16:23.976 07:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:23.976 07:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:23.976 07:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:23.976 07:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:23.976 07:12:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:23.976 07:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.976 07:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.976 07:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.976 07:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:23.976 { 00:16:23.976 "cntlid": 97, 00:16:23.976 "qid": 0, 00:16:23.976 "state": "enabled", 00:16:23.976 "thread": "nvmf_tgt_poll_group_000", 00:16:23.976 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:23.976 "listen_address": { 00:16:23.976 "trtype": "TCP", 00:16:23.976 "adrfam": "IPv4", 00:16:23.976 "traddr": "10.0.0.2", 00:16:23.976 "trsvcid": "4420" 00:16:23.976 }, 00:16:23.976 "peer_address": { 00:16:23.976 "trtype": "TCP", 00:16:23.976 "adrfam": "IPv4", 00:16:23.977 "traddr": "10.0.0.1", 00:16:23.977 "trsvcid": "56142" 00:16:23.977 }, 00:16:23.977 "auth": { 00:16:23.977 "state": "completed", 00:16:23.977 "digest": "sha512", 00:16:23.977 "dhgroup": "null" 00:16:23.977 } 00:16:23.977 } 00:16:23.977 ]' 00:16:23.977 07:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:24.235 07:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:24.235 07:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:24.235 07:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:24.235 07:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:24.235 07:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:24.235 07:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:24.235 07:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:24.493 07:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YzBlYzBkNmMzN2RlYTJkNDgzMjUyYzU1ZDA0NDFjZTY5MjhlNmU4N2VhZmRhYTZjlv9xVQ==: --dhchap-ctrl-secret DHHC-1:03:ODkzZTg5ODQ0MzllMDlkOWZhNmM2ZDgwNjlkYWVmNGQ5ZjIzZWMxYmZiYmI0YmIxYjI5NGRjOWQyNjQzMDVlMmvFeQQ=: 00:16:24.493 07:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YzBlYzBkNmMzN2RlYTJkNDgzMjUyYzU1ZDA0NDFjZTY5MjhlNmU4N2VhZmRhYTZjlv9xVQ==: --dhchap-ctrl-secret DHHC-1:03:ODkzZTg5ODQ0MzllMDlkOWZhNmM2ZDgwNjlkYWVmNGQ5ZjIzZWMxYmZiYmI0YmIxYjI5NGRjOWQyNjQzMDVlMmvFeQQ=: 00:16:25.060 07:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:25.060 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:25.060 07:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:25.060 07:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.060 07:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.060 07:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.060 07:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:25.060 07:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:25.060 07:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:25.060 07:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:16:25.060 07:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:25.060 07:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:25.060 07:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:25.060 07:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:25.060 07:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:25.060 07:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:25.060 07:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.060 07:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.060 07:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.060 07:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:25.060 07:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:25.060 07:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:25.318 00:16:25.318 07:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:25.318 07:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:25.318 07:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:25.576 07:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:25.576 07:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:25.576 07:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.577 07:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.577 07:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.577 07:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:25.577 { 00:16:25.577 "cntlid": 99, 
00:16:25.577 "qid": 0,
00:16:25.577 "state": "enabled",
00:16:25.577 "thread": "nvmf_tgt_poll_group_000",
00:16:25.577 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562",
00:16:25.577 "listen_address": {
00:16:25.577 "trtype": "TCP",
00:16:25.577 "adrfam": "IPv4",
00:16:25.577 "traddr": "10.0.0.2",
00:16:25.577 "trsvcid": "4420"
00:16:25.577 },
00:16:25.577 "peer_address": {
00:16:25.577 "trtype": "TCP",
00:16:25.577 "adrfam": "IPv4",
00:16:25.577 "traddr": "10.0.0.1",
00:16:25.577 "trsvcid": "56174"
00:16:25.577 },
00:16:25.577 "auth": {
00:16:25.577 "state": "completed",
00:16:25.577 "digest": "sha512",
00:16:25.577 "dhgroup": "null"
00:16:25.577 }
00:16:25.577 }
00:16:25.577 ]'
00:16:25.577 07:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:16:25.577 07:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:16:25.577 07:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:16:25.577 07:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:16:25.577 07:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:16:25.835 07:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:25.835 07:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:25.835 07:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:25.835 07:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MDBkMWVlYWU0ZjI0YTQ1MjFiMzdlZTIwNDBjYWQzYznoBVG8: --dhchap-ctrl-secret DHHC-1:02:OTBiYzU5ZDljZDkxY2Q0MDFjZDU4OGY0ZDBlMTY0YzU5YzM5NzkxNmVkMzkzNzk4tQX+Ug==:
00:16:25.835 07:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MDBkMWVlYWU0ZjI0YTQ1MjFiMzdlZTIwNDBjYWQzYznoBVG8: --dhchap-ctrl-secret DHHC-1:02:OTBiYzU5ZDljZDkxY2Q0MDFjZDU4OGY0ZDBlMTY0YzU5YzM5NzkxNmVkMzkzNzk4tQX+Ug==:
00:16:26.401 07:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:26.401 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:26.401 07:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:16:26.401 07:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:26.401 07:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:26.660 07:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:26.660 07:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:16:26.660 07:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:16:26.660 07:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:16:26.660 07:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2
00:16:26.660 07:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:16:26.660 07:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:16:26.660 07:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:16:26.660 07:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:16:26.660 07:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:26.660 07:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:26.660 07:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:26.660 07:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:26.660 07:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:26.660 07:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:26.660 07:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:26.660 07:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:26.918
00:16:26.918 07:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:16:26.918 07:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:16:26.918 07:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:27.177 07:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:27.177 07:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:27.177 07:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:27.177 07:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:27.177 07:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:27.177 07:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:16:27.177 {
00:16:27.177 "cntlid": 101,
00:16:27.177 "qid": 0,
00:16:27.177 "state": "enabled",
00:16:27.177 "thread": "nvmf_tgt_poll_group_000",
00:16:27.177 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562",
00:16:27.177 "listen_address": {
00:16:27.177 "trtype": "TCP",
00:16:27.177 "adrfam": "IPv4",
00:16:27.177 "traddr": "10.0.0.2",
00:16:27.177 "trsvcid": "4420"
00:16:27.177 },
00:16:27.177 "peer_address": {
00:16:27.177 "trtype": "TCP",
00:16:27.177 "adrfam": "IPv4",
00:16:27.177 "traddr": "10.0.0.1",
00:16:27.177 "trsvcid": "56186"
00:16:27.177 },
00:16:27.177 "auth": {
00:16:27.177 "state": "completed",
00:16:27.177 "digest": "sha512",
00:16:27.177 "dhgroup": "null"
00:16:27.177 }
00:16:27.177 }
00:16:27.177 ]'
00:16:27.177 07:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:16:27.177 07:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:16:27.177 07:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:16:27.177 07:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:16:27.177 07:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:16:27.436 07:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:27.436 07:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:27.436 07:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:27.436 07:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTU2NGU3ZmU2MzEwYzMzMjBiZTFkMTQ3YWE4NGRjMDFmYjI0MmQxNGIzNzY1NDJl8zg9sg==: --dhchap-ctrl-secret DHHC-1:01:MGZjZTY1ZTQ1MjYwODZkNmRlMjc0NjRiZGU4MzllNWGLuOVs:
00:16:27.436 07:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YTU2NGU3ZmU2MzEwYzMzMjBiZTFkMTQ3YWE4NGRjMDFmYjI0MmQxNGIzNzY1NDJl8zg9sg==: --dhchap-ctrl-secret DHHC-1:01:MGZjZTY1ZTQ1MjYwODZkNmRlMjc0NjRiZGU4MzllNWGLuOVs:
00:16:28.002 07:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:28.002 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:28.002 07:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:16:28.002 07:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:28.002 07:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:28.002 07:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:28.002 07:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:16:28.002 07:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:16:28.002 07:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:16:28.260 07:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3
00:16:28.260 07:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:16:28.260 07:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:16:28.260 07:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:16:28.260 07:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:16:28.260 07:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:28.260 07:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3
00:16:28.260 07:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:28.260 07:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:28.261 07:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:28.261 07:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:16:28.261 07:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:16:28.261 07:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:16:28.519
00:16:28.519 07:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:16:28.519 07:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:16:28.519 07:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:28.777 07:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:28.777 07:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:28.777 07:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:28.777 07:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:28.777 07:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:28.777 07:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:16:28.777 {
00:16:28.777 "cntlid": 103,
00:16:28.777 "qid": 0,
00:16:28.777 "state": "enabled",
00:16:28.777 "thread": "nvmf_tgt_poll_group_000",
00:16:28.777 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562",
00:16:28.777 "listen_address": {
00:16:28.777 "trtype": "TCP",
00:16:28.777 "adrfam": "IPv4",
00:16:28.777 "traddr": "10.0.0.2",
00:16:28.777 "trsvcid": "4420"
00:16:28.777 },
00:16:28.777 "peer_address": {
00:16:28.777 "trtype": "TCP",
00:16:28.777 "adrfam": "IPv4",
00:16:28.777 "traddr": "10.0.0.1",
00:16:28.777 "trsvcid": "56222"
00:16:28.777 },
00:16:28.777 "auth": {
00:16:28.777 "state": "completed",
00:16:28.777 "digest": "sha512",
00:16:28.777 "dhgroup": "null"
00:16:28.777 }
00:16:28.777 }
00:16:28.777 ]'
00:16:28.777 07:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:16:28.777 07:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:16:28.777 07:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:16:28.777 07:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:16:28.777 07:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:16:29.035 07:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:29.035 07:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:29.035 07:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:29.035 07:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OGM4MmQ3NGU2MzQ2YmQ2NWNlY2YyY2MxM2RlOTRmNzE1OTZlYzAxYzFiNzg2NDNiNjgwNjRkYzI0NmVhZjRkOULdqeA=:
00:16:29.036 07:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:OGM4MmQ3NGU2MzQ2YmQ2NWNlY2YyY2MxM2RlOTRmNzE1OTZlYzAxYzFiNzg2NDNiNjgwNjRkYzI0NmVhZjRkOULdqeA=:
00:16:29.601 07:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:29.601 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:29.602 07:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:16:29.602 07:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:29.602 07:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:29.602 07:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:29.602 07:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:16:29.602 07:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:16:29.602 07:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:16:29.602 07:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:16:29.860 07:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0
00:16:29.861 07:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:16:29.861 07:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:16:29.861 07:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:16:29.861 07:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:16:29.861 07:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:29.861 07:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:29.861 07:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:29.861 07:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:29.861 07:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:29.861 07:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:29.861 07:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:29.861 07:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:30.119
00:16:30.119 07:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:16:30.119 07:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:16:30.119 07:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:30.377 07:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:30.377 07:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:30.377 07:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:30.377 07:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:30.377 07:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:30.377 07:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:16:30.377 {
00:16:30.377 "cntlid": 105,
00:16:30.377 "qid": 0,
00:16:30.377 "state": "enabled",
00:16:30.377 "thread": "nvmf_tgt_poll_group_000",
00:16:30.377 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562",
00:16:30.377 "listen_address": {
00:16:30.377 "trtype": "TCP",
00:16:30.377 "adrfam": "IPv4",
00:16:30.377 "traddr": "10.0.0.2",
00:16:30.377 "trsvcid": "4420"
00:16:30.377 },
00:16:30.377 "peer_address": {
00:16:30.377 "trtype": "TCP",
00:16:30.377 "adrfam": "IPv4",
00:16:30.377 "traddr": "10.0.0.1",
00:16:30.377 "trsvcid": "56088"
00:16:30.377 },
00:16:30.377 "auth": {
00:16:30.377 "state": "completed",
00:16:30.377 "digest": "sha512",
00:16:30.377 "dhgroup": "ffdhe2048"
00:16:30.377 }
00:16:30.377 }
00:16:30.377 ]'
00:16:30.377 07:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:16:30.377 07:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:16:30.377 07:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:16:30.377 07:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:16:30.377 07:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:16:30.635 07:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:30.635 07:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:30.635 07:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:30.635 07:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YzBlYzBkNmMzN2RlYTJkNDgzMjUyYzU1ZDA0NDFjZTY5MjhlNmU4N2VhZmRhYTZjlv9xVQ==: --dhchap-ctrl-secret DHHC-1:03:ODkzZTg5ODQ0MzllMDlkOWZhNmM2ZDgwNjlkYWVmNGQ5ZjIzZWMxYmZiYmI0YmIxYjI5NGRjOWQyNjQzMDVlMmvFeQQ=:
00:16:30.635 07:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YzBlYzBkNmMzN2RlYTJkNDgzMjUyYzU1ZDA0NDFjZTY5MjhlNmU4N2VhZmRhYTZjlv9xVQ==: --dhchap-ctrl-secret DHHC-1:03:ODkzZTg5ODQ0MzllMDlkOWZhNmM2ZDgwNjlkYWVmNGQ5ZjIzZWMxYmZiYmI0YmIxYjI5NGRjOWQyNjQzMDVlMmvFeQQ=:
00:16:31.201 07:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:31.459 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:31.459 07:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:16:31.459 07:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:31.459 07:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:31.459 07:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:31.459 07:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:16:31.459 07:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:16:31.459 07:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:16:31.459 07:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1
00:16:31.460 07:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:16:31.460 07:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:16:31.460 07:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:16:31.460 07:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:16:31.460 07:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:31.460 07:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:31.460 07:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:31.460 07:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:31.460 07:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:31.460 07:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:31.460 07:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:31.460 07:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:31.718
00:16:31.718 07:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:16:31.718 07:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:16:31.718 07:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:31.976 07:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:31.976 07:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:31.976 07:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:31.976 07:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:31.976 07:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:31.976 07:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:16:31.976 {
00:16:31.976 "cntlid": 107,
00:16:31.976 "qid": 0,
00:16:31.976 "state": "enabled",
00:16:31.976 "thread": "nvmf_tgt_poll_group_000",
00:16:31.976 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562",
00:16:31.976 "listen_address": {
00:16:31.977 "trtype": "TCP",
00:16:31.977 "adrfam": "IPv4",
00:16:31.977 "traddr": "10.0.0.2",
00:16:31.977 "trsvcid": "4420"
00:16:31.977 },
00:16:31.977 "peer_address": {
00:16:31.977 "trtype": "TCP",
00:16:31.977 "adrfam": "IPv4",
00:16:31.977 "traddr": "10.0.0.1",
00:16:31.977 "trsvcid": "56112"
00:16:31.977 },
00:16:31.977 "auth": {
00:16:31.977 "state": "completed",
00:16:31.977 "digest": "sha512",
00:16:31.977 "dhgroup": "ffdhe2048"
00:16:31.977 }
00:16:31.977 }
00:16:31.977 ]'
00:16:31.977 07:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:16:31.977 07:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:16:31.977 07:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:16:32.235 07:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:16:32.235 07:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:16:32.235 07:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:32.235 07:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:32.235 07:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:32.494 07:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MDBkMWVlYWU0ZjI0YTQ1MjFiMzdlZTIwNDBjYWQzYznoBVG8: --dhchap-ctrl-secret DHHC-1:02:OTBiYzU5ZDljZDkxY2Q0MDFjZDU4OGY0ZDBlMTY0YzU5YzM5NzkxNmVkMzkzNzk4tQX+Ug==:
00:16:32.494 07:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MDBkMWVlYWU0ZjI0YTQ1MjFiMzdlZTIwNDBjYWQzYznoBVG8: --dhchap-ctrl-secret DHHC-1:02:OTBiYzU5ZDljZDkxY2Q0MDFjZDU4OGY0ZDBlMTY0YzU5YzM5NzkxNmVkMzkzNzk4tQX+Ug==:
00:16:33.061 07:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:33.061 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:33.061 07:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:16:33.061 07:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:33.061 07:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:33.061 07:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:33.061 07:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:16:33.061 07:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:16:33.061 07:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:16:33.061 07:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2
00:16:33.061 07:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:16:33.061 07:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:16:33.061 07:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:16:33.061 07:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:16:33.061 07:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:33.061 07:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:33.061 07:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:33.061 07:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:33.061 07:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:33.061 07:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:33.061 07:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:33.061 07:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:33.319
00:16:33.319 07:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:16:33.319 07:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:16:33.319 07:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:33.578 07:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:33.578 07:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:33.578 07:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:33.578 07:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:33.578 07:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:33.578 07:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:16:33.578 {
00:16:33.578 "cntlid": 109,
00:16:33.578 "qid": 0,
00:16:33.578 "state": "enabled",
00:16:33.578 "thread": "nvmf_tgt_poll_group_000",
00:16:33.578 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562",
00:16:33.578 "listen_address": {
00:16:33.578 "trtype": "TCP",
00:16:33.578 "adrfam": "IPv4",
00:16:33.578 "traddr": "10.0.0.2",
00:16:33.578 "trsvcid": "4420"
00:16:33.578 },
00:16:33.578 "peer_address": {
00:16:33.578 "trtype": "TCP",
00:16:33.578 "adrfam": "IPv4",
00:16:33.578 "traddr": "10.0.0.1",
00:16:33.578 "trsvcid": "56132"
00:16:33.578 },
00:16:33.578 "auth": {
00:16:33.578 "state": "completed",
00:16:33.578 "digest": "sha512",
00:16:33.578 "dhgroup": "ffdhe2048"
00:16:33.578 }
00:16:33.578 }
00:16:33.578 ]'
00:16:33.578 07:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:16:33.578 07:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:16:33.578 07:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:16:33.578 07:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:16:33.836 07:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:16:33.836 07:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:33.836 07:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:33.836 07:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:33.836 07:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTU2NGU3ZmU2MzEwYzMzMjBiZTFkMTQ3YWE4NGRjMDFmYjI0MmQxNGIzNzY1NDJl8zg9sg==: --dhchap-ctrl-secret DHHC-1:01:MGZjZTY1ZTQ1MjYwODZkNmRlMjc0NjRiZGU4MzllNWGLuOVs:
00:16:33.836 07:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YTU2NGU3ZmU2MzEwYzMzMjBiZTFkMTQ3YWE4NGRjMDFmYjI0MmQxNGIzNzY1NDJl8zg9sg==: --dhchap-ctrl-secret DHHC-1:01:MGZjZTY1ZTQ1MjYwODZkNmRlMjc0NjRiZGU4MzllNWGLuOVs:
00:16:34.403 07:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:34.403 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:34.403 07:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:16:34.403 07:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:34.403 07:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:34.661 07:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:34.661 07:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:16:34.661 07:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:16:34.661 07:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:16:34.661 07:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3
00:16:34.661 07:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:16:34.661 07:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:16:34.661 07:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:16:34.661 07:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:16:34.661 07:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:34.661 07:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3
00:16:34.661 07:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:34.661 07:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:34.661 07:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:34.661 07:12:39
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:34.661 07:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:34.661 07:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:34.919 00:16:34.919 07:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:34.919 07:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:34.919 07:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:35.177 07:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:35.177 07:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:35.177 07:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.177 07:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.177 07:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.177 07:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:35.177 { 00:16:35.177 "cntlid": 111, 
00:16:35.177 "qid": 0, 00:16:35.177 "state": "enabled", 00:16:35.177 "thread": "nvmf_tgt_poll_group_000", 00:16:35.177 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:35.177 "listen_address": { 00:16:35.177 "trtype": "TCP", 00:16:35.177 "adrfam": "IPv4", 00:16:35.177 "traddr": "10.0.0.2", 00:16:35.177 "trsvcid": "4420" 00:16:35.177 }, 00:16:35.177 "peer_address": { 00:16:35.177 "trtype": "TCP", 00:16:35.177 "adrfam": "IPv4", 00:16:35.177 "traddr": "10.0.0.1", 00:16:35.177 "trsvcid": "56162" 00:16:35.177 }, 00:16:35.177 "auth": { 00:16:35.177 "state": "completed", 00:16:35.177 "digest": "sha512", 00:16:35.177 "dhgroup": "ffdhe2048" 00:16:35.177 } 00:16:35.177 } 00:16:35.177 ]' 00:16:35.177 07:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:35.177 07:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:35.177 07:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:35.472 07:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:35.472 07:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:35.472 07:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:35.472 07:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:35.472 07:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:35.472 07:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:OGM4MmQ3NGU2MzQ2YmQ2NWNlY2YyY2MxM2RlOTRmNzE1OTZlYzAxYzFiNzg2NDNiNjgwNjRkYzI0NmVhZjRkOULdqeA=: 00:16:35.472 07:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:OGM4MmQ3NGU2MzQ2YmQ2NWNlY2YyY2MxM2RlOTRmNzE1OTZlYzAxYzFiNzg2NDNiNjgwNjRkYzI0NmVhZjRkOULdqeA=: 00:16:36.059 07:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:36.059 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:36.059 07:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:36.059 07:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.059 07:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.059 07:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.059 07:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:36.059 07:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:36.059 07:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:36.059 07:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:36.318 07:12:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:16:36.318 07:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:36.318 07:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:36.318 07:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:36.318 07:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:36.318 07:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:36.318 07:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:36.318 07:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.318 07:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.318 07:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.318 07:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:36.318 07:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:36.318 07:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:36.577 00:16:36.577 07:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:36.577 07:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:36.577 07:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:36.835 07:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:36.835 07:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:36.835 07:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.835 07:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.835 07:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.835 07:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:36.835 { 00:16:36.835 "cntlid": 113, 00:16:36.835 "qid": 0, 00:16:36.835 "state": "enabled", 00:16:36.835 "thread": "nvmf_tgt_poll_group_000", 00:16:36.835 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:36.835 "listen_address": { 00:16:36.835 "trtype": "TCP", 00:16:36.836 "adrfam": "IPv4", 00:16:36.836 "traddr": "10.0.0.2", 00:16:36.836 "trsvcid": "4420" 00:16:36.836 }, 00:16:36.836 "peer_address": { 00:16:36.836 "trtype": "TCP", 00:16:36.836 "adrfam": "IPv4", 00:16:36.836 "traddr": "10.0.0.1", 00:16:36.836 "trsvcid": "56172" 00:16:36.836 }, 00:16:36.836 "auth": { 00:16:36.836 "state": 
"completed", 00:16:36.836 "digest": "sha512", 00:16:36.836 "dhgroup": "ffdhe3072" 00:16:36.836 } 00:16:36.836 } 00:16:36.836 ]' 00:16:36.836 07:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:36.836 07:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:36.836 07:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:36.836 07:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:36.836 07:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:37.094 07:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:37.094 07:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:37.094 07:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:37.094 07:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YzBlYzBkNmMzN2RlYTJkNDgzMjUyYzU1ZDA0NDFjZTY5MjhlNmU4N2VhZmRhYTZjlv9xVQ==: --dhchap-ctrl-secret DHHC-1:03:ODkzZTg5ODQ0MzllMDlkOWZhNmM2ZDgwNjlkYWVmNGQ5ZjIzZWMxYmZiYmI0YmIxYjI5NGRjOWQyNjQzMDVlMmvFeQQ=: 00:16:37.094 07:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YzBlYzBkNmMzN2RlYTJkNDgzMjUyYzU1ZDA0NDFjZTY5MjhlNmU4N2VhZmRhYTZjlv9xVQ==: --dhchap-ctrl-secret 
DHHC-1:03:ODkzZTg5ODQ0MzllMDlkOWZhNmM2ZDgwNjlkYWVmNGQ5ZjIzZWMxYmZiYmI0YmIxYjI5NGRjOWQyNjQzMDVlMmvFeQQ=: 00:16:37.662 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:37.662 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:37.662 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:37.662 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.662 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.920 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.920 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:37.920 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:37.920 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:37.920 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:16:37.920 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:37.920 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:37.920 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:37.920 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- 
# key=key1 00:16:37.920 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:37.920 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:37.920 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.920 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.921 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.921 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:37.921 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:37.921 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:38.179 00:16:38.179 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:38.179 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:38.179 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:38.437 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:38.437 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:38.437 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.437 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.437 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.437 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:38.437 { 00:16:38.437 "cntlid": 115, 00:16:38.437 "qid": 0, 00:16:38.437 "state": "enabled", 00:16:38.437 "thread": "nvmf_tgt_poll_group_000", 00:16:38.437 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:38.437 "listen_address": { 00:16:38.437 "trtype": "TCP", 00:16:38.437 "adrfam": "IPv4", 00:16:38.437 "traddr": "10.0.0.2", 00:16:38.437 "trsvcid": "4420" 00:16:38.437 }, 00:16:38.437 "peer_address": { 00:16:38.437 "trtype": "TCP", 00:16:38.437 "adrfam": "IPv4", 00:16:38.437 "traddr": "10.0.0.1", 00:16:38.437 "trsvcid": "56198" 00:16:38.437 }, 00:16:38.437 "auth": { 00:16:38.437 "state": "completed", 00:16:38.437 "digest": "sha512", 00:16:38.437 "dhgroup": "ffdhe3072" 00:16:38.437 } 00:16:38.437 } 00:16:38.437 ]' 00:16:38.437 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:38.437 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:38.437 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:38.437 07:12:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:38.437 07:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:38.696 07:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:38.696 07:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:38.696 07:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:38.696 07:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MDBkMWVlYWU0ZjI0YTQ1MjFiMzdlZTIwNDBjYWQzYznoBVG8: --dhchap-ctrl-secret DHHC-1:02:OTBiYzU5ZDljZDkxY2Q0MDFjZDU4OGY0ZDBlMTY0YzU5YzM5NzkxNmVkMzkzNzk4tQX+Ug==: 00:16:38.696 07:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MDBkMWVlYWU0ZjI0YTQ1MjFiMzdlZTIwNDBjYWQzYznoBVG8: --dhchap-ctrl-secret DHHC-1:02:OTBiYzU5ZDljZDkxY2Q0MDFjZDU4OGY0ZDBlMTY0YzU5YzM5NzkxNmVkMzkzNzk4tQX+Ug==: 00:16:39.263 07:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:39.263 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:39.263 07:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:39.263 07:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:16:39.263 07:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.522 07:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.522 07:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:39.522 07:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:39.522 07:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:39.522 07:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:16:39.522 07:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:39.522 07:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:39.522 07:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:39.522 07:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:39.522 07:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:39.522 07:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:39.522 07:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.522 07:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:16:39.522 07:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.522 07:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:39.522 07:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:39.522 07:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:39.781 00:16:39.781 07:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:39.781 07:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:39.781 07:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:40.038 07:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:40.038 07:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:40.038 07:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.038 07:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.038 07:12:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.038 07:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:40.038 { 00:16:40.038 "cntlid": 117, 00:16:40.038 "qid": 0, 00:16:40.038 "state": "enabled", 00:16:40.038 "thread": "nvmf_tgt_poll_group_000", 00:16:40.038 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:40.038 "listen_address": { 00:16:40.038 "trtype": "TCP", 00:16:40.038 "adrfam": "IPv4", 00:16:40.038 "traddr": "10.0.0.2", 00:16:40.038 "trsvcid": "4420" 00:16:40.038 }, 00:16:40.038 "peer_address": { 00:16:40.038 "trtype": "TCP", 00:16:40.038 "adrfam": "IPv4", 00:16:40.038 "traddr": "10.0.0.1", 00:16:40.038 "trsvcid": "58030" 00:16:40.038 }, 00:16:40.038 "auth": { 00:16:40.038 "state": "completed", 00:16:40.038 "digest": "sha512", 00:16:40.038 "dhgroup": "ffdhe3072" 00:16:40.038 } 00:16:40.039 } 00:16:40.039 ]' 00:16:40.039 07:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:40.039 07:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:40.039 07:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:40.039 07:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:40.039 07:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:40.297 07:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:40.297 07:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:40.297 07:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:40.297 07:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTU2NGU3ZmU2MzEwYzMzMjBiZTFkMTQ3YWE4NGRjMDFmYjI0MmQxNGIzNzY1NDJl8zg9sg==: --dhchap-ctrl-secret DHHC-1:01:MGZjZTY1ZTQ1MjYwODZkNmRlMjc0NjRiZGU4MzllNWGLuOVs:
00:16:40.297 07:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YTU2NGU3ZmU2MzEwYzMzMjBiZTFkMTQ3YWE4NGRjMDFmYjI0MmQxNGIzNzY1NDJl8zg9sg==: --dhchap-ctrl-secret DHHC-1:01:MGZjZTY1ZTQ1MjYwODZkNmRlMjc0NjRiZGU4MzllNWGLuOVs:
00:16:40.863 07:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:41.121 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:41.121 07:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:16:41.122 07:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:41.122 07:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:41.122 07:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:41.122 07:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:16:41.122 07:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:16:41.122 07:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:16:41.122 07:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3
00:16:41.122 07:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:16:41.122 07:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:16:41.122 07:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072
00:16:41.122 07:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:16:41.122 07:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:41.122 07:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3
00:16:41.122 07:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:41.122 07:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:41.122 07:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:41.122 07:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:16:41.122 07:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:16:41.122 07:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:16:41.380
00:16:41.380 07:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:16:41.380 07:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:16:41.380 07:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:41.638 07:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:41.638 07:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:41.638 07:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:41.638 07:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:41.638 07:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:41.638 07:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:16:41.638 {
00:16:41.638 "cntlid": 119,
00:16:41.638 "qid": 0,
00:16:41.638 "state": "enabled",
00:16:41.638 "thread": "nvmf_tgt_poll_group_000",
00:16:41.638 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562",
00:16:41.638 "listen_address": {
00:16:41.638 "trtype": "TCP",
00:16:41.638 "adrfam": "IPv4",
00:16:41.638 "traddr": "10.0.0.2",
00:16:41.638 "trsvcid": "4420"
00:16:41.638 },
00:16:41.638 "peer_address": {
00:16:41.638 "trtype": "TCP",
00:16:41.638 "adrfam": "IPv4",
00:16:41.638 "traddr": "10.0.0.1",
00:16:41.638 "trsvcid": "58046"
00:16:41.638 },
00:16:41.638 "auth": {
00:16:41.638 "state": "completed",
00:16:41.638 "digest": "sha512",
00:16:41.638 "dhgroup": "ffdhe3072"
00:16:41.638 }
00:16:41.638 }
00:16:41.638 ]'
00:16:41.638 07:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:16:41.638 07:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:16:41.639 07:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:16:41.639 07:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:16:41.896 07:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:16:41.896 07:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:41.896 07:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:41.896 07:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:42.154 07:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OGM4MmQ3NGU2MzQ2YmQ2NWNlY2YyY2MxM2RlOTRmNzE1OTZlYzAxYzFiNzg2NDNiNjgwNjRkYzI0NmVhZjRkOULdqeA=:
00:16:42.154 07:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:OGM4MmQ3NGU2MzQ2YmQ2NWNlY2YyY2MxM2RlOTRmNzE1OTZlYzAxYzFiNzg2NDNiNjgwNjRkYzI0NmVhZjRkOULdqeA=:
00:16:42.722 07:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:42.722 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:42.722 07:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:16:42.722 07:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:42.722 07:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:42.722 07:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:42.722 07:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:16:42.722 07:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:16:42.722 07:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:16:42.722 07:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:16:42.722 07:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0
00:16:42.722 07:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:16:42.722 07:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:16:42.722 07:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096
00:16:42.722 07:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:16:42.722 07:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:42.722 07:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:42.722 07:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:42.722 07:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:42.722 07:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:42.722 07:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:42.722 07:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:42.722 07:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:42.981
00:16:43.239 07:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:16:43.239 07:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:16:43.239 07:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:43.239 07:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:43.239 07:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:43.239 07:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:43.239 07:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:43.239 07:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:43.239 07:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:16:43.239 {
00:16:43.239 "cntlid": 121,
00:16:43.239 "qid": 0,
00:16:43.239 "state": "enabled",
00:16:43.239 "thread": "nvmf_tgt_poll_group_000",
00:16:43.239 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562",
00:16:43.239 "listen_address": {
00:16:43.239 "trtype": "TCP",
00:16:43.239 "adrfam": "IPv4",
00:16:43.239 "traddr": "10.0.0.2",
00:16:43.239 "trsvcid": "4420"
00:16:43.239 },
00:16:43.239 "peer_address": {
00:16:43.239 "trtype": "TCP",
00:16:43.239 "adrfam": "IPv4",
00:16:43.239 "traddr": "10.0.0.1",
00:16:43.239 "trsvcid": "58078"
00:16:43.239 },
00:16:43.239 "auth": {
00:16:43.239 "state": "completed",
00:16:43.239 "digest": "sha512",
00:16:43.239 "dhgroup": "ffdhe4096"
00:16:43.239 }
00:16:43.239 }
00:16:43.239 ]'
00:16:43.239 07:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:16:43.497 07:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:16:43.497 07:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:16:43.497 07:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:16:43.497 07:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:16:43.497 07:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:43.497 07:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:43.497 07:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:43.755 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YzBlYzBkNmMzN2RlYTJkNDgzMjUyYzU1ZDA0NDFjZTY5MjhlNmU4N2VhZmRhYTZjlv9xVQ==: --dhchap-ctrl-secret DHHC-1:03:ODkzZTg5ODQ0MzllMDlkOWZhNmM2ZDgwNjlkYWVmNGQ5ZjIzZWMxYmZiYmI0YmIxYjI5NGRjOWQyNjQzMDVlMmvFeQQ=:
00:16:43.755 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YzBlYzBkNmMzN2RlYTJkNDgzMjUyYzU1ZDA0NDFjZTY5MjhlNmU4N2VhZmRhYTZjlv9xVQ==: --dhchap-ctrl-secret DHHC-1:03:ODkzZTg5ODQ0MzllMDlkOWZhNmM2ZDgwNjlkYWVmNGQ5ZjIzZWMxYmZiYmI0YmIxYjI5NGRjOWQyNjQzMDVlMmvFeQQ=:
00:16:44.323 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:44.323 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:44.323 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:16:44.323 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:44.323 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:44.323 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:44.323 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:16:44.323 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:16:44.323 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:16:44.581 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1
00:16:44.581 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:16:44.581 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:16:44.581 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096
00:16:44.581 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:16:44.581 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:44.581 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:44.581 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:44.581 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:44.581 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:44.581 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:44.581 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:44.581 07:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:44.839
00:16:44.839 07:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:16:44.839 07:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:16:44.839 07:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:44.839 07:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:44.839 07:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:44.839 07:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:44.839 07:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:45.098 07:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:45.098 07:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:16:45.098 {
00:16:45.098 "cntlid": 123,
00:16:45.098 "qid": 0,
00:16:45.098 "state": "enabled",
00:16:45.098 "thread": "nvmf_tgt_poll_group_000",
00:16:45.098 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562",
00:16:45.098 "listen_address": {
00:16:45.098 "trtype": "TCP",
00:16:45.098 "adrfam": "IPv4",
00:16:45.098 "traddr": "10.0.0.2",
00:16:45.098 "trsvcid": "4420"
00:16:45.098 },
00:16:45.098 "peer_address": {
00:16:45.098 "trtype": "TCP",
00:16:45.098 "adrfam": "IPv4",
00:16:45.098 "traddr": "10.0.0.1",
00:16:45.098 "trsvcid": "58116"
00:16:45.098 },
00:16:45.098 "auth": {
00:16:45.098 "state": "completed",
00:16:45.098 "digest": "sha512",
00:16:45.098 "dhgroup": "ffdhe4096"
00:16:45.098 }
00:16:45.098 }
00:16:45.098 ]'
00:16:45.098 07:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:16:45.098 07:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:16:45.098 07:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:16:45.098 07:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:16:45.098 07:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:16:45.098 07:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:45.098 07:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:45.098 07:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:45.357 07:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MDBkMWVlYWU0ZjI0YTQ1MjFiMzdlZTIwNDBjYWQzYznoBVG8: --dhchap-ctrl-secret DHHC-1:02:OTBiYzU5ZDljZDkxY2Q0MDFjZDU4OGY0ZDBlMTY0YzU5YzM5NzkxNmVkMzkzNzk4tQX+Ug==:
00:16:45.357 07:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MDBkMWVlYWU0ZjI0YTQ1MjFiMzdlZTIwNDBjYWQzYznoBVG8: --dhchap-ctrl-secret DHHC-1:02:OTBiYzU5ZDljZDkxY2Q0MDFjZDU4OGY0ZDBlMTY0YzU5YzM5NzkxNmVkMzkzNzk4tQX+Ug==:
00:16:45.922 07:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:45.922 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:45.922 07:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:16:45.922 07:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:45.922 07:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:45.922 07:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:45.922 07:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:16:45.922 07:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:16:45.922 07:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:16:46.181 07:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2
00:16:46.181 07:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:16:46.181 07:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:16:46.181 07:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096
00:16:46.181 07:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:16:46.181 07:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:46.181 07:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:46.181 07:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:46.181 07:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:46.181 07:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:46.181 07:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:46.181 07:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:46.181 07:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:46.439
00:16:46.439 07:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:16:46.439 07:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:46.439 07:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:16:46.698 07:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:46.698 07:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:46.698 07:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:46.698 07:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:46.698 07:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:46.698 07:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:16:46.698 {
00:16:46.698 "cntlid": 125,
00:16:46.698 "qid": 0,
00:16:46.698 "state": "enabled",
00:16:46.698 "thread": "nvmf_tgt_poll_group_000",
00:16:46.698 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562",
00:16:46.698 "listen_address": {
00:16:46.698 "trtype": "TCP",
00:16:46.698 "adrfam": "IPv4",
00:16:46.698 "traddr": "10.0.0.2",
00:16:46.698 "trsvcid": "4420"
00:16:46.698 },
00:16:46.698 "peer_address": {
00:16:46.698 "trtype": "TCP",
00:16:46.698 "adrfam": "IPv4",
00:16:46.698 "traddr": "10.0.0.1",
00:16:46.698 "trsvcid": "58146"
00:16:46.698 },
00:16:46.698 "auth": {
00:16:46.698 "state": "completed",
00:16:46.698 "digest": "sha512",
00:16:46.698 "dhgroup": "ffdhe4096"
00:16:46.698 }
00:16:46.698 }
00:16:46.698 ]'
00:16:46.698 07:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:16:46.698 07:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:16:46.698 07:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:16:46.698 07:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:16:46.698 07:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:16:46.698 07:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:46.698 07:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:46.698 07:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:46.956 07:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTU2NGU3ZmU2MzEwYzMzMjBiZTFkMTQ3YWE4NGRjMDFmYjI0MmQxNGIzNzY1NDJl8zg9sg==: --dhchap-ctrl-secret DHHC-1:01:MGZjZTY1ZTQ1MjYwODZkNmRlMjc0NjRiZGU4MzllNWGLuOVs:
00:16:46.956 07:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YTU2NGU3ZmU2MzEwYzMzMjBiZTFkMTQ3YWE4NGRjMDFmYjI0MmQxNGIzNzY1NDJl8zg9sg==: --dhchap-ctrl-secret DHHC-1:01:MGZjZTY1ZTQ1MjYwODZkNmRlMjc0NjRiZGU4MzllNWGLuOVs:
00:16:47.525 07:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:47.525 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:47.525 07:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:16:47.525 07:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:47.525 07:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:47.525 07:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:47.525 07:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:16:47.525 07:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:16:47.525 07:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:16:47.790 07:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3
00:16:47.790 07:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:16:47.790 07:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:16:47.790 07:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096
00:16:47.790 07:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:16:47.790 07:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:47.790 07:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3
00:16:47.790 07:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:47.790 07:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:47.790 07:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:47.790 07:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:16:47.790 07:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:16:47.791 07:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:16:48.049
00:16:48.049 07:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:16:48.049 07:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:16:48.049 07:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:48.307 07:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:48.307 07:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:48.307 07:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:48.307 07:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:48.307 07:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:48.307 07:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:16:48.307 {
00:16:48.307 "cntlid": 127,
00:16:48.307 "qid": 0,
00:16:48.307 "state": "enabled",
00:16:48.307 "thread": "nvmf_tgt_poll_group_000",
00:16:48.307 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562",
00:16:48.307 "listen_address": {
00:16:48.307 "trtype": "TCP",
00:16:48.307 "adrfam": "IPv4",
00:16:48.307 "traddr": "10.0.0.2",
00:16:48.307 "trsvcid": "4420"
00:16:48.307 },
00:16:48.307 "peer_address": {
00:16:48.307 "trtype": "TCP",
00:16:48.307 "adrfam": "IPv4",
00:16:48.307 "traddr": "10.0.0.1",
00:16:48.307 "trsvcid": "58160"
00:16:48.307 },
00:16:48.307 "auth": {
00:16:48.307 "state": "completed",
00:16:48.307 "digest": "sha512",
00:16:48.307 "dhgroup": "ffdhe4096"
00:16:48.307 }
00:16:48.307 }
00:16:48.307 ]'
00:16:48.307 07:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:16:48.307 07:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:16:48.307 07:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:16:48.307 07:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:16:48.307 07:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:16:48.307 07:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:48.307 07:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:48.307 07:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:48.565 07:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OGM4MmQ3NGU2MzQ2YmQ2NWNlY2YyY2MxM2RlOTRmNzE1OTZlYzAxYzFiNzg2NDNiNjgwNjRkYzI0NmVhZjRkOULdqeA=:
00:16:48.565 07:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:OGM4MmQ3NGU2MzQ2YmQ2NWNlY2YyY2MxM2RlOTRmNzE1OTZlYzAxYzFiNzg2NDNiNjgwNjRkYzI0NmVhZjRkOULdqeA=:
00:16:49.132 07:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:49.132 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:49.132 07:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:16:49.132 07:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:49.132 07:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:49.132 07:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:49.132 07:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:16:49.132 07:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:16:49.132 07:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:16:49.132 07:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:16:49.390 07:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0
00:16:49.390 07:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:16:49.390 07:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:16:49.390 07:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:16:49.390 07:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:16:49.390 07:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:49.390 07:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:49.390 07:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:49.390 07:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set
+x 00:16:49.390 07:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.390 07:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:49.390 07:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:49.391 07:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:49.648 00:16:49.649 07:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:49.649 07:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:49.649 07:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:49.907 07:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:49.907 07:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:49.907 07:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.907 07:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.907 07:12:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.907 07:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:49.907 { 00:16:49.907 "cntlid": 129, 00:16:49.907 "qid": 0, 00:16:49.907 "state": "enabled", 00:16:49.907 "thread": "nvmf_tgt_poll_group_000", 00:16:49.907 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:49.907 "listen_address": { 00:16:49.907 "trtype": "TCP", 00:16:49.907 "adrfam": "IPv4", 00:16:49.907 "traddr": "10.0.0.2", 00:16:49.907 "trsvcid": "4420" 00:16:49.907 }, 00:16:49.907 "peer_address": { 00:16:49.907 "trtype": "TCP", 00:16:49.907 "adrfam": "IPv4", 00:16:49.907 "traddr": "10.0.0.1", 00:16:49.907 "trsvcid": "59392" 00:16:49.907 }, 00:16:49.907 "auth": { 00:16:49.907 "state": "completed", 00:16:49.907 "digest": "sha512", 00:16:49.907 "dhgroup": "ffdhe6144" 00:16:49.907 } 00:16:49.907 } 00:16:49.907 ]' 00:16:49.907 07:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:49.907 07:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:49.907 07:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:50.166 07:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:50.167 07:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:50.167 07:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:50.167 07:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:50.167 07:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:50.425 07:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YzBlYzBkNmMzN2RlYTJkNDgzMjUyYzU1ZDA0NDFjZTY5MjhlNmU4N2VhZmRhYTZjlv9xVQ==: --dhchap-ctrl-secret DHHC-1:03:ODkzZTg5ODQ0MzllMDlkOWZhNmM2ZDgwNjlkYWVmNGQ5ZjIzZWMxYmZiYmI0YmIxYjI5NGRjOWQyNjQzMDVlMmvFeQQ=: 00:16:50.425 07:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YzBlYzBkNmMzN2RlYTJkNDgzMjUyYzU1ZDA0NDFjZTY5MjhlNmU4N2VhZmRhYTZjlv9xVQ==: --dhchap-ctrl-secret DHHC-1:03:ODkzZTg5ODQ0MzllMDlkOWZhNmM2ZDgwNjlkYWVmNGQ5ZjIzZWMxYmZiYmI0YmIxYjI5NGRjOWQyNjQzMDVlMmvFeQQ=: 00:16:50.992 07:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:50.992 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:50.992 07:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:50.992 07:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.992 07:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.992 07:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.993 07:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:50.993 07:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:50.993 07:12:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:50.993 07:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:16:50.993 07:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:50.993 07:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:50.993 07:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:50.993 07:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:50.993 07:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:50.993 07:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:50.993 07:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.993 07:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.993 07:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.993 07:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:50.993 07:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:50.993 07:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:51.561 00:16:51.561 07:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:51.561 07:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:51.561 07:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:51.561 07:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:51.561 07:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:51.561 07:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.561 07:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.561 07:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.561 07:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:51.561 { 00:16:51.561 "cntlid": 131, 00:16:51.561 "qid": 0, 00:16:51.561 "state": "enabled", 00:16:51.561 "thread": "nvmf_tgt_poll_group_000", 00:16:51.561 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:51.561 "listen_address": { 00:16:51.561 "trtype": "TCP", 00:16:51.561 "adrfam": "IPv4", 00:16:51.561 "traddr": "10.0.0.2", 00:16:51.561 
"trsvcid": "4420" 00:16:51.561 }, 00:16:51.561 "peer_address": { 00:16:51.561 "trtype": "TCP", 00:16:51.561 "adrfam": "IPv4", 00:16:51.561 "traddr": "10.0.0.1", 00:16:51.561 "trsvcid": "59426" 00:16:51.561 }, 00:16:51.561 "auth": { 00:16:51.561 "state": "completed", 00:16:51.561 "digest": "sha512", 00:16:51.561 "dhgroup": "ffdhe6144" 00:16:51.561 } 00:16:51.561 } 00:16:51.561 ]' 00:16:51.561 07:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:51.819 07:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:51.819 07:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:51.819 07:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:51.819 07:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:51.819 07:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:51.819 07:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:51.819 07:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:52.078 07:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MDBkMWVlYWU0ZjI0YTQ1MjFiMzdlZTIwNDBjYWQzYznoBVG8: --dhchap-ctrl-secret DHHC-1:02:OTBiYzU5ZDljZDkxY2Q0MDFjZDU4OGY0ZDBlMTY0YzU5YzM5NzkxNmVkMzkzNzk4tQX+Ug==: 00:16:52.078 07:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 
80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MDBkMWVlYWU0ZjI0YTQ1MjFiMzdlZTIwNDBjYWQzYznoBVG8: --dhchap-ctrl-secret DHHC-1:02:OTBiYzU5ZDljZDkxY2Q0MDFjZDU4OGY0ZDBlMTY0YzU5YzM5NzkxNmVkMzkzNzk4tQX+Ug==: 00:16:52.649 07:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:52.649 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:52.649 07:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:52.649 07:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.649 07:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.649 07:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.649 07:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:52.649 07:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:52.649 07:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:52.909 07:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:16:52.909 07:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:52.909 07:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:52.909 07:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:52.909 07:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:52.909 07:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:52.909 07:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:52.909 07:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.909 07:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.909 07:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.909 07:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:52.909 07:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:52.909 07:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:53.168 00:16:53.168 07:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:53.168 07:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r 
'.[].name' 00:16:53.168 07:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:53.427 07:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:53.427 07:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:53.427 07:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.427 07:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.427 07:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.427 07:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:53.427 { 00:16:53.427 "cntlid": 133, 00:16:53.427 "qid": 0, 00:16:53.427 "state": "enabled", 00:16:53.427 "thread": "nvmf_tgt_poll_group_000", 00:16:53.427 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:53.427 "listen_address": { 00:16:53.427 "trtype": "TCP", 00:16:53.427 "adrfam": "IPv4", 00:16:53.427 "traddr": "10.0.0.2", 00:16:53.427 "trsvcid": "4420" 00:16:53.427 }, 00:16:53.427 "peer_address": { 00:16:53.427 "trtype": "TCP", 00:16:53.427 "adrfam": "IPv4", 00:16:53.427 "traddr": "10.0.0.1", 00:16:53.427 "trsvcid": "59458" 00:16:53.427 }, 00:16:53.427 "auth": { 00:16:53.427 "state": "completed", 00:16:53.427 "digest": "sha512", 00:16:53.427 "dhgroup": "ffdhe6144" 00:16:53.427 } 00:16:53.427 } 00:16:53.427 ]' 00:16:53.427 07:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:53.427 07:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:53.427 07:12:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:53.427 07:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:53.427 07:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:53.427 07:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:53.427 07:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:53.427 07:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:53.686 07:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTU2NGU3ZmU2MzEwYzMzMjBiZTFkMTQ3YWE4NGRjMDFmYjI0MmQxNGIzNzY1NDJl8zg9sg==: --dhchap-ctrl-secret DHHC-1:01:MGZjZTY1ZTQ1MjYwODZkNmRlMjc0NjRiZGU4MzllNWGLuOVs: 00:16:53.686 07:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YTU2NGU3ZmU2MzEwYzMzMjBiZTFkMTQ3YWE4NGRjMDFmYjI0MmQxNGIzNzY1NDJl8zg9sg==: --dhchap-ctrl-secret DHHC-1:01:MGZjZTY1ZTQ1MjYwODZkNmRlMjc0NjRiZGU4MzllNWGLuOVs: 00:16:54.254 07:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:54.254 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:54.254 07:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:54.254 07:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.254 07:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.254 07:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.254 07:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:54.254 07:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:54.254 07:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:54.513 07:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:16:54.513 07:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:54.513 07:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:54.513 07:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:54.513 07:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:54.513 07:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:54.513 07:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:54.513 07:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.513 07:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.513 07:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.513 07:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:54.513 07:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:54.513 07:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:54.773 00:16:54.773 07:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:54.773 07:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:54.773 07:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:55.032 07:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:55.032 07:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:55.032 07:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.032 07:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:16:55.032 07:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.032 07:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:55.032 { 00:16:55.032 "cntlid": 135, 00:16:55.032 "qid": 0, 00:16:55.032 "state": "enabled", 00:16:55.032 "thread": "nvmf_tgt_poll_group_000", 00:16:55.032 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:55.032 "listen_address": { 00:16:55.032 "trtype": "TCP", 00:16:55.032 "adrfam": "IPv4", 00:16:55.032 "traddr": "10.0.0.2", 00:16:55.032 "trsvcid": "4420" 00:16:55.032 }, 00:16:55.032 "peer_address": { 00:16:55.032 "trtype": "TCP", 00:16:55.032 "adrfam": "IPv4", 00:16:55.032 "traddr": "10.0.0.1", 00:16:55.032 "trsvcid": "59490" 00:16:55.032 }, 00:16:55.032 "auth": { 00:16:55.032 "state": "completed", 00:16:55.032 "digest": "sha512", 00:16:55.032 "dhgroup": "ffdhe6144" 00:16:55.032 } 00:16:55.032 } 00:16:55.032 ]' 00:16:55.032 07:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:55.032 07:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:55.032 07:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:55.291 07:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:55.291 07:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:55.291 07:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:55.291 07:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:55.292 07:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:55.551 07:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OGM4MmQ3NGU2MzQ2YmQ2NWNlY2YyY2MxM2RlOTRmNzE1OTZlYzAxYzFiNzg2NDNiNjgwNjRkYzI0NmVhZjRkOULdqeA=: 00:16:55.551 07:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:OGM4MmQ3NGU2MzQ2YmQ2NWNlY2YyY2MxM2RlOTRmNzE1OTZlYzAxYzFiNzg2NDNiNjgwNjRkYzI0NmVhZjRkOULdqeA=: 00:16:56.119 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:56.119 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:56.119 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:56.119 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.119 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.119 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.119 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:56.119 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:56.119 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:56.119 07:13:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:56.119 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:16:56.119 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:56.119 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:56.378 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:56.378 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:56.378 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:56.378 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:56.378 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.378 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.378 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.378 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:56.378 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:56.378 07:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:56.637 00:16:56.637 07:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:56.637 07:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:56.637 07:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:56.896 07:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:56.896 07:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:56.896 07:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.896 07:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.896 07:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.896 07:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:56.896 { 00:16:56.896 "cntlid": 137, 00:16:56.896 "qid": 0, 00:16:56.896 "state": "enabled", 00:16:56.896 "thread": "nvmf_tgt_poll_group_000", 00:16:56.896 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:56.896 "listen_address": { 00:16:56.896 "trtype": "TCP", 00:16:56.896 "adrfam": "IPv4", 00:16:56.896 "traddr": "10.0.0.2", 00:16:56.896 
"trsvcid": "4420" 00:16:56.896 }, 00:16:56.896 "peer_address": { 00:16:56.896 "trtype": "TCP", 00:16:56.896 "adrfam": "IPv4", 00:16:56.896 "traddr": "10.0.0.1", 00:16:56.896 "trsvcid": "59522" 00:16:56.896 }, 00:16:56.896 "auth": { 00:16:56.896 "state": "completed", 00:16:56.896 "digest": "sha512", 00:16:56.896 "dhgroup": "ffdhe8192" 00:16:56.896 } 00:16:56.896 } 00:16:56.896 ]' 00:16:56.896 07:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:56.896 07:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:56.896 07:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:57.155 07:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:57.155 07:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:57.155 07:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:57.155 07:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:57.155 07:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:57.414 07:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YzBlYzBkNmMzN2RlYTJkNDgzMjUyYzU1ZDA0NDFjZTY5MjhlNmU4N2VhZmRhYTZjlv9xVQ==: --dhchap-ctrl-secret DHHC-1:03:ODkzZTg5ODQ0MzllMDlkOWZhNmM2ZDgwNjlkYWVmNGQ5ZjIzZWMxYmZiYmI0YmIxYjI5NGRjOWQyNjQzMDVlMmvFeQQ=: 00:16:57.414 07:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YzBlYzBkNmMzN2RlYTJkNDgzMjUyYzU1ZDA0NDFjZTY5MjhlNmU4N2VhZmRhYTZjlv9xVQ==: --dhchap-ctrl-secret DHHC-1:03:ODkzZTg5ODQ0MzllMDlkOWZhNmM2ZDgwNjlkYWVmNGQ5ZjIzZWMxYmZiYmI0YmIxYjI5NGRjOWQyNjQzMDVlMmvFeQQ=: 00:16:57.984 07:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:57.984 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:57.984 07:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:57.984 07:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.984 07:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.984 07:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.984 07:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:57.984 07:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:57.984 07:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:57.984 07:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:16:57.984 07:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:57.984 07:13:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:57.984 07:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:57.984 07:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:57.984 07:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:57.984 07:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:57.984 07:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.984 07:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.984 07:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.984 07:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:57.984 07:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:57.984 07:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:58.552 00:16:58.552 07:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:58.552 07:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:58.552 07:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:58.812 07:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:58.812 07:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:58.812 07:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.812 07:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.812 07:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.812 07:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:58.812 { 00:16:58.812 "cntlid": 139, 00:16:58.812 "qid": 0, 00:16:58.812 "state": "enabled", 00:16:58.812 "thread": "nvmf_tgt_poll_group_000", 00:16:58.812 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:58.812 "listen_address": { 00:16:58.812 "trtype": "TCP", 00:16:58.812 "adrfam": "IPv4", 00:16:58.812 "traddr": "10.0.0.2", 00:16:58.812 "trsvcid": "4420" 00:16:58.812 }, 00:16:58.812 "peer_address": { 00:16:58.812 "trtype": "TCP", 00:16:58.812 "adrfam": "IPv4", 00:16:58.812 "traddr": "10.0.0.1", 00:16:58.812 "trsvcid": "59550" 00:16:58.812 }, 00:16:58.812 "auth": { 00:16:58.812 "state": "completed", 00:16:58.812 "digest": "sha512", 00:16:58.812 "dhgroup": "ffdhe8192" 00:16:58.812 } 00:16:58.812 } 00:16:58.812 ]' 00:16:58.812 07:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:58.812 07:13:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:58.812 07:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:58.812 07:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:58.812 07:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:58.812 07:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:58.812 07:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:58.812 07:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:59.071 07:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MDBkMWVlYWU0ZjI0YTQ1MjFiMzdlZTIwNDBjYWQzYznoBVG8: --dhchap-ctrl-secret DHHC-1:02:OTBiYzU5ZDljZDkxY2Q0MDFjZDU4OGY0ZDBlMTY0YzU5YzM5NzkxNmVkMzkzNzk4tQX+Ug==: 00:16:59.071 07:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MDBkMWVlYWU0ZjI0YTQ1MjFiMzdlZTIwNDBjYWQzYznoBVG8: --dhchap-ctrl-secret DHHC-1:02:OTBiYzU5ZDljZDkxY2Q0MDFjZDU4OGY0ZDBlMTY0YzU5YzM5NzkxNmVkMzkzNzk4tQX+Ug==: 00:16:59.638 07:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:59.638 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:59.638 07:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:59.638 07:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.638 07:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.638 07:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.638 07:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:59.638 07:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:59.638 07:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:59.910 07:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:16:59.910 07:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:59.910 07:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:59.910 07:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:59.910 07:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:59.910 07:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:59.910 07:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:16:59.910 07:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.910 07:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.910 07:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.910 07:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:59.910 07:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:59.910 07:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:00.572 00:17:00.572 07:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:00.572 07:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:00.572 07:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:00.572 07:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:00.572 07:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:00.572 07:13:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.572 07:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.572 07:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.572 07:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:00.572 { 00:17:00.572 "cntlid": 141, 00:17:00.572 "qid": 0, 00:17:00.572 "state": "enabled", 00:17:00.572 "thread": "nvmf_tgt_poll_group_000", 00:17:00.572 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:00.572 "listen_address": { 00:17:00.572 "trtype": "TCP", 00:17:00.572 "adrfam": "IPv4", 00:17:00.572 "traddr": "10.0.0.2", 00:17:00.572 "trsvcid": "4420" 00:17:00.572 }, 00:17:00.572 "peer_address": { 00:17:00.572 "trtype": "TCP", 00:17:00.572 "adrfam": "IPv4", 00:17:00.572 "traddr": "10.0.0.1", 00:17:00.572 "trsvcid": "37240" 00:17:00.572 }, 00:17:00.572 "auth": { 00:17:00.572 "state": "completed", 00:17:00.572 "digest": "sha512", 00:17:00.572 "dhgroup": "ffdhe8192" 00:17:00.572 } 00:17:00.572 } 00:17:00.572 ]' 00:17:00.572 07:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:00.572 07:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:00.572 07:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:00.572 07:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:00.572 07:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:00.831 07:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:00.831 07:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:00.831 07:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:00.831 07:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTU2NGU3ZmU2MzEwYzMzMjBiZTFkMTQ3YWE4NGRjMDFmYjI0MmQxNGIzNzY1NDJl8zg9sg==: --dhchap-ctrl-secret DHHC-1:01:MGZjZTY1ZTQ1MjYwODZkNmRlMjc0NjRiZGU4MzllNWGLuOVs: 00:17:00.831 07:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YTU2NGU3ZmU2MzEwYzMzMjBiZTFkMTQ3YWE4NGRjMDFmYjI0MmQxNGIzNzY1NDJl8zg9sg==: --dhchap-ctrl-secret DHHC-1:01:MGZjZTY1ZTQ1MjYwODZkNmRlMjc0NjRiZGU4MzllNWGLuOVs: 00:17:01.398 07:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:01.398 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:01.398 07:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:01.398 07:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.398 07:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.657 07:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.657 07:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:01.657 07:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:01.657 07:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:01.657 07:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:17:01.658 07:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:01.658 07:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:01.658 07:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:01.658 07:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:01.658 07:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:01.658 07:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:01.658 07:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.658 07:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.658 07:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.658 07:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:01.658 07:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:01.658 07:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:02.225 00:17:02.225 07:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:02.225 07:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:02.225 07:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:02.484 07:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:02.484 07:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:02.484 07:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.484 07:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.484 07:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.484 07:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:02.484 { 00:17:02.484 "cntlid": 143, 00:17:02.484 "qid": 0, 00:17:02.484 "state": "enabled", 00:17:02.484 "thread": "nvmf_tgt_poll_group_000", 00:17:02.484 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:02.484 "listen_address": { 00:17:02.484 "trtype": "TCP", 00:17:02.484 "adrfam": 
"IPv4", 00:17:02.484 "traddr": "10.0.0.2", 00:17:02.484 "trsvcid": "4420" 00:17:02.484 }, 00:17:02.484 "peer_address": { 00:17:02.484 "trtype": "TCP", 00:17:02.484 "adrfam": "IPv4", 00:17:02.485 "traddr": "10.0.0.1", 00:17:02.485 "trsvcid": "37282" 00:17:02.485 }, 00:17:02.485 "auth": { 00:17:02.485 "state": "completed", 00:17:02.485 "digest": "sha512", 00:17:02.485 "dhgroup": "ffdhe8192" 00:17:02.485 } 00:17:02.485 } 00:17:02.485 ]' 00:17:02.485 07:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:02.485 07:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:02.485 07:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:02.485 07:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:02.485 07:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:02.485 07:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:02.485 07:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:02.485 07:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:02.744 07:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OGM4MmQ3NGU2MzQ2YmQ2NWNlY2YyY2MxM2RlOTRmNzE1OTZlYzAxYzFiNzg2NDNiNjgwNjRkYzI0NmVhZjRkOULdqeA=: 00:17:02.744 07:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 
80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:OGM4MmQ3NGU2MzQ2YmQ2NWNlY2YyY2MxM2RlOTRmNzE1OTZlYzAxYzFiNzg2NDNiNjgwNjRkYzI0NmVhZjRkOULdqeA=: 00:17:03.311 07:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:03.311 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:03.311 07:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:03.311 07:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.311 07:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.311 07:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.311 07:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:17:03.311 07:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:17:03.311 07:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:17:03.311 07:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:03.311 07:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:03.311 07:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:03.570 07:13:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:17:03.570 07:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:03.570 07:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:03.570 07:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:03.570 07:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:03.570 07:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:03.570 07:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:03.570 07:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.570 07:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.570 07:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.570 07:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:03.570 07:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:03.570 07:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:04.137 00:17:04.137 07:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:04.137 07:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:04.137 07:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:04.396 07:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:04.396 07:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:04.396 07:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.396 07:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.396 07:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.396 07:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:04.396 { 00:17:04.396 "cntlid": 145, 00:17:04.396 "qid": 0, 00:17:04.396 "state": "enabled", 00:17:04.396 "thread": "nvmf_tgt_poll_group_000", 00:17:04.396 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:04.396 "listen_address": { 00:17:04.396 "trtype": "TCP", 00:17:04.396 "adrfam": "IPv4", 00:17:04.396 "traddr": "10.0.0.2", 00:17:04.396 "trsvcid": "4420" 00:17:04.396 }, 00:17:04.396 "peer_address": { 00:17:04.396 "trtype": "TCP", 00:17:04.396 "adrfam": "IPv4", 00:17:04.396 "traddr": "10.0.0.1", 00:17:04.396 "trsvcid": "37314" 00:17:04.396 }, 00:17:04.396 "auth": { 00:17:04.396 "state": 
"completed", 00:17:04.396 "digest": "sha512", 00:17:04.396 "dhgroup": "ffdhe8192" 00:17:04.396 } 00:17:04.396 } 00:17:04.396 ]' 00:17:04.396 07:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:04.396 07:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:04.396 07:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:04.396 07:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:04.396 07:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:04.396 07:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:04.396 07:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:04.396 07:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:04.655 07:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YzBlYzBkNmMzN2RlYTJkNDgzMjUyYzU1ZDA0NDFjZTY5MjhlNmU4N2VhZmRhYTZjlv9xVQ==: --dhchap-ctrl-secret DHHC-1:03:ODkzZTg5ODQ0MzllMDlkOWZhNmM2ZDgwNjlkYWVmNGQ5ZjIzZWMxYmZiYmI0YmIxYjI5NGRjOWQyNjQzMDVlMmvFeQQ=: 00:17:04.655 07:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YzBlYzBkNmMzN2RlYTJkNDgzMjUyYzU1ZDA0NDFjZTY5MjhlNmU4N2VhZmRhYTZjlv9xVQ==: --dhchap-ctrl-secret 
DHHC-1:03:ODkzZTg5ODQ0MzllMDlkOWZhNmM2ZDgwNjlkYWVmNGQ5ZjIzZWMxYmZiYmI0YmIxYjI5NGRjOWQyNjQzMDVlMmvFeQQ=: 00:17:05.220 07:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:05.221 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:05.221 07:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:05.221 07:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.221 07:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.221 07:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.221 07:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 00:17:05.221 07:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.221 07:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.221 07:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.221 07:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:17:05.221 07:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:17:05.221 07:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:17:05.221 07:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local 
arg=bdev_connect 00:17:05.221 07:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:05.221 07:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:17:05.221 07:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:05.221 07:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key2 00:17:05.221 07:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:17:05.221 07:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:17:05.786 request: 00:17:05.786 { 00:17:05.786 "name": "nvme0", 00:17:05.786 "trtype": "tcp", 00:17:05.786 "traddr": "10.0.0.2", 00:17:05.786 "adrfam": "ipv4", 00:17:05.786 "trsvcid": "4420", 00:17:05.786 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:05.786 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:05.786 "prchk_reftag": false, 00:17:05.786 "prchk_guard": false, 00:17:05.786 "hdgst": false, 00:17:05.786 "ddgst": false, 00:17:05.786 "dhchap_key": "key2", 00:17:05.786 "allow_unrecognized_csi": false, 00:17:05.786 "method": "bdev_nvme_attach_controller", 00:17:05.786 "req_id": 1 00:17:05.786 } 00:17:05.786 Got JSON-RPC error response 00:17:05.786 response: 00:17:05.786 { 00:17:05.786 "code": -5, 00:17:05.786 "message": 
"Input/output error" 00:17:05.786 } 00:17:05.786 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:17:05.786 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:05.786 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:05.786 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:05.786 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:05.786 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.786 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.786 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.786 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:05.786 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.786 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.786 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.786 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:05.786 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:17:05.786 07:13:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:05.786 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:17:05.786 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:05.786 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:17:05.786 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:05.787 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:05.787 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:05.787 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:06.068 request: 00:17:06.068 { 00:17:06.068 "name": "nvme0", 00:17:06.068 "trtype": "tcp", 00:17:06.068 "traddr": "10.0.0.2", 00:17:06.068 "adrfam": "ipv4", 00:17:06.068 "trsvcid": "4420", 00:17:06.068 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:06.069 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:06.069 "prchk_reftag": false, 00:17:06.069 "prchk_guard": false, 00:17:06.069 "hdgst": 
false, 00:17:06.069 "ddgst": false, 00:17:06.069 "dhchap_key": "key1", 00:17:06.069 "dhchap_ctrlr_key": "ckey2", 00:17:06.069 "allow_unrecognized_csi": false, 00:17:06.069 "method": "bdev_nvme_attach_controller", 00:17:06.069 "req_id": 1 00:17:06.069 } 00:17:06.069 Got JSON-RPC error response 00:17:06.069 response: 00:17:06.069 { 00:17:06.069 "code": -5, 00:17:06.069 "message": "Input/output error" 00:17:06.069 } 00:17:06.326 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:17:06.326 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:06.326 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:06.326 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:06.326 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:06.326 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.326 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.326 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.326 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 00:17:06.326 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.326 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.326 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.326 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:06.326 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:17:06.326 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:06.326 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:17:06.326 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:06.326 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:17:06.326 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:06.326 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:06.326 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:06.326 07:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:06.584 request: 00:17:06.584 { 00:17:06.584 "name": "nvme0", 00:17:06.584 "trtype": 
"tcp", 00:17:06.584 "traddr": "10.0.0.2", 00:17:06.584 "adrfam": "ipv4", 00:17:06.584 "trsvcid": "4420", 00:17:06.584 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:06.584 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:06.584 "prchk_reftag": false, 00:17:06.584 "prchk_guard": false, 00:17:06.584 "hdgst": false, 00:17:06.584 "ddgst": false, 00:17:06.584 "dhchap_key": "key1", 00:17:06.584 "dhchap_ctrlr_key": "ckey1", 00:17:06.584 "allow_unrecognized_csi": false, 00:17:06.584 "method": "bdev_nvme_attach_controller", 00:17:06.584 "req_id": 1 00:17:06.584 } 00:17:06.584 Got JSON-RPC error response 00:17:06.584 response: 00:17:06.584 { 00:17:06.584 "code": -5, 00:17:06.584 "message": "Input/output error" 00:17:06.584 } 00:17:06.584 07:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:17:06.584 07:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:06.584 07:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:06.584 07:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:06.584 07:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:06.584 07:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.584 07:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.584 07:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.584 07:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 1170392 00:17:06.584 07:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@952 -- # '[' -z 1170392 ']' 00:17:06.584 07:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # kill -0 1170392 00:17:06.584 07:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # uname 00:17:06.584 07:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:06.584 07:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1170392 00:17:06.843 07:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:17:06.843 07:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:17:06.843 07:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1170392' 00:17:06.843 killing process with pid 1170392 00:17:06.843 07:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@971 -- # kill 1170392 00:17:06.843 07:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@976 -- # wait 1170392 00:17:06.843 07:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:17:06.843 07:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:06.843 07:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:06.843 07:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.843 07:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=1192696 00:17:06.843 07:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:17:06.843 07:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 1192696 00:17:06.843 07:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 1192696 ']' 00:17:06.843 07:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:06.843 07:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:06.843 07:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:06.843 07:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:06.843 07:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.100 07:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:07.100 07:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:17:07.100 07:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:07.100 07:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:07.100 07:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.100 07:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:07.100 07:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:17:07.100 07:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@163 -- # waitforlisten 1192696 00:17:07.100 07:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 1192696 ']' 00:17:07.100 07:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:07.100 07:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:07.100 07:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:07.100 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:07.100 07:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:07.100 07:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.358 07:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:07.358 07:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:17:07.358 07:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:17:07.358 07:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.358 07:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.358 null0 00:17:07.616 07:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.616 07:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:17:07.616 07:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.qV7 00:17:07.616 07:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.616 07:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.616 07:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.616 07:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.8a4 ]] 00:17:07.617 07:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.8a4 00:17:07.617 07:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.617 07:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.617 07:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.617 07:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:17:07.617 07:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.79H 00:17:07.617 07:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.617 07:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.617 07:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.617 07:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.XYJ ]] 00:17:07.617 07:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.XYJ 00:17:07.617 07:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.617 07:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:17:07.617 07:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.617 07:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:17:07.617 07:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.owB 00:17:07.617 07:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.617 07:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.617 07:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.617 07:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.G6P ]] 00:17:07.617 07:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.G6P 00:17:07.617 07:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.617 07:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.617 07:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.617 07:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:17:07.617 07:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.HlU 00:17:07.617 07:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.617 07:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.617 07:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:17:07.617 07:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:17:07.617 07:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:17:07.617 07:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:07.617 07:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:07.617 07:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:07.617 07:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:07.617 07:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:07.617 07:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:07.617 07:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.617 07:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.617 07:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.617 07:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:07.617 07:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:07.617 07:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:08.552 nvme0n1 00:17:08.552 07:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:08.552 07:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:08.552 07:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:08.552 07:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:08.552 07:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:08.552 07:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.552 07:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.552 07:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.552 07:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:08.552 { 00:17:08.552 "cntlid": 1, 00:17:08.552 "qid": 0, 00:17:08.552 "state": "enabled", 00:17:08.552 "thread": "nvmf_tgt_poll_group_000", 00:17:08.552 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:08.552 "listen_address": { 00:17:08.552 "trtype": "TCP", 00:17:08.552 "adrfam": "IPv4", 00:17:08.552 "traddr": "10.0.0.2", 00:17:08.552 "trsvcid": "4420" 00:17:08.552 }, 00:17:08.552 "peer_address": { 00:17:08.552 "trtype": "TCP", 00:17:08.552 "adrfam": "IPv4", 00:17:08.552 "traddr": 
"10.0.0.1", 00:17:08.552 "trsvcid": "37368" 00:17:08.552 }, 00:17:08.552 "auth": { 00:17:08.552 "state": "completed", 00:17:08.552 "digest": "sha512", 00:17:08.552 "dhgroup": "ffdhe8192" 00:17:08.552 } 00:17:08.552 } 00:17:08.552 ]' 00:17:08.552 07:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:08.552 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:08.552 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:08.552 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:08.552 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:08.552 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:08.552 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:08.552 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:08.810 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OGM4MmQ3NGU2MzQ2YmQ2NWNlY2YyY2MxM2RlOTRmNzE1OTZlYzAxYzFiNzg2NDNiNjgwNjRkYzI0NmVhZjRkOULdqeA=: 00:17:08.810 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:OGM4MmQ3NGU2MzQ2YmQ2NWNlY2YyY2MxM2RlOTRmNzE1OTZlYzAxYzFiNzg2NDNiNjgwNjRkYzI0NmVhZjRkOULdqeA=: 00:17:09.377 07:13:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:09.377 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:09.377 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:09.377 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.377 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.377 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.377 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:09.377 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.377 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.377 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.377 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:17:09.377 07:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:17:09.637 07:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:17:09.637 07:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:17:09.637 07:13:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:17:09.637 07:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:17:09.637 07:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:09.637 07:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:17:09.637 07:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:09.637 07:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:09.637 07:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:09.637 07:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:09.897 request: 00:17:09.897 { 00:17:09.897 "name": "nvme0", 00:17:09.897 "trtype": "tcp", 00:17:09.897 "traddr": "10.0.0.2", 00:17:09.897 "adrfam": "ipv4", 00:17:09.897 "trsvcid": "4420", 00:17:09.897 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:09.897 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:09.897 "prchk_reftag": false, 00:17:09.897 "prchk_guard": false, 00:17:09.897 "hdgst": false, 00:17:09.897 "ddgst": false, 00:17:09.897 "dhchap_key": "key3", 00:17:09.897 
"allow_unrecognized_csi": false, 00:17:09.897 "method": "bdev_nvme_attach_controller", 00:17:09.897 "req_id": 1 00:17:09.897 } 00:17:09.897 Got JSON-RPC error response 00:17:09.897 response: 00:17:09.897 { 00:17:09.897 "code": -5, 00:17:09.897 "message": "Input/output error" 00:17:09.897 } 00:17:09.897 07:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:17:09.897 07:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:09.897 07:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:09.897 07:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:09.897 07:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:17:09.897 07:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:17:09.897 07:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:17:09.897 07:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:17:10.157 07:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:17:10.157 07:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:17:10.157 07:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:17:10.157 07:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:17:10.157 07:13:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:10.157 07:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:17:10.157 07:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:10.157 07:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:10.157 07:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:10.157 07:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:10.157 request: 00:17:10.157 { 00:17:10.157 "name": "nvme0", 00:17:10.157 "trtype": "tcp", 00:17:10.157 "traddr": "10.0.0.2", 00:17:10.157 "adrfam": "ipv4", 00:17:10.157 "trsvcid": "4420", 00:17:10.157 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:10.157 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:10.157 "prchk_reftag": false, 00:17:10.157 "prchk_guard": false, 00:17:10.157 "hdgst": false, 00:17:10.157 "ddgst": false, 00:17:10.157 "dhchap_key": "key3", 00:17:10.157 "allow_unrecognized_csi": false, 00:17:10.157 "method": "bdev_nvme_attach_controller", 00:17:10.157 "req_id": 1 00:17:10.157 } 00:17:10.157 Got JSON-RPC error response 00:17:10.157 response: 00:17:10.157 { 00:17:10.157 "code": -5, 00:17:10.157 "message": "Input/output error" 00:17:10.157 } 00:17:10.416 
07:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:17:10.416 07:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:10.416 07:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:10.416 07:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:10.416 07:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:17:10.416 07:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:17:10.416 07:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:17:10.416 07:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:10.416 07:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:10.416 07:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:10.416 07:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:10.416 07:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.416 07:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.416 07:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.416 07:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:10.416 07:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.416 07:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.416 07:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.416 07:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:10.416 07:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:17:10.416 07:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:10.416 07:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:17:10.416 07:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:10.416 07:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:17:10.416 07:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:10.416 07:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:10.416 07:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:10.416 07:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:10.984 request: 00:17:10.984 { 00:17:10.984 "name": "nvme0", 00:17:10.984 "trtype": "tcp", 00:17:10.984 "traddr": "10.0.0.2", 00:17:10.984 "adrfam": "ipv4", 00:17:10.984 "trsvcid": "4420", 00:17:10.984 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:10.984 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:10.984 "prchk_reftag": false, 00:17:10.984 "prchk_guard": false, 00:17:10.984 "hdgst": false, 00:17:10.984 "ddgst": false, 00:17:10.984 "dhchap_key": "key0", 00:17:10.984 "dhchap_ctrlr_key": "key1", 00:17:10.984 "allow_unrecognized_csi": false, 00:17:10.984 "method": "bdev_nvme_attach_controller", 00:17:10.984 "req_id": 1 00:17:10.984 } 00:17:10.984 Got JSON-RPC error response 00:17:10.984 response: 00:17:10.984 { 00:17:10.984 "code": -5, 00:17:10.984 "message": "Input/output error" 00:17:10.984 } 00:17:10.984 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:17:10.984 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:10.984 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:10.984 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:10.984 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:17:10.984 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:17:10.984 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:17:10.984 nvme0n1 00:17:11.243 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:17:11.243 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:11.243 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:17:11.243 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:11.243 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:11.243 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:11.501 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 00:17:11.501 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.501 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:17:11.501 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.501 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:17:11.501 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:17:11.501 07:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:17:12.437 nvme0n1 00:17:12.437 07:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:17:12.437 07:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:17:12.437 07:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:12.437 07:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:12.437 07:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:12.437 07:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.437 07:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.437 
07:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.437 07:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:17:12.437 07:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:17:12.437 07:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:12.695 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:12.695 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:YTU2NGU3ZmU2MzEwYzMzMjBiZTFkMTQ3YWE4NGRjMDFmYjI0MmQxNGIzNzY1NDJl8zg9sg==: --dhchap-ctrl-secret DHHC-1:03:OGM4MmQ3NGU2MzQ2YmQ2NWNlY2YyY2MxM2RlOTRmNzE1OTZlYzAxYzFiNzg2NDNiNjgwNjRkYzI0NmVhZjRkOULdqeA=: 00:17:12.695 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YTU2NGU3ZmU2MzEwYzMzMjBiZTFkMTQ3YWE4NGRjMDFmYjI0MmQxNGIzNzY1NDJl8zg9sg==: --dhchap-ctrl-secret DHHC-1:03:OGM4MmQ3NGU2MzQ2YmQ2NWNlY2YyY2MxM2RlOTRmNzE1OTZlYzAxYzFiNzg2NDNiNjgwNjRkYzI0NmVhZjRkOULdqeA=: 00:17:13.262 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:17:13.262 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:17:13.262 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:17:13.262 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == 
\n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:17:13.262 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:17:13.262 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:17:13.262 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:17:13.262 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:13.262 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:13.521 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 00:17:13.521 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:17:13.521 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:17:13.521 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:17:13.521 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:13.521 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:17:13.521 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:13.521 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 00:17:13.521 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:17:13.521 07:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:17:13.780 request: 00:17:13.780 { 00:17:13.780 "name": "nvme0", 00:17:13.780 "trtype": "tcp", 00:17:13.780 "traddr": "10.0.0.2", 00:17:13.780 "adrfam": "ipv4", 00:17:13.780 "trsvcid": "4420", 00:17:13.780 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:13.780 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:13.780 "prchk_reftag": false, 00:17:13.780 "prchk_guard": false, 00:17:13.780 "hdgst": false, 00:17:13.780 "ddgst": false, 00:17:13.780 "dhchap_key": "key1", 00:17:13.780 "allow_unrecognized_csi": false, 00:17:13.780 "method": "bdev_nvme_attach_controller", 00:17:13.780 "req_id": 1 00:17:13.780 } 00:17:13.780 Got JSON-RPC error response 00:17:13.780 response: 00:17:13.780 { 00:17:13.780 "code": -5, 00:17:13.780 "message": "Input/output error" 00:17:13.780 } 00:17:14.042 07:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:17:14.042 07:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:14.042 07:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:14.042 07:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:14.042 07:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:14.042 07:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:14.042 07:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:14.610 nvme0n1 00:17:14.610 07:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:17:14.610 07:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:17:14.610 07:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:14.868 07:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:14.868 07:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:14.868 07:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:15.127 07:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:15.127 07:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.127 07:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:17:15.127 07:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.127 07:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:17:15.127 07:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:17:15.127 07:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:17:15.386 nvme0n1 00:17:15.386 07:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:17:15.386 07:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:17:15.386 07:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:15.644 07:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:15.644 07:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:15.644 07:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:15.903 07:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:15.903 07:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.903 07:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.903 07:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.903 07:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:MDBkMWVlYWU0ZjI0YTQ1MjFiMzdlZTIwNDBjYWQzYznoBVG8: '' 2s 00:17:15.903 07:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:17:15.903 07:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:17:15.903 07:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:MDBkMWVlYWU0ZjI0YTQ1MjFiMzdlZTIwNDBjYWQzYznoBVG8: 00:17:15.903 07:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:17:15.903 07:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:17:15.903 07:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:17:15.903 07:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:MDBkMWVlYWU0ZjI0YTQ1MjFiMzdlZTIwNDBjYWQzYznoBVG8: ]] 00:17:15.903 07:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:MDBkMWVlYWU0ZjI0YTQ1MjFiMzdlZTIwNDBjYWQzYznoBVG8: 00:17:15.903 07:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:17:15.903 07:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:17:15.903 07:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:17:17.807 
07:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:17:17.807 07:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1237 -- # local i=0 00:17:17.807 07:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:17:17.807 07:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n1 00:17:17.807 07:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:17:17.807 07:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n1 00:17:17.807 07:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1248 -- # return 0 00:17:17.807 07:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key key2 00:17:17.807 07:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.807 07:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.807 07:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.807 07:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:YTU2NGU3ZmU2MzEwYzMzMjBiZTFkMTQ3YWE4NGRjMDFmYjI0MmQxNGIzNzY1NDJl8zg9sg==: 2s 00:17:17.807 07:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:17:17.807 07:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:17:17.807 07:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:17:17.807 07:13:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:YTU2NGU3ZmU2MzEwYzMzMjBiZTFkMTQ3YWE4NGRjMDFmYjI0MmQxNGIzNzY1NDJl8zg9sg==: 00:17:17.807 07:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:17:17.807 07:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:17:17.807 07:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:17:17.807 07:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:YTU2NGU3ZmU2MzEwYzMzMjBiZTFkMTQ3YWE4NGRjMDFmYjI0MmQxNGIzNzY1NDJl8zg9sg==: ]] 00:17:17.807 07:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:YTU2NGU3ZmU2MzEwYzMzMjBiZTFkMTQ3YWE4NGRjMDFmYjI0MmQxNGIzNzY1NDJl8zg9sg==: 00:17:17.807 07:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:17:17.807 07:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:17:20.340 07:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:17:20.340 07:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1237 -- # local i=0 00:17:20.340 07:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:17:20.340 07:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n1 00:17:20.340 07:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:17:20.340 07:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n1 00:17:20.340 07:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1248 -- # return 0 00:17:20.340 07:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 
-- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:20.340 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:20.340 07:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:20.340 07:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.340 07:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.340 07:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.340 07:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:20.340 07:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:20.340 07:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:20.599 nvme0n1 00:17:20.599 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 
--dhchap-key key2 --dhchap-ctrlr-key key3 00:17:20.599 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.599 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.599 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.599 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:20.599 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:21.167 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:17:21.167 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:17:21.167 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:21.426 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:21.426 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:21.426 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.426 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.426 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.427 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:17:21.427 07:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:17:21.685 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:17:21.686 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:17:21.686 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:21.686 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:21.686 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:21.686 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.686 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.686 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.686 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:21.686 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:17:21.686 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:21.686 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@638 -- # local arg=hostrpc 00:17:21.686 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:21.686 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:17:21.686 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:21.686 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:21.686 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:22.254 request: 00:17:22.254 { 00:17:22.254 "name": "nvme0", 00:17:22.254 "dhchap_key": "key1", 00:17:22.254 "dhchap_ctrlr_key": "key3", 00:17:22.254 "method": "bdev_nvme_set_keys", 00:17:22.254 "req_id": 1 00:17:22.254 } 00:17:22.254 Got JSON-RPC error response 00:17:22.254 response: 00:17:22.254 { 00:17:22.254 "code": -13, 00:17:22.254 "message": "Permission denied" 00:17:22.254 } 00:17:22.254 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:17:22.254 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:22.254 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:22.254 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:22.254 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:17:22.254 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:22.254 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:17:22.513 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:17:22.513 07:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:17:23.450 07:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:17:23.450 07:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:17:23.450 07:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:23.709 07:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:17:23.710 07:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:23.710 07:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.710 07:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.710 07:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.710 07:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:23.710 07:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:23.710 07:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:24.277 nvme0n1 00:17:24.277 07:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:24.277 07:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.277 07:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.277 07:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.277 07:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:17:24.277 07:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:17:24.277 07:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:17:24.277 07:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:17:24.536 07:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t 
"$arg")" in 00:17:24.536 07:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:17:24.536 07:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:24.536 07:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:17:24.536 07:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:17:24.794 request: 00:17:24.795 { 00:17:24.795 "name": "nvme0", 00:17:24.795 "dhchap_key": "key2", 00:17:24.795 "dhchap_ctrlr_key": "key0", 00:17:24.795 "method": "bdev_nvme_set_keys", 00:17:24.795 "req_id": 1 00:17:24.795 } 00:17:24.795 Got JSON-RPC error response 00:17:24.795 response: 00:17:24.795 { 00:17:24.795 "code": -13, 00:17:24.795 "message": "Permission denied" 00:17:24.795 } 00:17:24.795 07:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:17:24.795 07:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:24.795 07:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:24.795 07:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:24.795 07:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:17:24.795 07:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:17:24.795 07:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:25.053 
07:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:17:25.053 07:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:17:25.989 07:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:17:25.989 07:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:17:25.989 07:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:26.248 07:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:17:26.248 07:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:17:26.248 07:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:17:26.248 07:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 1170458 00:17:26.248 07:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@952 -- # '[' -z 1170458 ']' 00:17:26.248 07:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # kill -0 1170458 00:17:26.248 07:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # uname 00:17:26.248 07:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:26.248 07:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1170458 00:17:26.248 07:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:17:26.248 07:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:17:26.248 07:13:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1170458' 00:17:26.248 killing process with pid 1170458 00:17:26.248 07:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@971 -- # kill 1170458 00:17:26.248 07:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@976 -- # wait 1170458 00:17:26.817 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:17:26.817 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:26.817 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:17:26.817 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:26.817 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:17:26.817 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:26.817 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:26.817 rmmod nvme_tcp 00:17:26.817 rmmod nvme_fabrics 00:17:26.817 rmmod nvme_keyring 00:17:26.817 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:26.817 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:17:26.817 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:17:26.817 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 1192696 ']' 00:17:26.817 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 1192696 00:17:26.817 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@952 -- # '[' -z 1192696 ']' 00:17:26.817 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@956 -- # kill -0 1192696 00:17:26.817 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # uname 00:17:26.817 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:26.817 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1192696 00:17:26.817 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:17:26.817 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:17:26.817 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1192696' 00:17:26.817 killing process with pid 1192696 00:17:26.817 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@971 -- # kill 1192696 00:17:26.817 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@976 -- # wait 1192696 00:17:26.817 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:26.817 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:26.817 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:26.817 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:17:26.817 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 00:17:26.817 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:26.817 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:17:26.817 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:26.817 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:26.817 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:26.817 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:26.817 07:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:29.355 07:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:29.355 07:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.qV7 /tmp/spdk.key-sha256.79H /tmp/spdk.key-sha384.owB /tmp/spdk.key-sha512.HlU /tmp/spdk.key-sha512.8a4 /tmp/spdk.key-sha384.XYJ /tmp/spdk.key-sha256.G6P '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:17:29.355 00:17:29.355 real 2m34.005s 00:17:29.355 user 5m55.351s 00:17:29.355 sys 0m24.347s 00:17:29.355 07:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1128 -- # xtrace_disable 00:17:29.355 07:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.355 ************************************ 00:17:29.355 END TEST nvmf_auth_target 00:17:29.355 ************************************ 00:17:29.355 07:13:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:17:29.355 07:13:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:17:29.355 07:13:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:17:29.355 07:13:33 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:17:29.355 07:13:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:29.355 ************************************ 00:17:29.355 START TEST nvmf_bdevio_no_huge 00:17:29.355 ************************************ 00:17:29.355 07:13:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:17:29.355 * Looking for test storage... 00:17:29.355 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:29.355 07:13:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:17:29.355 07:13:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1691 -- # lcov --version 00:17:29.355 07:13:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:17:29.355 07:13:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:17:29.355 07:13:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:29.355 07:13:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:29.355 07:13:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:29.355 07:13:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:17:29.355 07:13:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:17:29.355 07:13:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:17:29.355 07:13:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:17:29.355 07:13:33 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:17:29.355 07:13:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:17:29.355 07:13:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:17:29.355 07:13:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:29.355 07:13:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:17:29.355 07:13:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:17:29.355 07:13:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:29.355 07:13:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:29.356 07:13:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:17:29.356 07:13:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:17:29.356 07:13:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:29.356 07:13:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:17:29.356 07:13:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:17:29.356 07:13:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:17:29.356 07:13:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:17:29.356 07:13:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:29.356 07:13:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:17:29.356 07:13:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # 
ver2[v]=2 00:17:29.356 07:13:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:29.356 07:13:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:29.356 07:13:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:17:29.356 07:13:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:29.356 07:13:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:17:29.356 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:29.356 --rc genhtml_branch_coverage=1 00:17:29.356 --rc genhtml_function_coverage=1 00:17:29.356 --rc genhtml_legend=1 00:17:29.356 --rc geninfo_all_blocks=1 00:17:29.356 --rc geninfo_unexecuted_blocks=1 00:17:29.356 00:17:29.356 ' 00:17:29.356 07:13:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:17:29.356 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:29.356 --rc genhtml_branch_coverage=1 00:17:29.356 --rc genhtml_function_coverage=1 00:17:29.356 --rc genhtml_legend=1 00:17:29.356 --rc geninfo_all_blocks=1 00:17:29.356 --rc geninfo_unexecuted_blocks=1 00:17:29.356 00:17:29.356 ' 00:17:29.356 07:13:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:17:29.356 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:29.356 --rc genhtml_branch_coverage=1 00:17:29.356 --rc genhtml_function_coverage=1 00:17:29.356 --rc genhtml_legend=1 00:17:29.356 --rc geninfo_all_blocks=1 00:17:29.356 --rc geninfo_unexecuted_blocks=1 00:17:29.356 00:17:29.356 ' 00:17:29.356 07:13:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:17:29.356 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:29.356 --rc genhtml_branch_coverage=1 00:17:29.356 --rc genhtml_function_coverage=1 00:17:29.356 --rc genhtml_legend=1 00:17:29.356 --rc geninfo_all_blocks=1 00:17:29.356 --rc geninfo_unexecuted_blocks=1 00:17:29.356 00:17:29.356 ' 00:17:29.356 07:13:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:29.356 07:13:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:17:29.356 07:13:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:29.356 07:13:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:29.356 07:13:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:29.356 07:13:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:29.356 07:13:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:29.356 07:13:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:29.356 07:13:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:29.356 07:13:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:29.356 07:13:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:29.356 07:13:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:29.356 07:13:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:29.356 07:13:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:17:29.356 07:13:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:29.356 07:13:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:29.356 07:13:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:29.356 07:13:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:29.356 07:13:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:29.356 07:13:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:17:29.356 07:13:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:29.356 07:13:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:29.356 07:13:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:29.356 07:13:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:29.356 07:13:33 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:29.356 07:13:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:29.356 07:13:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:17:29.356 07:13:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:29.356 07:13:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:17:29.356 07:13:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:29.356 07:13:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:29.356 07:13:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:29.356 07:13:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:29.356 07:13:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:29.356 07:13:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:29.356 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:29.356 07:13:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:29.356 07:13:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:29.356 07:13:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:29.356 07:13:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 
00:17:29.356 07:13:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:29.356 07:13:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:17:29.356 07:13:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:29.356 07:13:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:29.356 07:13:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:29.356 07:13:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:29.356 07:13:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:29.356 07:13:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:29.356 07:13:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:29.356 07:13:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:29.356 07:13:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:29.356 07:13:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:29.356 07:13:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:17:29.356 07:13:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:34.850 07:13:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:34.850 07:13:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:17:34.850 07:13:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@315 -- # local -a pci_devs 00:17:34.850 07:13:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:34.850 07:13:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:34.850 07:13:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:34.850 07:13:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:34.850 07:13:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:17:34.850 07:13:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:34.850 07:13:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:17:34.850 07:13:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:17:34.850 07:13:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:17:34.850 07:13:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:17:34.850 07:13:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:17:34.850 07:13:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:17:34.850 07:13:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:34.850 07:13:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:34.850 07:13:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:34.850 07:13:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:34.850 07:13:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:34.850 07:13:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:34.850 07:13:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:34.850 07:13:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:34.850 07:13:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:34.850 07:13:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:34.850 07:13:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:34.850 07:13:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:34.850 07:13:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:34.850 07:13:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:34.850 07:13:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:34.850 07:13:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:34.850 07:13:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:34.850 07:13:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:34.850 07:13:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:34.850 07:13:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 
0x159b)' 00:17:34.850 Found 0000:86:00.0 (0x8086 - 0x159b) 00:17:34.850 07:13:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:34.850 07:13:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:34.850 07:13:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:34.850 07:13:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:34.850 07:13:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:34.850 07:13:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:34.850 07:13:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:17:34.850 Found 0000:86:00.1 (0x8086 - 0x159b) 00:17:34.850 07:13:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:34.850 07:13:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:34.850 07:13:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:34.850 07:13:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:34.850 07:13:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:34.850 07:13:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:34.850 07:13:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:34.850 07:13:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:34.850 07:13:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- 
# for pci in "${pci_devs[@]}" 00:17:34.850 07:13:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:34.850 07:13:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:34.850 07:13:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:34.850 07:13:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:34.850 07:13:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:34.850 07:13:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:34.850 07:13:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:17:34.850 Found net devices under 0000:86:00.0: cvl_0_0 00:17:34.850 07:13:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:34.850 07:13:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:34.850 07:13:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:34.850 07:13:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:34.850 07:13:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:34.850 07:13:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:34.850 07:13:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:34.850 07:13:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:34.850 
07:13:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:17:34.850 Found net devices under 0000:86:00.1: cvl_0_1 00:17:34.850 07:13:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:34.850 07:13:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:34.850 07:13:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # is_hw=yes 00:17:34.850 07:13:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:34.850 07:13:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:34.850 07:13:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:34.850 07:13:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:34.850 07:13:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:34.850 07:13:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:34.850 07:13:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:34.850 07:13:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:34.850 07:13:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:34.850 07:13:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:34.850 07:13:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:34.850 07:13:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
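The `nvmf_tcp_init` sequence traced here moves one port of the dual-port NIC into a private network namespace so target and initiator can exchange real TCP traffic on a single host: `cvl_0_0` becomes the target interface at 10.0.0.2 inside the namespace, `cvl_0_1` stays in the root namespace as the initiator at 10.0.0.1. A minimal sketch of that plumbing, using the names and addresses from this run (the `run`/`DRY_RUN` wrapper and the `setup_nvmf_netns` function are illustrative additions, not part of `nvmf/common.sh`; `DRY_RUN` defaults to 1 so the sketch only prints the commands, since the real ones need root):

```shell
#!/usr/bin/env bash
# Sketch of the netns topology built by nvmf_tcp_init in this run.
# cvl_0_0 (target side) is moved into its own namespace; cvl_0_1 stays
# in the root namespace as the initiator interface.
set -euo pipefail

NS=cvl_0_0_ns_spdk
TARGET_IF=cvl_0_0
TARGET_IP=10.0.0.2
INIT_IF=cvl_0_1
INIT_IP=10.0.0.1

# Illustrative dry-run wrapper: prints instead of executing unless DRY_RUN=0.
run() { if [ "${DRY_RUN:-1}" = 1 ]; then echo "$*"; else "$@"; fi; }

setup_nvmf_netns() {
  run ip -4 addr flush "$TARGET_IF"
  run ip -4 addr flush "$INIT_IF"
  run ip netns add "$NS"
  run ip link set "$TARGET_IF" netns "$NS"
  run ip addr add "$INIT_IP/24" dev "$INIT_IF"
  run ip netns exec "$NS" ip addr add "$TARGET_IP/24" dev "$TARGET_IF"
  run ip link set "$INIT_IF" up
  run ip netns exec "$NS" ip link set "$TARGET_IF" up
  run ip netns exec "$NS" ip link set lo up
  # open the NVMe/TCP port, then verify reachability in both directions
  run iptables -I INPUT 1 -i "$INIT_IF" -p tcp --dport 4420 -j ACCEPT
  run ping -c 1 "$TARGET_IP"
  run ip netns exec "$NS" ping -c 1 "$INIT_IP"
}

setup_nvmf_netns
```

Keeping the target behind `ip netns exec` is what lets a later `nvmf_tgt` run inside the namespace while the initiator tools talk to it over the physical link from the root namespace.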
00:17:34.850 07:13:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:34.850 07:13:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:34.850 07:13:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:34.850 07:13:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:34.850 07:13:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:34.851 07:13:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:35.111 07:13:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:35.111 07:13:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:35.111 07:13:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:35.111 07:13:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:35.111 07:13:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:35.111 07:13:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:35.111 07:13:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:35.111 07:13:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 
1 10.0.0.2 00:17:35.111 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:35.111 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.352 ms 00:17:35.111 00:17:35.111 --- 10.0.0.2 ping statistics --- 00:17:35.111 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:35.111 rtt min/avg/max/mdev = 0.352/0.352/0.352/0.000 ms 00:17:35.111 07:13:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:35.111 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:35.111 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.215 ms 00:17:35.111 00:17:35.111 --- 10.0.0.1 ping statistics --- 00:17:35.111 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:35.111 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:17:35.111 07:13:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:35.111 07:13:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # return 0 00:17:35.111 07:13:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:35.111 07:13:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:35.111 07:13:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:35.111 07:13:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:35.111 07:13:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:35.111 07:13:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:35.111 07:13:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:35.111 07:13:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart 
-m 0x78 00:17:35.111 07:13:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:35.111 07:13:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:35.111 07:13:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:35.111 07:13:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=1199573 00:17:35.111 07:13:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 1199573 00:17:35.111 07:13:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:17:35.111 07:13:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@833 -- # '[' -z 1199573 ']' 00:17:35.111 07:13:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:35.111 07:13:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:35.111 07:13:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:35.111 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:35.111 07:13:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:35.111 07:13:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:35.370 [2024-11-20 07:13:39.690752] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 
00:17:35.370 [2024-11-20 07:13:39.690802] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:17:35.370 [2024-11-20 07:13:39.779018] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:35.370 [2024-11-20 07:13:39.824281] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:35.370 [2024-11-20 07:13:39.824315] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:35.370 [2024-11-20 07:13:39.824322] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:35.370 [2024-11-20 07:13:39.824328] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:35.370 [2024-11-20 07:13:39.824333] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:35.370 [2024-11-20 07:13:39.825500] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:17:35.370 [2024-11-20 07:13:39.825607] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:17:35.370 [2024-11-20 07:13:39.825712] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:35.370 [2024-11-20 07:13:39.825713] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:17:36.305 07:13:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:36.305 07:13:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@866 -- # return 0 00:17:36.305 07:13:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:36.305 07:13:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:36.305 07:13:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:36.305 07:13:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:36.305 07:13:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:36.305 07:13:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.305 07:13:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:36.305 [2024-11-20 07:13:40.584503] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:36.305 07:13:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.305 07:13:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:36.305 07:13:40 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.305 07:13:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:36.305 Malloc0 00:17:36.305 07:13:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.305 07:13:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:36.305 07:13:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.305 07:13:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:36.305 07:13:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.305 07:13:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:36.305 07:13:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.305 07:13:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:36.305 07:13:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.305 07:13:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:36.305 07:13:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.305 07:13:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:36.305 [2024-11-20 07:13:40.628808] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:36.305 07:13:40 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.305 07:13:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:17:36.305 07:13:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:17:36.305 07:13:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:17:36.305 07:13:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:17:36.305 07:13:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:17:36.305 07:13:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:17:36.305 { 00:17:36.305 "params": { 00:17:36.305 "name": "Nvme$subsystem", 00:17:36.305 "trtype": "$TEST_TRANSPORT", 00:17:36.305 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:36.305 "adrfam": "ipv4", 00:17:36.305 "trsvcid": "$NVMF_PORT", 00:17:36.305 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:36.305 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:36.305 "hdgst": ${hdgst:-false}, 00:17:36.305 "ddgst": ${ddgst:-false} 00:17:36.305 }, 00:17:36.305 "method": "bdev_nvme_attach_controller" 00:17:36.305 } 00:17:36.305 EOF 00:17:36.305 )") 00:17:36.305 07:13:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:17:36.305 07:13:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 
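The `rpc_cmd` calls traced above configure the target before bdevio attaches: create the TCP transport, back it with a 64 MiB / 512-byte-block Malloc bdev, create subsystem `cnode1`, add the namespace, and listen on 10.0.0.2:4420. Replayed through SPDK's `scripts/rpc.py` client the sequence looks like the sketch below (`RPC_CMD` defaulting to `echo` is an assumption so the sketch runs without a live target; against a real one set it to something like `ip netns exec cvl_0_0_ns_spdk scripts/rpc.py`):

```shell
#!/usr/bin/env bash
# The RPC sequence from the trace, expressed as rpc.py invocations.
# RPC_CMD is a stand-in: by default it only echoes the calls.
set -euo pipefail
RPC_CMD=${RPC_CMD:-echo rpc.py}

$RPC_CMD nvmf_create_transport -t tcp -o -u 8192     # flags exactly as traced above
$RPC_CMD bdev_malloc_create 64 512 -b Malloc0        # 64 MiB RAM bdev, 512-byte blocks
$RPC_CMD nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC_CMD nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC_CMD nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
```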
00:17:36.305 07:13:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:17:36.305 07:13:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:17:36.305 "params": { 00:17:36.305 "name": "Nvme1", 00:17:36.305 "trtype": "tcp", 00:17:36.305 "traddr": "10.0.0.2", 00:17:36.305 "adrfam": "ipv4", 00:17:36.305 "trsvcid": "4420", 00:17:36.305 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:36.305 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:36.305 "hdgst": false, 00:17:36.305 "ddgst": false 00:17:36.305 }, 00:17:36.305 "method": "bdev_nvme_attach_controller" 00:17:36.305 }' 00:17:36.305 [2024-11-20 07:13:40.680932] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 00:17:36.305 [2024-11-20 07:13:40.680983] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid1199825 ] 00:17:36.305 [2024-11-20 07:13:40.760918] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:36.305 [2024-11-20 07:13:40.809973] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:36.305 [2024-11-20 07:13:40.810033] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:36.305 [2024-11-20 07:13:40.810034] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:36.564 I/O targets: 00:17:36.564 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:17:36.564 00:17:36.564 00:17:36.564 CUnit - A unit testing framework for C - Version 2.1-3 00:17:36.564 http://cunit.sourceforge.net/ 00:17:36.564 00:17:36.564 00:17:36.564 Suite: bdevio tests on: Nvme1n1 00:17:36.564 Test: blockdev write read block ...passed 00:17:36.564 Test: blockdev write zeroes read block ...passed 00:17:36.821 Test: blockdev write zeroes read no split ...passed 00:17:36.821 Test: blockdev write zeroes 
read split ...passed 00:17:36.821 Test: blockdev write zeroes read split partial ...passed 00:17:36.821 Test: blockdev reset ...[2024-11-20 07:13:41.184673] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:17:36.821 [2024-11-20 07:13:41.184737] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2301920 (9): Bad file descriptor 00:17:36.821 [2024-11-20 07:13:41.196841] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:17:36.821 passed 00:17:36.821 Test: blockdev write read 8 blocks ...passed 00:17:36.821 Test: blockdev write read size > 128k ...passed 00:17:36.821 Test: blockdev write read invalid size ...passed 00:17:36.821 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:36.821 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:36.821 Test: blockdev write read max offset ...passed 00:17:36.821 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:36.822 Test: blockdev writev readv 8 blocks ...passed 00:17:37.081 Test: blockdev writev readv 30 x 1block ...passed 00:17:37.081 Test: blockdev writev readv block ...passed 00:17:37.081 Test: blockdev writev readv size > 128k ...passed 00:17:37.081 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:37.081 Test: blockdev comparev and writev ...[2024-11-20 07:13:41.449889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:37.081 [2024-11-20 07:13:41.449920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:37.081 [2024-11-20 07:13:41.449934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:37.081 [2024-11-20 
07:13:41.449941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:37.081 [2024-11-20 07:13:41.450177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:37.081 [2024-11-20 07:13:41.450189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:37.081 [2024-11-20 07:13:41.450200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:37.081 [2024-11-20 07:13:41.450208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:37.081 [2024-11-20 07:13:41.450445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:37.081 [2024-11-20 07:13:41.450457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:37.081 [2024-11-20 07:13:41.450469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:37.081 [2024-11-20 07:13:41.450477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:37.081 [2024-11-20 07:13:41.450703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:37.081 [2024-11-20 07:13:41.450714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:37.081 [2024-11-20 07:13:41.450726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x200 00:17:37.081 [2024-11-20 07:13:41.450732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:37.081 passed 00:17:37.081 Test: blockdev nvme passthru rw ...passed 00:17:37.081 Test: blockdev nvme passthru vendor specific ...[2024-11-20 07:13:41.532368] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:37.081 [2024-11-20 07:13:41.532385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:37.081 [2024-11-20 07:13:41.532492] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:37.081 [2024-11-20 07:13:41.532503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:37.081 [2024-11-20 07:13:41.532600] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:37.082 [2024-11-20 07:13:41.532610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:37.082 [2024-11-20 07:13:41.532713] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:37.082 [2024-11-20 07:13:41.532727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:37.082 passed 00:17:37.082 Test: blockdev nvme admin passthru ...passed 00:17:37.082 Test: blockdev copy ...passed 00:17:37.082 00:17:37.082 Run Summary: Type Total Ran Passed Failed Inactive 00:17:37.082 suites 1 1 n/a 0 0 00:17:37.082 tests 23 23 23 0 0 00:17:37.082 asserts 152 152 152 0 n/a 00:17:37.082 00:17:37.082 Elapsed time = 1.145 seconds 
00:17:37.340 07:13:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:37.340 07:13:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.340 07:13:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:37.340 07:13:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.340 07:13:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:17:37.340 07:13:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:17:37.340 07:13:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:37.340 07:13:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:17:37.340 07:13:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:37.340 07:13:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:17:37.340 07:13:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:37.340 07:13:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:37.340 rmmod nvme_tcp 00:17:37.340 rmmod nvme_fabrics 00:17:37.340 rmmod nvme_keyring 00:17:37.599 07:13:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:37.599 07:13:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@128 -- # set -e 00:17:37.599 07:13:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:17:37.599 07:13:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 1199573 ']' 00:17:37.599 07:13:41 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 1199573 00:17:37.599 07:13:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # '[' -z 1199573 ']' 00:17:37.599 07:13:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # kill -0 1199573 00:17:37.599 07:13:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@957 -- # uname 00:17:37.599 07:13:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:37.599 07:13:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1199573 00:17:37.599 07:13:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # process_name=reactor_3 00:17:37.599 07:13:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@962 -- # '[' reactor_3 = sudo ']' 00:17:37.599 07:13:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1199573' 00:17:37.599 killing process with pid 1199573 00:17:37.599 07:13:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@971 -- # kill 1199573 00:17:37.599 07:13:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@976 -- # wait 1199573 00:17:37.858 07:13:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:37.858 07:13:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:37.858 07:13:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:37.858 07:13:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:17:37.858 07:13:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:37.858 07:13:42 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:17:37.858 07:13:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:17:37.858 07:13:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:37.858 07:13:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:37.858 07:13:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:37.858 07:13:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:37.858 07:13:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:40.394 07:13:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:40.394 00:17:40.394 real 0m10.854s 00:17:40.394 user 0m13.553s 00:17:40.394 sys 0m5.372s 00:17:40.394 07:13:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1128 -- # xtrace_disable 00:17:40.394 07:13:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:40.394 ************************************ 00:17:40.394 END TEST nvmf_bdevio_no_huge 00:17:40.394 ************************************ 00:17:40.394 07:13:44 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:17:40.394 07:13:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:17:40.394 07:13:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:17:40.394 07:13:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:40.394 
************************************ 00:17:40.394 START TEST nvmf_tls 00:17:40.394 ************************************ 00:17:40.394 07:13:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:17:40.394 * Looking for test storage... 00:17:40.394 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:40.394 07:13:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:17:40.394 07:13:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1691 -- # lcov --version 00:17:40.394 07:13:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:17:40.394 07:13:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:17:40.394 07:13:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:40.394 07:13:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:40.394 07:13:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:40.394 07:13:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:17:40.394 07:13:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:17:40.394 07:13:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:17:40.394 07:13:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:17:40.394 07:13:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:17:40.394 07:13:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:17:40.394 07:13:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:17:40.394 07:13:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- 
# local lt=0 gt=0 eq=0 v 00:17:40.394 07:13:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:17:40.394 07:13:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:17:40.394 07:13:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:40.394 07:13:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:40.394 07:13:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:17:40.394 07:13:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:17:40.394 07:13:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:40.394 07:13:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:17:40.394 07:13:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:17:40.394 07:13:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:17:40.394 07:13:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:17:40.394 07:13:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:40.394 07:13:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:17:40.394 07:13:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:17:40.394 07:13:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:40.394 07:13:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:40.394 07:13:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:17:40.394 07:13:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:40.394 07:13:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:17:40.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:40.394 --rc genhtml_branch_coverage=1 00:17:40.394 --rc genhtml_function_coverage=1 00:17:40.394 --rc genhtml_legend=1 00:17:40.394 --rc geninfo_all_blocks=1 00:17:40.394 --rc geninfo_unexecuted_blocks=1 00:17:40.394 00:17:40.394 ' 00:17:40.394 07:13:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:17:40.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:40.394 --rc genhtml_branch_coverage=1 00:17:40.394 --rc genhtml_function_coverage=1 00:17:40.394 --rc genhtml_legend=1 00:17:40.394 --rc geninfo_all_blocks=1 00:17:40.394 --rc geninfo_unexecuted_blocks=1 00:17:40.394 00:17:40.394 ' 00:17:40.394 07:13:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:17:40.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:40.394 --rc genhtml_branch_coverage=1 00:17:40.394 --rc genhtml_function_coverage=1 00:17:40.394 --rc genhtml_legend=1 00:17:40.394 --rc geninfo_all_blocks=1 00:17:40.394 --rc geninfo_unexecuted_blocks=1 00:17:40.394 00:17:40.394 ' 00:17:40.394 07:13:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:17:40.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:40.394 --rc genhtml_branch_coverage=1 00:17:40.394 --rc genhtml_function_coverage=1 00:17:40.394 --rc genhtml_legend=1 00:17:40.394 --rc geninfo_all_blocks=1 00:17:40.394 --rc geninfo_unexecuted_blocks=1 00:17:40.394 00:17:40.394 ' 00:17:40.394 07:13:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:40.394 07:13:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:17:40.394 07:13:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:40.394 
07:13:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:40.394 07:13:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:40.394 07:13:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:40.394 07:13:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:40.394 07:13:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:40.394 07:13:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:40.394 07:13:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:40.394 07:13:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:40.394 07:13:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:40.394 07:13:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:40.394 07:13:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:17:40.394 07:13:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:40.394 07:13:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:40.394 07:13:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:40.394 07:13:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:40.394 07:13:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:40.394 07:13:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 
00:17:40.394 07:13:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:40.395 07:13:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:40.395 07:13:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:40.395 07:13:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:40.395 07:13:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:40.395 07:13:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:40.395 07:13:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:17:40.395 07:13:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:40.395 07:13:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:17:40.395 07:13:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:40.395 07:13:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:40.395 07:13:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:40.395 07:13:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:40.395 07:13:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:40.395 07:13:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:40.395 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:40.395 07:13:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:40.395 07:13:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:40.395 07:13:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:40.395 07:13:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:40.395 07:13:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:17:40.395 07:13:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:40.395 07:13:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:40.395 07:13:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:40.395 07:13:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:40.395 07:13:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:40.395 07:13:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:40.395 07:13:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:40.395 07:13:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:40.395 07:13:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:40.395 07:13:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:40.395 07:13:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@309 -- # xtrace_disable 00:17:40.395 07:13:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:46.965 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:46.965 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:17:46.965 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:46.965 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:46.965 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:46.965 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:46.965 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:46.965 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:17:46.965 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:46.965 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:17:46.965 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:17:46.965 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:17:46.965 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:17:46.965 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:17:46.965 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:17:46.965 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:46.965 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:46.965 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:46.965 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:46.965 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:46.965 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:46.965 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:46.965 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:46.965 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:46.965 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:46.965 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:46.965 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:46.965 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:46.965 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:46.965 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:46.965 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:46.965 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:46.965 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:46.965 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:46.965 07:13:50 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:17:46.965 Found 0000:86:00.0 (0x8086 - 0x159b) 00:17:46.965 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:46.965 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:46.965 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:46.965 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:46.965 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:46.965 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:46.965 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:17:46.965 Found 0000:86:00.1 (0x8086 - 0x159b) 00:17:46.965 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:46.965 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:46.965 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:46.965 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:46.965 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:46.965 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:46.965 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:46.965 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:46.965 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:46.965 07:13:50 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:46.965 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:46.965 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:46.965 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:46.965 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:46.965 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:46.965 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:17:46.965 Found net devices under 0000:86:00.0: cvl_0_0 00:17:46.965 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:46.965 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:46.965 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:46.965 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:46.965 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:46.965 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:46.965 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:46.965 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:46.965 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:17:46.965 Found net devices under 0000:86:00.1: cvl_0_1 00:17:46.965 07:13:50 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:46.965 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:46.965 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # is_hw=yes 00:17:46.965 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:46.965 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:46.965 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:46.965 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:46.965 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:46.965 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:46.965 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:46.965 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:46.965 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:46.965 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:46.965 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:46.965 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:46.965 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:46.965 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:46.965 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:46.965 
07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:46.965 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:46.965 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:46.965 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:46.965 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:46.965 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:46.965 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:46.965 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:46.965 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:46.966 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:46.966 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:46.966 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:46.966 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.274 ms 00:17:46.966 00:17:46.966 --- 10.0.0.2 ping statistics --- 00:17:46.966 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:46.966 rtt min/avg/max/mdev = 0.274/0.274/0.274/0.000 ms 00:17:46.966 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:46.966 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:46.966 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.222 ms 00:17:46.966 00:17:46.966 --- 10.0.0.1 ping statistics --- 00:17:46.966 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:46.966 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:17:46.966 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:46.966 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # return 0 00:17:46.966 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:46.966 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:46.966 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:46.966 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:46.966 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:46.966 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:46.966 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:46.966 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:17:46.966 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:46.966 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:46.966 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:46.966 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1203586 00:17:46.966 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1203586 00:17:46.966 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns 
exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:17:46.966 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 1203586 ']' 00:17:46.966 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:46.966 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:46.966 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:46.966 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:46.966 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:46.966 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:46.966 [2024-11-20 07:13:50.644537] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 00:17:46.966 [2024-11-20 07:13:50.644594] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:46.966 [2024-11-20 07:13:50.725967] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:46.966 [2024-11-20 07:13:50.765933] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:46.966 [2024-11-20 07:13:50.765970] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:17:46.966 [2024-11-20 07:13:50.765978] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:46.966 [2024-11-20 07:13:50.765983] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:46.966 [2024-11-20 07:13:50.765988] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:46.966 [2024-11-20 07:13:50.766521] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:46.966 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:46.966 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:17:46.966 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:46.966 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:46.966 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:46.966 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:46.966 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:17:46.966 07:13:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:17:46.966 true 00:17:46.966 07:13:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:46.966 07:13:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:17:46.966 07:13:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:17:46.966 07:13:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:17:46.966 
07:13:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:17:46.966 07:13:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:46.966 07:13:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:17:47.225 07:13:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:17:47.225 07:13:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:17:47.225 07:13:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:17:47.485 07:13:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:47.485 07:13:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:17:47.485 07:13:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:17:47.486 07:13:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:17:47.486 07:13:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:47.486 07:13:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:17:47.746 07:13:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:17:47.746 07:13:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:17:47.746 07:13:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 
00:17:48.006 07:13:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:17:48.006 07:13:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:48.265 07:13:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:17:48.265 07:13:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:17:48.265 07:13:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:17:48.265 07:13:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:48.265 07:13:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:17:48.524 07:13:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:17:48.524 07:13:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:17:48.524 07:13:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:17:48.524 07:13:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:17:48.524 07:13:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:17:48.524 07:13:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:17:48.524 07:13:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:17:48.524 07:13:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:17:48.524 07:13:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:17:48.524 07:13:53 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:17:48.524 07:13:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:17:48.524 07:13:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:17:48.524 07:13:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:17:48.524 07:13:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:17:48.524 07:13:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:17:48.524 07:13:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:17:48.524 07:13:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:17:48.524 07:13:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:17:48.524 07:13:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:17:48.524 07:13:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.bG9J9usPLg 00:17:48.524 07:13:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:17:48.783 07:13:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.watr5N2hsK 00:17:48.783 07:13:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:17:48.783 07:13:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:17:48.783 07:13:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.bG9J9usPLg 00:17:48.783 07:13:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@129 -- # chmod 0600 /tmp/tmp.watr5N2hsK 00:17:48.783 07:13:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:17:48.783 07:13:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:17:49.042 07:13:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.bG9J9usPLg 00:17:49.042 07:13:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.bG9J9usPLg 00:17:49.043 07:13:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:49.301 [2024-11-20 07:13:53.723393] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:49.301 07:13:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:49.560 07:13:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:49.561 [2024-11-20 07:13:54.108377] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:49.561 [2024-11-20 07:13:54.108602] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:49.819 07:13:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:49.819 malloc0 00:17:49.819 07:13:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:50.078 07:13:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.bG9J9usPLg 00:17:50.337 07:13:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:17:50.596 07:13:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.bG9J9usPLg 00:18:00.574 Initializing NVMe Controllers 00:18:00.574 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:00.574 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:00.574 Initialization complete. Launching workers. 
00:18:00.574 ======================================================== 00:18:00.574 Latency(us) 00:18:00.574 Device Information : IOPS MiB/s Average min max 00:18:00.574 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 16416.50 64.13 3898.61 810.88 4682.58 00:18:00.574 ======================================================== 00:18:00.574 Total : 16416.50 64.13 3898.61 810.88 4682.58 00:18:00.574 00:18:00.575 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.bG9J9usPLg 00:18:00.575 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:00.575 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:00.575 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:00.575 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.bG9J9usPLg 00:18:00.575 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:00.575 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1205933 00:18:00.575 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:00.575 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:00.575 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1205933 /var/tmp/bdevperf.sock 00:18:00.575 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 1205933 ']' 00:18:00.575 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 
00:18:00.575 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:00.575 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:00.575 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:00.575 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:00.575 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:00.575 [2024-11-20 07:14:05.095509] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 00:18:00.575 [2024-11-20 07:14:05.095556] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1205933 ] 00:18:00.833 [2024-11-20 07:14:05.168287] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:00.833 [2024-11-20 07:14:05.208340] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:00.833 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:00.833 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:18:00.834 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.bG9J9usPLg 00:18:01.092 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 
--psk key0 00:18:01.352 [2024-11-20 07:14:05.697474] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:01.352 TLSTESTn1 00:18:01.352 07:14:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:01.352 Running I/O for 10 seconds... 00:18:03.667 4728.00 IOPS, 18.47 MiB/s [2024-11-20T06:14:09.159Z] 5174.00 IOPS, 20.21 MiB/s [2024-11-20T06:14:10.102Z] 5278.00 IOPS, 20.62 MiB/s [2024-11-20T06:14:11.039Z] 5359.25 IOPS, 20.93 MiB/s [2024-11-20T06:14:11.974Z] 5383.80 IOPS, 21.03 MiB/s [2024-11-20T06:14:13.350Z] 5409.83 IOPS, 21.13 MiB/s [2024-11-20T06:14:13.917Z] 5419.29 IOPS, 21.17 MiB/s [2024-11-20T06:14:15.293Z] 5418.25 IOPS, 21.17 MiB/s [2024-11-20T06:14:16.230Z] 5417.56 IOPS, 21.16 MiB/s [2024-11-20T06:14:16.230Z] 5415.80 IOPS, 21.16 MiB/s 00:18:11.674 Latency(us) 00:18:11.674 [2024-11-20T06:14:16.230Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:11.674 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:11.674 Verification LBA range: start 0x0 length 0x2000 00:18:11.674 TLSTESTn1 : 10.01 5421.64 21.18 0.00 0.00 23574.94 4843.97 64282.27 00:18:11.674 [2024-11-20T06:14:16.230Z] =================================================================================================================== 00:18:11.674 [2024-11-20T06:14:16.230Z] Total : 5421.64 21.18 0.00 0.00 23574.94 4843.97 64282.27 00:18:11.674 { 00:18:11.674 "results": [ 00:18:11.674 { 00:18:11.674 "job": "TLSTESTn1", 00:18:11.674 "core_mask": "0x4", 00:18:11.674 "workload": "verify", 00:18:11.674 "status": "finished", 00:18:11.674 "verify_range": { 00:18:11.674 "start": 0, 00:18:11.674 "length": 8192 00:18:11.674 }, 00:18:11.674 "queue_depth": 128, 00:18:11.674 "io_size": 4096, 00:18:11.674 "runtime": 10.012838, 00:18:11.674 "iops": 
5421.639698954482, 00:18:11.674 "mibps": 21.178280074040945, 00:18:11.674 "io_failed": 0, 00:18:11.674 "io_timeout": 0, 00:18:11.674 "avg_latency_us": 23574.94121908283, 00:18:11.674 "min_latency_us": 4843.965217391305, 00:18:11.674 "max_latency_us": 64282.26782608696 00:18:11.674 } 00:18:11.674 ], 00:18:11.674 "core_count": 1 00:18:11.674 } 00:18:11.674 07:14:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:11.674 07:14:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 1205933 00:18:11.674 07:14:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 1205933 ']' 00:18:11.674 07:14:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 1205933 00:18:11.674 07:14:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:18:11.674 07:14:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:11.674 07:14:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1205933 00:18:11.674 07:14:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:18:11.674 07:14:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:18:11.674 07:14:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1205933' 00:18:11.674 killing process with pid 1205933 00:18:11.674 07:14:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 1205933 00:18:11.674 Received shutdown signal, test time was about 10.000000 seconds 00:18:11.674 00:18:11.674 Latency(us) 00:18:11.674 [2024-11-20T06:14:16.230Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:11.674 [2024-11-20T06:14:16.230Z] 
=================================================================================================================== 00:18:11.674 [2024-11-20T06:14:16.230Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:11.675 07:14:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 1205933 00:18:11.675 07:14:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.watr5N2hsK 00:18:11.675 07:14:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:18:11.675 07:14:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.watr5N2hsK 00:18:11.675 07:14:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:18:11.675 07:14:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:11.675 07:14:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:18:11.675 07:14:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:11.675 07:14:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.watr5N2hsK 00:18:11.675 07:14:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:11.675 07:14:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:11.675 07:14:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:11.675 07:14:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.watr5N2hsK 00:18:11.675 07:14:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:11.675 07:14:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1207767 00:18:11.675 07:14:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:11.675 07:14:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:11.675 07:14:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1207767 /var/tmp/bdevperf.sock 00:18:11.675 07:14:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 1207767 ']' 00:18:11.675 07:14:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:11.675 07:14:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:11.675 07:14:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:11.675 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:11.675 07:14:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:11.675 07:14:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:11.675 [2024-11-20 07:14:16.211522] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 
00:18:11.675 [2024-11-20 07:14:16.211573] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1207767 ] 00:18:11.934 [2024-11-20 07:14:16.286631] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:11.934 [2024-11-20 07:14:16.324453] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:11.934 07:14:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:11.934 07:14:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:18:11.934 07:14:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.watr5N2hsK 00:18:12.192 07:14:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:12.451 [2024-11-20 07:14:16.792295] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:12.451 [2024-11-20 07:14:16.797212] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:12.451 [2024-11-20 07:14:16.797667] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1501170 (107): Transport endpoint is not connected 00:18:12.451 [2024-11-20 07:14:16.798660] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1501170 (9): Bad file descriptor 00:18:12.451 
[2024-11-20 07:14:16.799661] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:18:12.451 [2024-11-20 07:14:16.799672] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:12.451 [2024-11-20 07:14:16.799679] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:18:12.451 [2024-11-20 07:14:16.799689] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 00:18:12.451 request: 00:18:12.451 { 00:18:12.451 "name": "TLSTEST", 00:18:12.451 "trtype": "tcp", 00:18:12.451 "traddr": "10.0.0.2", 00:18:12.451 "adrfam": "ipv4", 00:18:12.451 "trsvcid": "4420", 00:18:12.451 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:12.451 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:12.451 "prchk_reftag": false, 00:18:12.451 "prchk_guard": false, 00:18:12.451 "hdgst": false, 00:18:12.451 "ddgst": false, 00:18:12.451 "psk": "key0", 00:18:12.451 "allow_unrecognized_csi": false, 00:18:12.451 "method": "bdev_nvme_attach_controller", 00:18:12.451 "req_id": 1 00:18:12.451 } 00:18:12.451 Got JSON-RPC error response 00:18:12.451 response: 00:18:12.451 { 00:18:12.451 "code": -5, 00:18:12.451 "message": "Input/output error" 00:18:12.451 } 00:18:12.451 07:14:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1207767 00:18:12.451 07:14:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 1207767 ']' 00:18:12.451 07:14:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 1207767 00:18:12.451 07:14:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:18:12.451 07:14:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:12.451 07:14:16 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1207767 00:18:12.451 07:14:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:18:12.451 07:14:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:18:12.451 07:14:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1207767' 00:18:12.452 killing process with pid 1207767 00:18:12.452 07:14:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 1207767 00:18:12.452 Received shutdown signal, test time was about 10.000000 seconds 00:18:12.452 00:18:12.452 Latency(us) 00:18:12.452 [2024-11-20T06:14:17.008Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:12.452 [2024-11-20T06:14:17.008Z] =================================================================================================================== 00:18:12.452 [2024-11-20T06:14:17.008Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:12.452 07:14:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 1207767 00:18:12.711 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:12.711 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:18:12.711 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:12.711 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:12.711 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:12.711 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.bG9J9usPLg 00:18:12.711 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 
00:18:12.711 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.bG9J9usPLg 00:18:12.711 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:18:12.711 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:12.711 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:18:12.711 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:12.711 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.bG9J9usPLg 00:18:12.711 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:12.711 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:12.711 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:18:12.711 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.bG9J9usPLg 00:18:12.711 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:12.711 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1207937 00:18:12.711 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:12.711 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1207937 /var/tmp/bdevperf.sock 00:18:12.711 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 1207937 ']' 00:18:12.711 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:12.711 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:12.711 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:12.711 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:12.711 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:12.711 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:12.711 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:12.711 [2024-11-20 07:14:17.082119] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 
00:18:12.711 [2024-11-20 07:14:17.082169] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1207937 ] 00:18:12.711 [2024-11-20 07:14:17.156802] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:12.711 [2024-11-20 07:14:17.195976] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:12.971 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:12.971 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:18:12.971 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.bG9J9usPLg 00:18:12.971 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:18:13.230 [2024-11-20 07:14:17.652682] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:13.230 [2024-11-20 07:14:17.659302] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:18:13.230 [2024-11-20 07:14:17.659324] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:18:13.230 [2024-11-20 07:14:17.659346] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not 
connected 00:18:13.230 [2024-11-20 07:14:17.660167] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa19170 (107): Transport endpoint is not connected 00:18:13.230 [2024-11-20 07:14:17.661162] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa19170 (9): Bad file descriptor 00:18:13.230 [2024-11-20 07:14:17.662163] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:18:13.230 [2024-11-20 07:14:17.662174] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:13.230 [2024-11-20 07:14:17.662181] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:18:13.230 [2024-11-20 07:14:17.662191] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 00:18:13.230 request: 00:18:13.230 { 00:18:13.230 "name": "TLSTEST", 00:18:13.230 "trtype": "tcp", 00:18:13.230 "traddr": "10.0.0.2", 00:18:13.230 "adrfam": "ipv4", 00:18:13.230 "trsvcid": "4420", 00:18:13.230 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:13.230 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:18:13.230 "prchk_reftag": false, 00:18:13.230 "prchk_guard": false, 00:18:13.230 "hdgst": false, 00:18:13.230 "ddgst": false, 00:18:13.230 "psk": "key0", 00:18:13.230 "allow_unrecognized_csi": false, 00:18:13.230 "method": "bdev_nvme_attach_controller", 00:18:13.230 "req_id": 1 00:18:13.230 } 00:18:13.230 Got JSON-RPC error response 00:18:13.230 response: 00:18:13.230 { 00:18:13.230 "code": -5, 00:18:13.230 "message": "Input/output error" 00:18:13.230 } 00:18:13.230 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1207937 00:18:13.230 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 1207937 ']' 00:18:13.230 07:14:17 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 1207937 00:18:13.230 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:18:13.230 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:13.230 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1207937 00:18:13.230 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:18:13.230 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:18:13.230 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1207937' 00:18:13.230 killing process with pid 1207937 00:18:13.230 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 1207937 00:18:13.230 Received shutdown signal, test time was about 10.000000 seconds 00:18:13.230 00:18:13.230 Latency(us) 00:18:13.230 [2024-11-20T06:14:17.786Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:13.230 [2024-11-20T06:14:17.786Z] =================================================================================================================== 00:18:13.230 [2024-11-20T06:14:17.786Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:13.230 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 1207937 00:18:13.489 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:13.489 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:18:13.489 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:13.489 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:13.489 07:14:17 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:13.489 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.bG9J9usPLg 00:18:13.489 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:18:13.489 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.bG9J9usPLg 00:18:13.489 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:18:13.489 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:13.489 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:18:13.489 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:13.489 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.bG9J9usPLg 00:18:13.489 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:13.489 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:18:13.489 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:13.489 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.bG9J9usPLg 00:18:13.489 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:13.489 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1208016 00:18:13.489 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 
'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:13.489 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:13.489 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1208016 /var/tmp/bdevperf.sock 00:18:13.489 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 1208016 ']' 00:18:13.489 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:13.489 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:13.489 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:13.489 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:13.489 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:13.489 07:14:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:13.489 [2024-11-20 07:14:17.941699] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 
00:18:13.490 [2024-11-20 07:14:17.941744] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1208016 ] 00:18:13.490 [2024-11-20 07:14:18.014748] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:13.747 [2024-11-20 07:14:18.052486] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:13.747 07:14:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:13.747 07:14:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:18:13.747 07:14:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.bG9J9usPLg 00:18:14.004 07:14:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:14.004 [2024-11-20 07:14:18.544329] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:14.004 [2024-11-20 07:14:18.551595] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:18:14.004 [2024-11-20 07:14:18.551616] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:18:14.004 [2024-11-20 07:14:18.551639] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not 
connected 00:18:14.004 [2024-11-20 07:14:18.552614] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b0d170 (107): Transport endpoint is not connected 00:18:14.004 [2024-11-20 07:14:18.553608] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b0d170 (9): Bad file descriptor 00:18:14.004 [2024-11-20 07:14:18.554609] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:18:14.004 [2024-11-20 07:14:18.554620] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:14.262 [2024-11-20 07:14:18.554629] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:18:14.262 [2024-11-20 07:14:18.554640] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 00:18:14.262 request: 00:18:14.262 { 00:18:14.262 "name": "TLSTEST", 00:18:14.262 "trtype": "tcp", 00:18:14.262 "traddr": "10.0.0.2", 00:18:14.262 "adrfam": "ipv4", 00:18:14.262 "trsvcid": "4420", 00:18:14.262 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:18:14.262 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:14.262 "prchk_reftag": false, 00:18:14.262 "prchk_guard": false, 00:18:14.262 "hdgst": false, 00:18:14.262 "ddgst": false, 00:18:14.262 "psk": "key0", 00:18:14.262 "allow_unrecognized_csi": false, 00:18:14.262 "method": "bdev_nvme_attach_controller", 00:18:14.262 "req_id": 1 00:18:14.262 } 00:18:14.262 Got JSON-RPC error response 00:18:14.262 response: 00:18:14.262 { 00:18:14.262 "code": -5, 00:18:14.262 "message": "Input/output error" 00:18:14.262 } 00:18:14.262 07:14:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1208016 00:18:14.262 07:14:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 1208016 ']' 00:18:14.262 07:14:18 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 1208016 00:18:14.262 07:14:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:18:14.262 07:14:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:14.262 07:14:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1208016 00:18:14.262 07:14:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:18:14.262 07:14:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:18:14.262 07:14:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1208016' 00:18:14.262 killing process with pid 1208016 00:18:14.262 07:14:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 1208016 00:18:14.262 Received shutdown signal, test time was about 10.000000 seconds 00:18:14.262 00:18:14.262 Latency(us) 00:18:14.262 [2024-11-20T06:14:18.818Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:14.262 [2024-11-20T06:14:18.818Z] =================================================================================================================== 00:18:14.262 [2024-11-20T06:14:18.818Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:14.262 07:14:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 1208016 00:18:14.262 07:14:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:14.262 07:14:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:18:14.263 07:14:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:14.263 07:14:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:14.263 07:14:18 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:14.263 07:14:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:14.263 07:14:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:18:14.263 07:14:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:14.263 07:14:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:18:14.263 07:14:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:14.263 07:14:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:18:14.263 07:14:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:14.263 07:14:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:14.263 07:14:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:14.263 07:14:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:14.263 07:14:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:14.263 07:14:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:18:14.263 07:14:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:14.263 07:14:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1208251 00:18:14.263 07:14:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 
0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:14.263 07:14:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:14.263 07:14:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1208251 /var/tmp/bdevperf.sock 00:18:14.263 07:14:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 1208251 ']' 00:18:14.263 07:14:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:14.263 07:14:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:14.263 07:14:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:14.263 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:14.263 07:14:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:14.263 07:14:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:14.522 [2024-11-20 07:14:18.836615] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 
00:18:14.522 [2024-11-20 07:14:18.836665] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1208251 ] 00:18:14.522 [2024-11-20 07:14:18.901695] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:14.522 [2024-11-20 07:14:18.939359] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:14.522 07:14:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:14.522 07:14:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:18:14.522 07:14:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:18:14.781 [2024-11-20 07:14:19.207216] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:18:14.781 [2024-11-20 07:14:19.207249] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:18:14.781 request: 00:18:14.781 { 00:18:14.781 "name": "key0", 00:18:14.781 "path": "", 00:18:14.781 "method": "keyring_file_add_key", 00:18:14.781 "req_id": 1 00:18:14.781 } 00:18:14.781 Got JSON-RPC error response 00:18:14.781 response: 00:18:14.781 { 00:18:14.781 "code": -1, 00:18:14.781 "message": "Operation not permitted" 00:18:14.781 } 00:18:14.781 07:14:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:15.041 [2024-11-20 07:14:19.403822] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 
00:18:15.041 [2024-11-20 07:14:19.403850] bdev_nvme.c:6622:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:18:15.041 request: 00:18:15.041 { 00:18:15.041 "name": "TLSTEST", 00:18:15.041 "trtype": "tcp", 00:18:15.041 "traddr": "10.0.0.2", 00:18:15.041 "adrfam": "ipv4", 00:18:15.041 "trsvcid": "4420", 00:18:15.041 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:15.041 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:15.041 "prchk_reftag": false, 00:18:15.041 "prchk_guard": false, 00:18:15.041 "hdgst": false, 00:18:15.041 "ddgst": false, 00:18:15.041 "psk": "key0", 00:18:15.041 "allow_unrecognized_csi": false, 00:18:15.041 "method": "bdev_nvme_attach_controller", 00:18:15.041 "req_id": 1 00:18:15.041 } 00:18:15.041 Got JSON-RPC error response 00:18:15.041 response: 00:18:15.041 { 00:18:15.041 "code": -126, 00:18:15.041 "message": "Required key not available" 00:18:15.041 } 00:18:15.041 07:14:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1208251 00:18:15.041 07:14:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 1208251 ']' 00:18:15.041 07:14:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 1208251 00:18:15.041 07:14:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:18:15.041 07:14:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:15.041 07:14:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1208251 00:18:15.041 07:14:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:18:15.041 07:14:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:18:15.041 07:14:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1208251' 00:18:15.041 killing process with pid 1208251 
00:18:15.041 07:14:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 1208251 00:18:15.041 Received shutdown signal, test time was about 10.000000 seconds 00:18:15.041 00:18:15.041 Latency(us) 00:18:15.041 [2024-11-20T06:14:19.597Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:15.041 [2024-11-20T06:14:19.597Z] =================================================================================================================== 00:18:15.041 [2024-11-20T06:14:19.597Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:15.041 07:14:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 1208251 00:18:15.300 07:14:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:15.300 07:14:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:18:15.300 07:14:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:15.300 07:14:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:15.300 07:14:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:15.300 07:14:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 1203586 00:18:15.300 07:14:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 1203586 ']' 00:18:15.300 07:14:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 1203586 00:18:15.300 07:14:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:18:15.300 07:14:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:15.300 07:14:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1203586 00:18:15.300 07:14:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- 
# process_name=reactor_1 00:18:15.300 07:14:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:18:15.300 07:14:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1203586' 00:18:15.300 killing process with pid 1203586 00:18:15.300 07:14:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 1203586 00:18:15.300 07:14:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 1203586 00:18:15.558 07:14:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:18:15.558 07:14:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:18:15.558 07:14:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:18:15.558 07:14:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:18:15.558 07:14:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:18:15.558 07:14:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:18:15.558 07:14:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:18:15.558 07:14:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:18:15.558 07:14:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:18:15.558 07:14:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.lWWn6jfH0s 00:18:15.558 07:14:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:18:15.558 07:14:19 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.lWWn6jfH0s 00:18:15.558 07:14:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:18:15.558 07:14:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:15.558 07:14:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:15.558 07:14:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:15.558 07:14:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1208468 00:18:15.558 07:14:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:15.558 07:14:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1208468 00:18:15.558 07:14:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 1208468 ']' 00:18:15.558 07:14:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:15.559 07:14:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:15.559 07:14:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:15.559 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:15.559 07:14:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:15.559 07:14:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:15.559 [2024-11-20 07:14:19.951717] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 
00:18:15.559 [2024-11-20 07:14:19.951766] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:15.559 [2024-11-20 07:14:20.034045] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:15.559 [2024-11-20 07:14:20.076501] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:15.559 [2024-11-20 07:14:20.076540] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:15.559 [2024-11-20 07:14:20.076548] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:15.559 [2024-11-20 07:14:20.076555] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:15.559 [2024-11-20 07:14:20.076560] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:15.559 [2024-11-20 07:14:20.077127] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:15.817 07:14:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:15.817 07:14:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:18:15.817 07:14:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:15.817 07:14:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:15.817 07:14:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:15.817 07:14:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:15.817 07:14:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.lWWn6jfH0s 00:18:15.817 07:14:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.lWWn6jfH0s 00:18:15.817 07:14:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:16.074 [2024-11-20 07:14:20.398204] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:16.074 07:14:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:16.074 07:14:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:16.332 [2024-11-20 07:14:20.779203] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:16.332 [2024-11-20 07:14:20.779406] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:18:16.332 07:14:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:16.590 malloc0 00:18:16.590 07:14:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:16.854 07:14:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.lWWn6jfH0s 00:18:17.112 07:14:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:18:17.112 07:14:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.lWWn6jfH0s 00:18:17.112 07:14:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:17.112 07:14:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:17.112 07:14:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:17.112 07:14:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.lWWn6jfH0s 00:18:17.112 07:14:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:17.112 07:14:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:17.112 07:14:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1208756 00:18:17.112 07:14:21 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:17.112 07:14:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1208756 /var/tmp/bdevperf.sock 00:18:17.112 07:14:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 1208756 ']' 00:18:17.112 07:14:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:17.112 07:14:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:17.112 07:14:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:17.112 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:17.112 07:14:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:17.112 07:14:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:17.371 [2024-11-20 07:14:21.665172] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 
00:18:17.371 [2024-11-20 07:14:21.665223] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1208756 ] 00:18:17.371 [2024-11-20 07:14:21.741990] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:17.371 [2024-11-20 07:14:21.782733] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:17.371 07:14:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:17.371 07:14:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:18:17.371 07:14:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.lWWn6jfH0s 00:18:17.629 07:14:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:17.890 [2024-11-20 07:14:22.242794] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:17.890 TLSTESTn1 00:18:17.890 07:14:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:17.890 Running I/O for 10 seconds... 
00:18:20.200 5353.00 IOPS, 20.91 MiB/s
[2024-11-20T06:14:25.690Z] 5308.50 IOPS, 20.74 MiB/s
[2024-11-20T06:14:26.627Z] 5299.00 IOPS, 20.70 MiB/s
[2024-11-20T06:14:27.566Z] 5339.00 IOPS, 20.86 MiB/s
[2024-11-20T06:14:28.507Z] 5363.40 IOPS, 20.95 MiB/s
[2024-11-20T06:14:29.447Z] 5368.83 IOPS, 20.97 MiB/s
[2024-11-20T06:14:30.823Z] 5364.00 IOPS, 20.95 MiB/s
[2024-11-20T06:14:31.756Z] 5315.25 IOPS, 20.76 MiB/s
[2024-11-20T06:14:32.693Z] 5241.89 IOPS, 20.48 MiB/s
[2024-11-20T06:14:32.693Z] 5216.20 IOPS, 20.38 MiB/s
00:18:28.137 Latency(us)
00:18:28.137 [2024-11-20T06:14:32.693Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:28.137 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:18:28.137 Verification LBA range: start 0x0 length 0x2000
00:18:28.137 TLSTESTn1 : 10.02 5219.98 20.39 0.00 0.00 24484.40 4872.46 31229.33
00:18:28.137 [2024-11-20T06:14:32.693Z] ===================================================================================================================
00:18:28.137 [2024-11-20T06:14:32.693Z] Total : 5219.98 20.39 0.00 0.00 24484.40 4872.46 31229.33
00:18:28.137 {
00:18:28.137 "results": [
00:18:28.137 {
00:18:28.137 "job": "TLSTESTn1",
00:18:28.137 "core_mask": "0x4",
00:18:28.137 "workload": "verify",
00:18:28.137 "status": "finished",
00:18:28.137 "verify_range": {
00:18:28.137 "start": 0,
00:18:28.137 "length": 8192
00:18:28.137 },
00:18:28.137 "queue_depth": 128,
00:18:28.137 "io_size": 4096,
00:18:28.137 "runtime": 10.017085,
00:18:28.137 "iops": 5219.981661331615,
00:18:28.137 "mibps": 20.390553364576622,
00:18:28.137 "io_failed": 0,
00:18:28.137 "io_timeout": 0,
00:18:28.137 "avg_latency_us": 24484.401134497486,
00:18:28.137 "min_latency_us": 4872.459130434782,
00:18:28.137 "max_latency_us": 31229.328695652173
00:18:28.137 }
00:18:28.137 ],
00:18:28.137 "core_count": 1
00:18:28.137 }
00:18:28.137 07:14:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini;
exit 1' SIGINT SIGTERM EXIT 00:18:28.137 07:14:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 1208756 00:18:28.137 07:14:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 1208756 ']' 00:18:28.137 07:14:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 1208756 00:18:28.137 07:14:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:18:28.137 07:14:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:28.137 07:14:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1208756 00:18:28.137 07:14:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:18:28.137 07:14:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:18:28.137 07:14:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1208756' 00:18:28.137 killing process with pid 1208756 00:18:28.137 07:14:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 1208756 00:18:28.137 Received shutdown signal, test time was about 10.000000 seconds 00:18:28.137 00:18:28.137 Latency(us) 00:18:28.137 [2024-11-20T06:14:32.693Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:28.137 [2024-11-20T06:14:32.693Z] =================================================================================================================== 00:18:28.137 [2024-11-20T06:14:32.693Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:28.137 07:14:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 1208756 00:18:28.396 07:14:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.lWWn6jfH0s 00:18:28.396 07:14:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.lWWn6jfH0s 00:18:28.396 07:14:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:18:28.396 07:14:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.lWWn6jfH0s 00:18:28.396 07:14:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:18:28.396 07:14:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:28.396 07:14:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:18:28.396 07:14:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:28.396 07:14:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.lWWn6jfH0s 00:18:28.396 07:14:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:28.396 07:14:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:28.396 07:14:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:28.397 07:14:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.lWWn6jfH0s 00:18:28.397 07:14:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:28.397 07:14:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1210582 00:18:28.397 07:14:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:28.397 07:14:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:28.397 07:14:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1210582 /var/tmp/bdevperf.sock 00:18:28.397 07:14:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 1210582 ']' 00:18:28.397 07:14:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:28.397 07:14:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:28.397 07:14:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:28.397 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:28.397 07:14:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:28.397 07:14:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:28.397 [2024-11-20 07:14:32.754683] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 
00:18:28.397 [2024-11-20 07:14:32.754732] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1210582 ] 00:18:28.397 [2024-11-20 07:14:32.828825] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:28.397 [2024-11-20 07:14:32.871317] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:28.655 07:14:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:28.655 07:14:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:18:28.655 07:14:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.lWWn6jfH0s 00:18:28.655 [2024-11-20 07:14:33.134817] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.lWWn6jfH0s': 0100666 00:18:28.655 [2024-11-20 07:14:33.134845] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:18:28.655 request: 00:18:28.655 { 00:18:28.655 "name": "key0", 00:18:28.655 "path": "/tmp/tmp.lWWn6jfH0s", 00:18:28.655 "method": "keyring_file_add_key", 00:18:28.655 "req_id": 1 00:18:28.655 } 00:18:28.655 Got JSON-RPC error response 00:18:28.655 response: 00:18:28.655 { 00:18:28.655 "code": -1, 00:18:28.655 "message": "Operation not permitted" 00:18:28.655 } 00:18:28.655 07:14:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:28.914 [2024-11-20 07:14:33.311355] bdev_nvme_rpc.c: 
514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental
00:18:28.914 [2024-11-20 07:14:33.311386] bdev_nvme.c:6622:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0
00:18:28.914 request:
00:18:28.914 {
00:18:28.914 "name": "TLSTEST",
00:18:28.914 "trtype": "tcp",
00:18:28.914 "traddr": "10.0.0.2",
00:18:28.914 "adrfam": "ipv4",
00:18:28.914 "trsvcid": "4420",
00:18:28.914 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:18:28.914 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:18:28.914 "prchk_reftag": false,
00:18:28.914 "prchk_guard": false,
00:18:28.914 "hdgst": false,
00:18:28.914 "ddgst": false,
00:18:28.914 "psk": "key0",
00:18:28.914 "allow_unrecognized_csi": false,
00:18:28.914 "method": "bdev_nvme_attach_controller",
00:18:28.914 "req_id": 1
00:18:28.914 }
00:18:28.914 Got JSON-RPC error response
00:18:28.914 response:
00:18:28.914 {
00:18:28.914 "code": -126,
00:18:28.914 "message": "Required key not available"
00:18:28.914 }
00:18:28.914 07:14:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1210582
00:18:28.914 07:14:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 1210582 ']'
00:18:28.914 07:14:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 1210582
00:18:28.914 07:14:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname
00:18:28.914 07:14:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:18:28.914 07:14:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1210582
00:18:28.914 07:14:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2
00:18:28.914 07:14:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']'
00:18:28.914 07:14:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo
'killing process with pid 1210582' 00:18:28.914 killing process with pid 1210582 00:18:28.914 07:14:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 1210582 00:18:28.914 Received shutdown signal, test time was about 10.000000 seconds 00:18:28.914 00:18:28.914 Latency(us) 00:18:28.914 [2024-11-20T06:14:33.470Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:28.914 [2024-11-20T06:14:33.470Z] =================================================================================================================== 00:18:28.914 [2024-11-20T06:14:33.470Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:28.914 07:14:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 1210582 00:18:29.173 07:14:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:29.173 07:14:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:18:29.173 07:14:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:29.173 07:14:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:29.173 07:14:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:29.173 07:14:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 1208468 00:18:29.173 07:14:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 1208468 ']' 00:18:29.173 07:14:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 1208468 00:18:29.173 07:14:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:18:29.173 07:14:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:29.173 07:14:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1208468 00:18:29.173 
07:14:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:18:29.173 07:14:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:18:29.173 07:14:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1208468' 00:18:29.173 killing process with pid 1208468 00:18:29.173 07:14:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 1208468 00:18:29.173 07:14:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 1208468 00:18:29.432 07:14:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:18:29.432 07:14:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:29.432 07:14:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:29.432 07:14:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:29.432 07:14:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1210626 00:18:29.432 07:14:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:29.432 07:14:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1210626 00:18:29.432 07:14:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 1210626 ']' 00:18:29.432 07:14:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:29.432 07:14:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:29.432 07:14:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:18:29.432 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:29.432 07:14:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:29.432 07:14:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:29.432 [2024-11-20 07:14:33.813473] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 00:18:29.432 [2024-11-20 07:14:33.813521] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:29.432 [2024-11-20 07:14:33.890998] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:29.433 [2024-11-20 07:14:33.932062] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:29.433 [2024-11-20 07:14:33.932097] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:29.433 [2024-11-20 07:14:33.932105] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:29.433 [2024-11-20 07:14:33.932112] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:29.433 [2024-11-20 07:14:33.932117] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:29.433 [2024-11-20 07:14:33.932702] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:29.691 07:14:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:29.691 07:14:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:18:29.691 07:14:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:29.691 07:14:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:29.691 07:14:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:29.691 07:14:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:29.691 07:14:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.lWWn6jfH0s 00:18:29.691 07:14:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:18:29.691 07:14:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.lWWn6jfH0s 00:18:29.691 07:14:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=setup_nvmf_tgt 00:18:29.691 07:14:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:29.691 07:14:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t setup_nvmf_tgt 00:18:29.691 07:14:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:29.691 07:14:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # setup_nvmf_tgt /tmp/tmp.lWWn6jfH0s 00:18:29.691 07:14:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.lWWn6jfH0s 00:18:29.691 07:14:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:29.950 [2024-11-20 07:14:34.258471] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:29.950 07:14:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:29.950 07:14:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:30.209 [2024-11-20 07:14:34.623387] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:30.209 [2024-11-20 07:14:34.623577] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:30.209 07:14:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:30.467 malloc0 00:18:30.467 07:14:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:30.467 07:14:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.lWWn6jfH0s 00:18:30.725 [2024-11-20 07:14:35.180907] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.lWWn6jfH0s': 0100666 00:18:30.725 [2024-11-20 07:14:35.180939] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:18:30.725 request: 00:18:30.725 { 00:18:30.725 "name": "key0", 00:18:30.725 "path": "/tmp/tmp.lWWn6jfH0s", 00:18:30.725 "method": "keyring_file_add_key", 00:18:30.725 "req_id": 1 
00:18:30.725 } 00:18:30.725 Got JSON-RPC error response 00:18:30.725 response: 00:18:30.726 { 00:18:30.726 "code": -1, 00:18:30.726 "message": "Operation not permitted" 00:18:30.726 } 00:18:30.726 07:14:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:18:30.984 [2024-11-20 07:14:35.369426] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:18:30.984 [2024-11-20 07:14:35.369464] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:18:30.984 request: 00:18:30.984 { 00:18:30.984 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:30.984 "host": "nqn.2016-06.io.spdk:host1", 00:18:30.984 "psk": "key0", 00:18:30.984 "method": "nvmf_subsystem_add_host", 00:18:30.984 "req_id": 1 00:18:30.984 } 00:18:30.984 Got JSON-RPC error response 00:18:30.984 response: 00:18:30.984 { 00:18:30.984 "code": -32603, 00:18:30.984 "message": "Internal error" 00:18:30.984 } 00:18:30.984 07:14:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:18:30.984 07:14:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:30.984 07:14:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:30.984 07:14:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:30.984 07:14:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 1210626 00:18:30.984 07:14:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 1210626 ']' 00:18:30.984 07:14:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 1210626 00:18:30.984 07:14:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:18:30.984 07:14:35 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:30.984 07:14:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1210626 00:18:30.985 07:14:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:18:30.985 07:14:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:18:30.985 07:14:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1210626' 00:18:30.985 killing process with pid 1210626 00:18:30.985 07:14:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 1210626 00:18:30.985 07:14:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 1210626 00:18:31.244 07:14:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.lWWn6jfH0s 00:18:31.244 07:14:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:18:31.244 07:14:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:31.244 07:14:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:31.244 07:14:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:31.244 07:14:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1211098 00:18:31.244 07:14:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1211098 00:18:31.244 07:14:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:31.244 07:14:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 1211098 ']' 00:18:31.244 07:14:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:31.244 07:14:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:31.244 07:14:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:31.244 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:31.244 07:14:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:31.244 07:14:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:31.244 [2024-11-20 07:14:35.670405] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 00:18:31.244 [2024-11-20 07:14:35.670449] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:31.244 [2024-11-20 07:14:35.743738] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:31.244 [2024-11-20 07:14:35.784692] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:31.244 [2024-11-20 07:14:35.784728] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:31.244 [2024-11-20 07:14:35.784735] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:31.244 [2024-11-20 07:14:35.784742] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:31.244 [2024-11-20 07:14:35.784747] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:31.244 [2024-11-20 07:14:35.785288] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:31.503 07:14:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:31.503 07:14:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:18:31.503 07:14:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:31.503 07:14:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:31.503 07:14:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:31.503 07:14:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:31.503 07:14:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.lWWn6jfH0s 00:18:31.503 07:14:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.lWWn6jfH0s 00:18:31.503 07:14:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:31.761 [2024-11-20 07:14:36.090828] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:31.761 07:14:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:32.019 07:14:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:32.019 [2024-11-20 07:14:36.475818] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:32.019 [2024-11-20 07:14:36.476016] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:18:32.019 07:14:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:32.277 malloc0 00:18:32.277 07:14:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:32.535 07:14:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.lWWn6jfH0s 00:18:32.794 07:14:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:18:32.794 07:14:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=1211355 00:18:32.794 07:14:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:32.794 07:14:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:32.794 07:14:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 1211355 /var/tmp/bdevperf.sock 00:18:32.794 07:14:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 1211355 ']' 00:18:32.794 07:14:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:32.794 07:14:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:32.794 07:14:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/bdevperf.sock...' 00:18:32.794 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:32.794 07:14:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:32.794 07:14:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:33.052 [2024-11-20 07:14:37.346860] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 00:18:33.053 [2024-11-20 07:14:37.346908] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1211355 ] 00:18:33.053 [2024-11-20 07:14:37.419668] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:33.053 [2024-11-20 07:14:37.460259] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:33.053 07:14:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:33.053 07:14:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:18:33.053 07:14:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.lWWn6jfH0s 00:18:33.311 07:14:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:33.569 [2024-11-20 07:14:37.928859] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:33.569 TLSTESTn1 00:18:33.569 07:14:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:18:33.828 07:14:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:18:33.828 "subsystems": [ 00:18:33.828 { 00:18:33.828 "subsystem": "keyring", 00:18:33.828 "config": [ 00:18:33.828 { 00:18:33.828 "method": "keyring_file_add_key", 00:18:33.828 "params": { 00:18:33.828 "name": "key0", 00:18:33.828 "path": "/tmp/tmp.lWWn6jfH0s" 00:18:33.828 } 00:18:33.828 } 00:18:33.828 ] 00:18:33.828 }, 00:18:33.828 { 00:18:33.828 "subsystem": "iobuf", 00:18:33.828 "config": [ 00:18:33.828 { 00:18:33.828 "method": "iobuf_set_options", 00:18:33.828 "params": { 00:18:33.828 "small_pool_count": 8192, 00:18:33.828 "large_pool_count": 1024, 00:18:33.828 "small_bufsize": 8192, 00:18:33.828 "large_bufsize": 135168, 00:18:33.828 "enable_numa": false 00:18:33.828 } 00:18:33.828 } 00:18:33.828 ] 00:18:33.828 }, 00:18:33.828 { 00:18:33.828 "subsystem": "sock", 00:18:33.828 "config": [ 00:18:33.828 { 00:18:33.828 "method": "sock_set_default_impl", 00:18:33.828 "params": { 00:18:33.828 "impl_name": "posix" 00:18:33.828 } 00:18:33.828 }, 00:18:33.828 { 00:18:33.828 "method": "sock_impl_set_options", 00:18:33.828 "params": { 00:18:33.828 "impl_name": "ssl", 00:18:33.828 "recv_buf_size": 4096, 00:18:33.828 "send_buf_size": 4096, 00:18:33.828 "enable_recv_pipe": true, 00:18:33.828 "enable_quickack": false, 00:18:33.828 "enable_placement_id": 0, 00:18:33.828 "enable_zerocopy_send_server": true, 00:18:33.828 "enable_zerocopy_send_client": false, 00:18:33.828 "zerocopy_threshold": 0, 00:18:33.828 "tls_version": 0, 00:18:33.828 "enable_ktls": false 00:18:33.828 } 00:18:33.828 }, 00:18:33.828 { 00:18:33.828 "method": "sock_impl_set_options", 00:18:33.828 "params": { 00:18:33.828 "impl_name": "posix", 00:18:33.828 "recv_buf_size": 2097152, 00:18:33.828 "send_buf_size": 2097152, 00:18:33.828 "enable_recv_pipe": true, 00:18:33.828 "enable_quickack": false, 00:18:33.828 "enable_placement_id": 0, 
00:18:33.828 "enable_zerocopy_send_server": true, 00:18:33.828 "enable_zerocopy_send_client": false, 00:18:33.828 "zerocopy_threshold": 0, 00:18:33.828 "tls_version": 0, 00:18:33.828 "enable_ktls": false 00:18:33.828 } 00:18:33.828 } 00:18:33.828 ] 00:18:33.828 }, 00:18:33.828 { 00:18:33.828 "subsystem": "vmd", 00:18:33.828 "config": [] 00:18:33.828 }, 00:18:33.828 { 00:18:33.828 "subsystem": "accel", 00:18:33.828 "config": [ 00:18:33.828 { 00:18:33.828 "method": "accel_set_options", 00:18:33.828 "params": { 00:18:33.828 "small_cache_size": 128, 00:18:33.828 "large_cache_size": 16, 00:18:33.828 "task_count": 2048, 00:18:33.828 "sequence_count": 2048, 00:18:33.828 "buf_count": 2048 00:18:33.828 } 00:18:33.828 } 00:18:33.828 ] 00:18:33.828 }, 00:18:33.828 { 00:18:33.828 "subsystem": "bdev", 00:18:33.828 "config": [ 00:18:33.828 { 00:18:33.828 "method": "bdev_set_options", 00:18:33.828 "params": { 00:18:33.828 "bdev_io_pool_size": 65535, 00:18:33.828 "bdev_io_cache_size": 256, 00:18:33.828 "bdev_auto_examine": true, 00:18:33.828 "iobuf_small_cache_size": 128, 00:18:33.828 "iobuf_large_cache_size": 16 00:18:33.828 } 00:18:33.828 }, 00:18:33.828 { 00:18:33.828 "method": "bdev_raid_set_options", 00:18:33.828 "params": { 00:18:33.828 "process_window_size_kb": 1024, 00:18:33.829 "process_max_bandwidth_mb_sec": 0 00:18:33.829 } 00:18:33.829 }, 00:18:33.829 { 00:18:33.829 "method": "bdev_iscsi_set_options", 00:18:33.829 "params": { 00:18:33.829 "timeout_sec": 30 00:18:33.829 } 00:18:33.829 }, 00:18:33.829 { 00:18:33.829 "method": "bdev_nvme_set_options", 00:18:33.829 "params": { 00:18:33.829 "action_on_timeout": "none", 00:18:33.829 "timeout_us": 0, 00:18:33.829 "timeout_admin_us": 0, 00:18:33.829 "keep_alive_timeout_ms": 10000, 00:18:33.829 "arbitration_burst": 0, 00:18:33.829 "low_priority_weight": 0, 00:18:33.829 "medium_priority_weight": 0, 00:18:33.829 "high_priority_weight": 0, 00:18:33.829 "nvme_adminq_poll_period_us": 10000, 00:18:33.829 "nvme_ioq_poll_period_us": 0, 
00:18:33.829 "io_queue_requests": 0, 00:18:33.829 "delay_cmd_submit": true, 00:18:33.829 "transport_retry_count": 4, 00:18:33.829 "bdev_retry_count": 3, 00:18:33.829 "transport_ack_timeout": 0, 00:18:33.829 "ctrlr_loss_timeout_sec": 0, 00:18:33.829 "reconnect_delay_sec": 0, 00:18:33.829 "fast_io_fail_timeout_sec": 0, 00:18:33.829 "disable_auto_failback": false, 00:18:33.829 "generate_uuids": false, 00:18:33.829 "transport_tos": 0, 00:18:33.829 "nvme_error_stat": false, 00:18:33.829 "rdma_srq_size": 0, 00:18:33.829 "io_path_stat": false, 00:18:33.829 "allow_accel_sequence": false, 00:18:33.829 "rdma_max_cq_size": 0, 00:18:33.829 "rdma_cm_event_timeout_ms": 0, 00:18:33.829 "dhchap_digests": [ 00:18:33.829 "sha256", 00:18:33.829 "sha384", 00:18:33.829 "sha512" 00:18:33.829 ], 00:18:33.829 "dhchap_dhgroups": [ 00:18:33.829 "null", 00:18:33.829 "ffdhe2048", 00:18:33.829 "ffdhe3072", 00:18:33.829 "ffdhe4096", 00:18:33.829 "ffdhe6144", 00:18:33.829 "ffdhe8192" 00:18:33.829 ] 00:18:33.829 } 00:18:33.829 }, 00:18:33.829 { 00:18:33.829 "method": "bdev_nvme_set_hotplug", 00:18:33.829 "params": { 00:18:33.829 "period_us": 100000, 00:18:33.829 "enable": false 00:18:33.829 } 00:18:33.829 }, 00:18:33.829 { 00:18:33.829 "method": "bdev_malloc_create", 00:18:33.829 "params": { 00:18:33.829 "name": "malloc0", 00:18:33.829 "num_blocks": 8192, 00:18:33.829 "block_size": 4096, 00:18:33.829 "physical_block_size": 4096, 00:18:33.829 "uuid": "b5a14b5b-9f64-4896-917e-67c01965a93e", 00:18:33.829 "optimal_io_boundary": 0, 00:18:33.829 "md_size": 0, 00:18:33.829 "dif_type": 0, 00:18:33.829 "dif_is_head_of_md": false, 00:18:33.829 "dif_pi_format": 0 00:18:33.829 } 00:18:33.829 }, 00:18:33.829 { 00:18:33.829 "method": "bdev_wait_for_examine" 00:18:33.829 } 00:18:33.829 ] 00:18:33.829 }, 00:18:33.829 { 00:18:33.829 "subsystem": "nbd", 00:18:33.829 "config": [] 00:18:33.829 }, 00:18:33.829 { 00:18:33.829 "subsystem": "scheduler", 00:18:33.829 "config": [ 00:18:33.829 { 00:18:33.829 "method": 
"framework_set_scheduler", 00:18:33.829 "params": { 00:18:33.829 "name": "static" 00:18:33.829 } 00:18:33.829 } 00:18:33.829 ] 00:18:33.829 }, 00:18:33.829 { 00:18:33.829 "subsystem": "nvmf", 00:18:33.829 "config": [ 00:18:33.829 { 00:18:33.829 "method": "nvmf_set_config", 00:18:33.829 "params": { 00:18:33.829 "discovery_filter": "match_any", 00:18:33.829 "admin_cmd_passthru": { 00:18:33.829 "identify_ctrlr": false 00:18:33.829 }, 00:18:33.829 "dhchap_digests": [ 00:18:33.829 "sha256", 00:18:33.829 "sha384", 00:18:33.829 "sha512" 00:18:33.829 ], 00:18:33.829 "dhchap_dhgroups": [ 00:18:33.829 "null", 00:18:33.829 "ffdhe2048", 00:18:33.829 "ffdhe3072", 00:18:33.829 "ffdhe4096", 00:18:33.829 "ffdhe6144", 00:18:33.829 "ffdhe8192" 00:18:33.829 ] 00:18:33.829 } 00:18:33.829 }, 00:18:33.829 { 00:18:33.829 "method": "nvmf_set_max_subsystems", 00:18:33.829 "params": { 00:18:33.829 "max_subsystems": 1024 00:18:33.829 } 00:18:33.829 }, 00:18:33.829 { 00:18:33.829 "method": "nvmf_set_crdt", 00:18:33.829 "params": { 00:18:33.829 "crdt1": 0, 00:18:33.829 "crdt2": 0, 00:18:33.829 "crdt3": 0 00:18:33.829 } 00:18:33.829 }, 00:18:33.829 { 00:18:33.829 "method": "nvmf_create_transport", 00:18:33.829 "params": { 00:18:33.829 "trtype": "TCP", 00:18:33.829 "max_queue_depth": 128, 00:18:33.829 "max_io_qpairs_per_ctrlr": 127, 00:18:33.829 "in_capsule_data_size": 4096, 00:18:33.829 "max_io_size": 131072, 00:18:33.829 "io_unit_size": 131072, 00:18:33.829 "max_aq_depth": 128, 00:18:33.829 "num_shared_buffers": 511, 00:18:33.829 "buf_cache_size": 4294967295, 00:18:33.829 "dif_insert_or_strip": false, 00:18:33.829 "zcopy": false, 00:18:33.829 "c2h_success": false, 00:18:33.829 "sock_priority": 0, 00:18:33.829 "abort_timeout_sec": 1, 00:18:33.829 "ack_timeout": 0, 00:18:33.829 "data_wr_pool_size": 0 00:18:33.829 } 00:18:33.829 }, 00:18:33.829 { 00:18:33.829 "method": "nvmf_create_subsystem", 00:18:33.829 "params": { 00:18:33.829 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:33.829 
"allow_any_host": false, 00:18:33.829 "serial_number": "SPDK00000000000001", 00:18:33.829 "model_number": "SPDK bdev Controller", 00:18:33.829 "max_namespaces": 10, 00:18:33.829 "min_cntlid": 1, 00:18:33.829 "max_cntlid": 65519, 00:18:33.829 "ana_reporting": false 00:18:33.829 } 00:18:33.829 }, 00:18:33.829 { 00:18:33.829 "method": "nvmf_subsystem_add_host", 00:18:33.829 "params": { 00:18:33.829 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:33.829 "host": "nqn.2016-06.io.spdk:host1", 00:18:33.829 "psk": "key0" 00:18:33.829 } 00:18:33.829 }, 00:18:33.829 { 00:18:33.829 "method": "nvmf_subsystem_add_ns", 00:18:33.829 "params": { 00:18:33.829 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:33.829 "namespace": { 00:18:33.829 "nsid": 1, 00:18:33.829 "bdev_name": "malloc0", 00:18:33.829 "nguid": "B5A14B5B9F644896917E67C01965A93E", 00:18:33.829 "uuid": "b5a14b5b-9f64-4896-917e-67c01965a93e", 00:18:33.829 "no_auto_visible": false 00:18:33.829 } 00:18:33.829 } 00:18:33.829 }, 00:18:33.829 { 00:18:33.829 "method": "nvmf_subsystem_add_listener", 00:18:33.829 "params": { 00:18:33.829 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:33.829 "listen_address": { 00:18:33.829 "trtype": "TCP", 00:18:33.829 "adrfam": "IPv4", 00:18:33.829 "traddr": "10.0.0.2", 00:18:33.829 "trsvcid": "4420" 00:18:33.829 }, 00:18:33.829 "secure_channel": true 00:18:33.829 } 00:18:33.829 } 00:18:33.829 ] 00:18:33.829 } 00:18:33.829 ] 00:18:33.829 }' 00:18:33.829 07:14:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:18:34.088 07:14:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:18:34.088 "subsystems": [ 00:18:34.088 { 00:18:34.088 "subsystem": "keyring", 00:18:34.088 "config": [ 00:18:34.088 { 00:18:34.088 "method": "keyring_file_add_key", 00:18:34.088 "params": { 00:18:34.088 "name": "key0", 00:18:34.088 "path": "/tmp/tmp.lWWn6jfH0s" 00:18:34.088 } 
00:18:34.088 } 00:18:34.088 ] 00:18:34.088 }, 00:18:34.088 { 00:18:34.088 "subsystem": "iobuf", 00:18:34.088 "config": [ 00:18:34.088 { 00:18:34.088 "method": "iobuf_set_options", 00:18:34.088 "params": { 00:18:34.088 "small_pool_count": 8192, 00:18:34.088 "large_pool_count": 1024, 00:18:34.088 "small_bufsize": 8192, 00:18:34.088 "large_bufsize": 135168, 00:18:34.088 "enable_numa": false 00:18:34.088 } 00:18:34.088 } 00:18:34.088 ] 00:18:34.088 }, 00:18:34.088 { 00:18:34.088 "subsystem": "sock", 00:18:34.088 "config": [ 00:18:34.088 { 00:18:34.088 "method": "sock_set_default_impl", 00:18:34.088 "params": { 00:18:34.088 "impl_name": "posix" 00:18:34.088 } 00:18:34.088 }, 00:18:34.088 { 00:18:34.088 "method": "sock_impl_set_options", 00:18:34.088 "params": { 00:18:34.088 "impl_name": "ssl", 00:18:34.088 "recv_buf_size": 4096, 00:18:34.088 "send_buf_size": 4096, 00:18:34.088 "enable_recv_pipe": true, 00:18:34.088 "enable_quickack": false, 00:18:34.088 "enable_placement_id": 0, 00:18:34.088 "enable_zerocopy_send_server": true, 00:18:34.088 "enable_zerocopy_send_client": false, 00:18:34.088 "zerocopy_threshold": 0, 00:18:34.088 "tls_version": 0, 00:18:34.088 "enable_ktls": false 00:18:34.088 } 00:18:34.088 }, 00:18:34.088 { 00:18:34.088 "method": "sock_impl_set_options", 00:18:34.088 "params": { 00:18:34.088 "impl_name": "posix", 00:18:34.088 "recv_buf_size": 2097152, 00:18:34.088 "send_buf_size": 2097152, 00:18:34.088 "enable_recv_pipe": true, 00:18:34.088 "enable_quickack": false, 00:18:34.088 "enable_placement_id": 0, 00:18:34.088 "enable_zerocopy_send_server": true, 00:18:34.088 "enable_zerocopy_send_client": false, 00:18:34.088 "zerocopy_threshold": 0, 00:18:34.088 "tls_version": 0, 00:18:34.088 "enable_ktls": false 00:18:34.088 } 00:18:34.088 } 00:18:34.088 ] 00:18:34.088 }, 00:18:34.088 { 00:18:34.088 "subsystem": "vmd", 00:18:34.088 "config": [] 00:18:34.088 }, 00:18:34.088 { 00:18:34.088 "subsystem": "accel", 00:18:34.088 "config": [ 00:18:34.088 { 00:18:34.088 
"method": "accel_set_options", 00:18:34.088 "params": { 00:18:34.088 "small_cache_size": 128, 00:18:34.088 "large_cache_size": 16, 00:18:34.088 "task_count": 2048, 00:18:34.088 "sequence_count": 2048, 00:18:34.088 "buf_count": 2048 00:18:34.088 } 00:18:34.088 } 00:18:34.088 ] 00:18:34.088 }, 00:18:34.088 { 00:18:34.088 "subsystem": "bdev", 00:18:34.088 "config": [ 00:18:34.088 { 00:18:34.088 "method": "bdev_set_options", 00:18:34.088 "params": { 00:18:34.088 "bdev_io_pool_size": 65535, 00:18:34.088 "bdev_io_cache_size": 256, 00:18:34.088 "bdev_auto_examine": true, 00:18:34.088 "iobuf_small_cache_size": 128, 00:18:34.088 "iobuf_large_cache_size": 16 00:18:34.088 } 00:18:34.088 }, 00:18:34.088 { 00:18:34.088 "method": "bdev_raid_set_options", 00:18:34.088 "params": { 00:18:34.088 "process_window_size_kb": 1024, 00:18:34.088 "process_max_bandwidth_mb_sec": 0 00:18:34.088 } 00:18:34.088 }, 00:18:34.088 { 00:18:34.088 "method": "bdev_iscsi_set_options", 00:18:34.088 "params": { 00:18:34.088 "timeout_sec": 30 00:18:34.088 } 00:18:34.088 }, 00:18:34.088 { 00:18:34.088 "method": "bdev_nvme_set_options", 00:18:34.088 "params": { 00:18:34.088 "action_on_timeout": "none", 00:18:34.088 "timeout_us": 0, 00:18:34.088 "timeout_admin_us": 0, 00:18:34.088 "keep_alive_timeout_ms": 10000, 00:18:34.088 "arbitration_burst": 0, 00:18:34.088 "low_priority_weight": 0, 00:18:34.088 "medium_priority_weight": 0, 00:18:34.088 "high_priority_weight": 0, 00:18:34.088 "nvme_adminq_poll_period_us": 10000, 00:18:34.088 "nvme_ioq_poll_period_us": 0, 00:18:34.088 "io_queue_requests": 512, 00:18:34.088 "delay_cmd_submit": true, 00:18:34.088 "transport_retry_count": 4, 00:18:34.088 "bdev_retry_count": 3, 00:18:34.088 "transport_ack_timeout": 0, 00:18:34.088 "ctrlr_loss_timeout_sec": 0, 00:18:34.088 "reconnect_delay_sec": 0, 00:18:34.088 "fast_io_fail_timeout_sec": 0, 00:18:34.088 "disable_auto_failback": false, 00:18:34.088 "generate_uuids": false, 00:18:34.088 "transport_tos": 0, 00:18:34.088 
"nvme_error_stat": false, 00:18:34.088 "rdma_srq_size": 0, 00:18:34.088 "io_path_stat": false, 00:18:34.088 "allow_accel_sequence": false, 00:18:34.088 "rdma_max_cq_size": 0, 00:18:34.088 "rdma_cm_event_timeout_ms": 0, 00:18:34.088 "dhchap_digests": [ 00:18:34.088 "sha256", 00:18:34.088 "sha384", 00:18:34.088 "sha512" 00:18:34.088 ], 00:18:34.088 "dhchap_dhgroups": [ 00:18:34.088 "null", 00:18:34.088 "ffdhe2048", 00:18:34.088 "ffdhe3072", 00:18:34.088 "ffdhe4096", 00:18:34.088 "ffdhe6144", 00:18:34.088 "ffdhe8192" 00:18:34.088 ] 00:18:34.088 } 00:18:34.088 }, 00:18:34.088 { 00:18:34.088 "method": "bdev_nvme_attach_controller", 00:18:34.088 "params": { 00:18:34.088 "name": "TLSTEST", 00:18:34.088 "trtype": "TCP", 00:18:34.088 "adrfam": "IPv4", 00:18:34.088 "traddr": "10.0.0.2", 00:18:34.088 "trsvcid": "4420", 00:18:34.088 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:34.088 "prchk_reftag": false, 00:18:34.088 "prchk_guard": false, 00:18:34.088 "ctrlr_loss_timeout_sec": 0, 00:18:34.088 "reconnect_delay_sec": 0, 00:18:34.088 "fast_io_fail_timeout_sec": 0, 00:18:34.088 "psk": "key0", 00:18:34.088 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:34.088 "hdgst": false, 00:18:34.088 "ddgst": false, 00:18:34.088 "multipath": "multipath" 00:18:34.088 } 00:18:34.088 }, 00:18:34.088 { 00:18:34.088 "method": "bdev_nvme_set_hotplug", 00:18:34.088 "params": { 00:18:34.088 "period_us": 100000, 00:18:34.088 "enable": false 00:18:34.088 } 00:18:34.088 }, 00:18:34.088 { 00:18:34.088 "method": "bdev_wait_for_examine" 00:18:34.088 } 00:18:34.088 ] 00:18:34.089 }, 00:18:34.089 { 00:18:34.089 "subsystem": "nbd", 00:18:34.089 "config": [] 00:18:34.089 } 00:18:34.089 ] 00:18:34.089 }' 00:18:34.089 07:14:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 1211355 00:18:34.089 07:14:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 1211355 ']' 00:18:34.089 07:14:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- 
# kill -0 1211355 00:18:34.089 07:14:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:18:34.089 07:14:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:34.089 07:14:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1211355 00:18:34.089 07:14:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:18:34.089 07:14:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:18:34.089 07:14:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1211355' 00:18:34.089 killing process with pid 1211355 00:18:34.089 07:14:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 1211355 00:18:34.089 Received shutdown signal, test time was about 10.000000 seconds 00:18:34.089 00:18:34.089 Latency(us) 00:18:34.089 [2024-11-20T06:14:38.645Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:34.089 [2024-11-20T06:14:38.645Z] =================================================================================================================== 00:18:34.089 [2024-11-20T06:14:38.645Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:34.089 07:14:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 1211355 00:18:34.346 07:14:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 1211098 00:18:34.346 07:14:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 1211098 ']' 00:18:34.346 07:14:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 1211098 00:18:34.346 07:14:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:18:34.346 07:14:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:34.346 07:14:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1211098 00:18:34.346 07:14:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:18:34.346 07:14:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:18:34.346 07:14:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1211098' 00:18:34.346 killing process with pid 1211098 00:18:34.346 07:14:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 1211098 00:18:34.346 07:14:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 1211098 00:18:34.605 07:14:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:18:34.605 07:14:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:34.605 07:14:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:34.605 07:14:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:18:34.605 "subsystems": [ 00:18:34.605 { 00:18:34.605 "subsystem": "keyring", 00:18:34.605 "config": [ 00:18:34.605 { 00:18:34.605 "method": "keyring_file_add_key", 00:18:34.605 "params": { 00:18:34.605 "name": "key0", 00:18:34.605 "path": "/tmp/tmp.lWWn6jfH0s" 00:18:34.605 } 00:18:34.605 } 00:18:34.605 ] 00:18:34.605 }, 00:18:34.605 { 00:18:34.605 "subsystem": "iobuf", 00:18:34.605 "config": [ 00:18:34.605 { 00:18:34.605 "method": "iobuf_set_options", 00:18:34.605 "params": { 00:18:34.605 "small_pool_count": 8192, 00:18:34.605 "large_pool_count": 1024, 00:18:34.605 "small_bufsize": 8192, 00:18:34.605 "large_bufsize": 135168, 00:18:34.605 "enable_numa": false 00:18:34.605 } 00:18:34.605 } 00:18:34.605 ] 00:18:34.605 }, 
00:18:34.605 { 00:18:34.605 "subsystem": "sock", 00:18:34.605 "config": [ 00:18:34.605 { 00:18:34.605 "method": "sock_set_default_impl", 00:18:34.605 "params": { 00:18:34.605 "impl_name": "posix" 00:18:34.605 } 00:18:34.605 }, 00:18:34.605 { 00:18:34.605 "method": "sock_impl_set_options", 00:18:34.605 "params": { 00:18:34.605 "impl_name": "ssl", 00:18:34.605 "recv_buf_size": 4096, 00:18:34.605 "send_buf_size": 4096, 00:18:34.605 "enable_recv_pipe": true, 00:18:34.605 "enable_quickack": false, 00:18:34.605 "enable_placement_id": 0, 00:18:34.605 "enable_zerocopy_send_server": true, 00:18:34.605 "enable_zerocopy_send_client": false, 00:18:34.605 "zerocopy_threshold": 0, 00:18:34.605 "tls_version": 0, 00:18:34.605 "enable_ktls": false 00:18:34.605 } 00:18:34.605 }, 00:18:34.605 { 00:18:34.605 "method": "sock_impl_set_options", 00:18:34.605 "params": { 00:18:34.605 "impl_name": "posix", 00:18:34.605 "recv_buf_size": 2097152, 00:18:34.605 "send_buf_size": 2097152, 00:18:34.605 "enable_recv_pipe": true, 00:18:34.605 "enable_quickack": false, 00:18:34.605 "enable_placement_id": 0, 00:18:34.605 "enable_zerocopy_send_server": true, 00:18:34.605 "enable_zerocopy_send_client": false, 00:18:34.605 "zerocopy_threshold": 0, 00:18:34.605 "tls_version": 0, 00:18:34.605 "enable_ktls": false 00:18:34.605 } 00:18:34.605 } 00:18:34.605 ] 00:18:34.605 }, 00:18:34.605 { 00:18:34.605 "subsystem": "vmd", 00:18:34.605 "config": [] 00:18:34.605 }, 00:18:34.605 { 00:18:34.605 "subsystem": "accel", 00:18:34.605 "config": [ 00:18:34.605 { 00:18:34.605 "method": "accel_set_options", 00:18:34.605 "params": { 00:18:34.605 "small_cache_size": 128, 00:18:34.605 "large_cache_size": 16, 00:18:34.605 "task_count": 2048, 00:18:34.605 "sequence_count": 2048, 00:18:34.605 "buf_count": 2048 00:18:34.605 } 00:18:34.605 } 00:18:34.605 ] 00:18:34.605 }, 00:18:34.605 { 00:18:34.605 "subsystem": "bdev", 00:18:34.605 "config": [ 00:18:34.605 { 00:18:34.605 "method": "bdev_set_options", 00:18:34.605 "params": { 
00:18:34.605 "bdev_io_pool_size": 65535, 00:18:34.605 "bdev_io_cache_size": 256, 00:18:34.605 "bdev_auto_examine": true, 00:18:34.605 "iobuf_small_cache_size": 128, 00:18:34.605 "iobuf_large_cache_size": 16 00:18:34.605 } 00:18:34.605 }, 00:18:34.605 { 00:18:34.605 "method": "bdev_raid_set_options", 00:18:34.605 "params": { 00:18:34.605 "process_window_size_kb": 1024, 00:18:34.605 "process_max_bandwidth_mb_sec": 0 00:18:34.605 } 00:18:34.605 }, 00:18:34.605 { 00:18:34.605 "method": "bdev_iscsi_set_options", 00:18:34.605 "params": { 00:18:34.605 "timeout_sec": 30 00:18:34.605 } 00:18:34.605 }, 00:18:34.605 { 00:18:34.605 "method": "bdev_nvme_set_options", 00:18:34.605 "params": { 00:18:34.605 "action_on_timeout": "none", 00:18:34.605 "timeout_us": 0, 00:18:34.605 "timeout_admin_us": 0, 00:18:34.605 "keep_alive_timeout_ms": 10000, 00:18:34.605 "arbitration_burst": 0, 00:18:34.605 "low_priority_weight": 0, 00:18:34.605 "medium_priority_weight": 0, 00:18:34.605 "high_priority_weight": 0, 00:18:34.605 "nvme_adminq_poll_period_us": 10000, 00:18:34.605 "nvme_ioq_poll_period_us": 0, 00:18:34.605 "io_queue_requests": 0, 00:18:34.605 "delay_cmd_submit": true, 00:18:34.605 "transport_retry_count": 4, 00:18:34.605 "bdev_retry_count": 3, 00:18:34.605 "transport_ack_timeout": 0, 00:18:34.605 "ctrlr_loss_timeout_sec": 0, 00:18:34.605 "reconnect_delay_sec": 0, 00:18:34.605 "fast_io_fail_timeout_sec": 0, 00:18:34.605 "disable_auto_failback": false, 00:18:34.605 "generate_uuids": false, 00:18:34.605 "transport_tos": 0, 00:18:34.605 "nvme_error_stat": false, 00:18:34.605 "rdma_srq_size": 0, 00:18:34.605 "io_path_stat": false, 00:18:34.605 "allow_accel_sequence": false, 00:18:34.605 "rdma_max_cq_size": 0, 00:18:34.605 "rdma_cm_event_timeout_ms": 0, 00:18:34.605 07:14:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:34.605 "dhchap_digests": [ 00:18:34.605 "sha256", 00:18:34.605 "sha384", 00:18:34.605 "sha512" 00:18:34.605 ], 00:18:34.605 
"dhchap_dhgroups": [ 00:18:34.605 "null", 00:18:34.605 "ffdhe2048", 00:18:34.605 "ffdhe3072", 00:18:34.605 "ffdhe4096", 00:18:34.605 "ffdhe6144", 00:18:34.605 "ffdhe8192" 00:18:34.605 ] 00:18:34.605 } 00:18:34.605 }, 00:18:34.605 { 00:18:34.605 "method": "bdev_nvme_set_hotplug", 00:18:34.606 "params": { 00:18:34.606 "period_us": 100000, 00:18:34.606 "enable": false 00:18:34.606 } 00:18:34.606 }, 00:18:34.606 { 00:18:34.606 "method": "bdev_malloc_create", 00:18:34.606 "params": { 00:18:34.606 "name": "malloc0", 00:18:34.606 "num_blocks": 8192, 00:18:34.606 "block_size": 4096, 00:18:34.606 "physical_block_size": 4096, 00:18:34.606 "uuid": "b5a14b5b-9f64-4896-917e-67c01965a93e", 00:18:34.606 "optimal_io_boundary": 0, 00:18:34.606 "md_size": 0, 00:18:34.606 "dif_type": 0, 00:18:34.606 "dif_is_head_of_md": false, 00:18:34.606 "dif_pi_format": 0 00:18:34.606 } 00:18:34.606 }, 00:18:34.606 { 00:18:34.606 "method": "bdev_wait_for_examine" 00:18:34.606 } 00:18:34.606 ] 00:18:34.606 }, 00:18:34.606 { 00:18:34.606 "subsystem": "nbd", 00:18:34.606 "config": [] 00:18:34.606 }, 00:18:34.606 { 00:18:34.606 "subsystem": "scheduler", 00:18:34.606 "config": [ 00:18:34.606 { 00:18:34.606 "method": "framework_set_scheduler", 00:18:34.606 "params": { 00:18:34.606 "name": "static" 00:18:34.606 } 00:18:34.606 } 00:18:34.606 ] 00:18:34.606 }, 00:18:34.606 { 00:18:34.606 "subsystem": "nvmf", 00:18:34.606 "config": [ 00:18:34.606 { 00:18:34.606 "method": "nvmf_set_config", 00:18:34.606 "params": { 00:18:34.606 "discovery_filter": "match_any", 00:18:34.606 "admin_cmd_passthru": { 00:18:34.606 "identify_ctrlr": false 00:18:34.606 }, 00:18:34.606 "dhchap_digests": [ 00:18:34.606 "sha256", 00:18:34.606 "sha384", 00:18:34.606 "sha512" 00:18:34.606 ], 00:18:34.606 "dhchap_dhgroups": [ 00:18:34.606 "null", 00:18:34.606 "ffdhe2048", 00:18:34.606 "ffdhe3072", 00:18:34.606 "ffdhe4096", 00:18:34.606 "ffdhe6144", 00:18:34.606 "ffdhe8192" 00:18:34.606 ] 00:18:34.606 } 00:18:34.606 }, 00:18:34.606 { 
00:18:34.606 "method": "nvmf_set_max_subsystems", 00:18:34.606 "params": { 00:18:34.606 "max_subsystems": 1024 00:18:34.606 } 00:18:34.606 }, 00:18:34.606 { 00:18:34.606 "method": "nvmf_set_crdt", 00:18:34.606 "params": { 00:18:34.606 "crdt1": 0, 00:18:34.606 "crdt2": 0, 00:18:34.606 "crdt3": 0 00:18:34.606 } 00:18:34.606 }, 00:18:34.606 { 00:18:34.606 "method": "nvmf_create_transport", 00:18:34.606 "params": { 00:18:34.606 "trtype": "TCP", 00:18:34.606 "max_queue_depth": 128, 00:18:34.606 "max_io_qpairs_per_ctrlr": 127, 00:18:34.606 "in_capsule_data_size": 4096, 00:18:34.606 "max_io_size": 131072, 00:18:34.606 "io_unit_size": 131072, 00:18:34.606 "max_aq_depth": 128, 00:18:34.606 "num_shared_buffers": 511, 00:18:34.606 "buf_cache_size": 4294967295, 00:18:34.606 "dif_insert_or_strip": false, 00:18:34.606 "zcopy": false, 00:18:34.606 "c2h_success": false, 00:18:34.606 "sock_priority": 0, 00:18:34.606 "abort_timeout_sec": 1, 00:18:34.606 "ack_timeout": 0, 00:18:34.606 "data_wr_pool_size": 0 00:18:34.606 } 00:18:34.606 }, 00:18:34.606 { 00:18:34.606 "method": "nvmf_create_subsystem", 00:18:34.606 "params": { 00:18:34.606 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:34.606 "allow_any_host": false, 00:18:34.606 "serial_number": "SPDK00000000000001", 00:18:34.606 "model_number": "SPDK bdev Controller", 00:18:34.606 "max_namespaces": 10, 00:18:34.606 "min_cntlid": 1, 00:18:34.606 "max_cntlid": 65519, 00:18:34.606 "ana_reporting": false 00:18:34.606 } 00:18:34.606 }, 00:18:34.606 { 00:18:34.606 "method": "nvmf_subsystem_add_host", 00:18:34.606 "params": { 00:18:34.606 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:34.606 "host": "nqn.2016-06.io.spdk:host1", 00:18:34.606 "psk": "key0" 00:18:34.606 } 00:18:34.606 }, 00:18:34.606 { 00:18:34.606 "method": "nvmf_subsystem_add_ns", 00:18:34.606 "params": { 00:18:34.606 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:34.606 "namespace": { 00:18:34.606 "nsid": 1, 00:18:34.606 "bdev_name": "malloc0", 00:18:34.606 "nguid": 
"B5A14B5B9F644896917E67C01965A93E", 00:18:34.606 "uuid": "b5a14b5b-9f64-4896-917e-67c01965a93e", 00:18:34.606 "no_auto_visible": false 00:18:34.606 } 00:18:34.606 } 00:18:34.606 }, 00:18:34.606 { 00:18:34.606 "method": "nvmf_subsystem_add_listener", 00:18:34.606 "params": { 00:18:34.606 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:34.606 "listen_address": { 00:18:34.606 "trtype": "TCP", 00:18:34.606 "adrfam": "IPv4", 00:18:34.606 "traddr": "10.0.0.2", 00:18:34.606 "trsvcid": "4420" 00:18:34.606 }, 00:18:34.606 "secure_channel": true 00:18:34.606 } 00:18:34.606 } 00:18:34.606 ] 00:18:34.606 } 00:18:34.606 ] 00:18:34.606 }' 00:18:34.606 07:14:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1211606 00:18:34.606 07:14:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1211606 00:18:34.606 07:14:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:18:34.606 07:14:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 1211606 ']' 00:18:34.606 07:14:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:34.606 07:14:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:34.606 07:14:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:34.606 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:18:34.606 07:14:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:34.606 07:14:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:34.606 [2024-11-20 07:14:39.064488] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 00:18:34.606 [2024-11-20 07:14:39.064533] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:34.606 [2024-11-20 07:14:39.142987] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:34.871 [2024-11-20 07:14:39.184461] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:34.871 [2024-11-20 07:14:39.184493] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:34.871 [2024-11-20 07:14:39.184501] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:34.871 [2024-11-20 07:14:39.184507] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:34.871 [2024-11-20 07:14:39.184512] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:34.871 [2024-11-20 07:14:39.185078] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:34.871 [2024-11-20 07:14:39.399794] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:35.127 [2024-11-20 07:14:39.431820] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:35.127 [2024-11-20 07:14:39.432038] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:35.385 07:14:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:35.385 07:14:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:18:35.385 07:14:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:35.385 07:14:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:35.385 07:14:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:35.385 07:14:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:35.385 07:14:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=1211851 00:18:35.385 07:14:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 1211851 /var/tmp/bdevperf.sock 00:18:35.385 07:14:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 1211851 ']' 00:18:35.645 07:14:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:35.645 07:14:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:35.645 07:14:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 
-c /dev/fd/63 00:18:35.645 07:14:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:35.645 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:35.645 07:14:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:35.645 07:14:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:18:35.645 "subsystems": [ 00:18:35.645 { 00:18:35.645 "subsystem": "keyring", 00:18:35.645 "config": [ 00:18:35.645 { 00:18:35.645 "method": "keyring_file_add_key", 00:18:35.645 "params": { 00:18:35.645 "name": "key0", 00:18:35.645 "path": "/tmp/tmp.lWWn6jfH0s" 00:18:35.645 } 00:18:35.645 } 00:18:35.645 ] 00:18:35.645 }, 00:18:35.645 { 00:18:35.645 "subsystem": "iobuf", 00:18:35.645 "config": [ 00:18:35.645 { 00:18:35.645 "method": "iobuf_set_options", 00:18:35.645 "params": { 00:18:35.645 "small_pool_count": 8192, 00:18:35.645 "large_pool_count": 1024, 00:18:35.645 "small_bufsize": 8192, 00:18:35.645 "large_bufsize": 135168, 00:18:35.645 "enable_numa": false 00:18:35.645 } 00:18:35.645 } 00:18:35.645 ] 00:18:35.645 }, 00:18:35.645 { 00:18:35.645 "subsystem": "sock", 00:18:35.645 "config": [ 00:18:35.645 { 00:18:35.645 "method": "sock_set_default_impl", 00:18:35.645 "params": { 00:18:35.645 "impl_name": "posix" 00:18:35.645 } 00:18:35.645 }, 00:18:35.645 { 00:18:35.645 "method": "sock_impl_set_options", 00:18:35.645 "params": { 00:18:35.645 "impl_name": "ssl", 00:18:35.645 "recv_buf_size": 4096, 00:18:35.645 "send_buf_size": 4096, 00:18:35.645 "enable_recv_pipe": true, 00:18:35.645 "enable_quickack": false, 00:18:35.645 "enable_placement_id": 0, 00:18:35.645 "enable_zerocopy_send_server": true, 00:18:35.645 "enable_zerocopy_send_client": false, 00:18:35.645 "zerocopy_threshold": 0, 00:18:35.645 "tls_version": 0, 00:18:35.645 "enable_ktls": false 
00:18:35.645 } 00:18:35.645 }, 00:18:35.645 { 00:18:35.645 "method": "sock_impl_set_options", 00:18:35.645 "params": { 00:18:35.645 "impl_name": "posix", 00:18:35.645 "recv_buf_size": 2097152, 00:18:35.645 "send_buf_size": 2097152, 00:18:35.645 "enable_recv_pipe": true, 00:18:35.645 "enable_quickack": false, 00:18:35.645 "enable_placement_id": 0, 00:18:35.645 "enable_zerocopy_send_server": true, 00:18:35.645 "enable_zerocopy_send_client": false, 00:18:35.645 "zerocopy_threshold": 0, 00:18:35.645 "tls_version": 0, 00:18:35.645 "enable_ktls": false 00:18:35.645 } 00:18:35.645 } 00:18:35.645 ] 00:18:35.645 }, 00:18:35.645 { 00:18:35.645 "subsystem": "vmd", 00:18:35.645 "config": [] 00:18:35.645 }, 00:18:35.645 { 00:18:35.645 "subsystem": "accel", 00:18:35.645 "config": [ 00:18:35.645 { 00:18:35.645 "method": "accel_set_options", 00:18:35.645 "params": { 00:18:35.645 "small_cache_size": 128, 00:18:35.645 "large_cache_size": 16, 00:18:35.645 "task_count": 2048, 00:18:35.645 "sequence_count": 2048, 00:18:35.645 "buf_count": 2048 00:18:35.645 } 00:18:35.645 } 00:18:35.645 ] 00:18:35.645 }, 00:18:35.645 { 00:18:35.645 "subsystem": "bdev", 00:18:35.645 "config": [ 00:18:35.646 { 00:18:35.646 "method": "bdev_set_options", 00:18:35.646 "params": { 00:18:35.646 "bdev_io_pool_size": 65535, 00:18:35.646 "bdev_io_cache_size": 256, 00:18:35.646 "bdev_auto_examine": true, 00:18:35.646 "iobuf_small_cache_size": 128, 00:18:35.646 "iobuf_large_cache_size": 16 00:18:35.646 } 00:18:35.646 }, 00:18:35.646 { 00:18:35.646 "method": "bdev_raid_set_options", 00:18:35.646 "params": { 00:18:35.646 "process_window_size_kb": 1024, 00:18:35.646 "process_max_bandwidth_mb_sec": 0 00:18:35.646 } 00:18:35.646 }, 00:18:35.646 { 00:18:35.646 "method": "bdev_iscsi_set_options", 00:18:35.646 "params": { 00:18:35.646 "timeout_sec": 30 00:18:35.646 } 00:18:35.646 }, 00:18:35.646 { 00:18:35.646 "method": "bdev_nvme_set_options", 00:18:35.646 "params": { 00:18:35.646 "action_on_timeout": "none", 00:18:35.646 
"timeout_us": 0, 00:18:35.646 "timeout_admin_us": 0, 00:18:35.646 "keep_alive_timeout_ms": 10000, 00:18:35.646 "arbitration_burst": 0, 00:18:35.646 "low_priority_weight": 0, 00:18:35.646 "medium_priority_weight": 0, 00:18:35.646 "high_priority_weight": 0, 00:18:35.646 "nvme_adminq_poll_period_us": 10000, 00:18:35.646 "nvme_ioq_poll_period_us": 0, 00:18:35.646 "io_queue_requests": 512, 00:18:35.646 "delay_cmd_submit": true, 00:18:35.646 "transport_retry_count": 4, 00:18:35.646 "bdev_retry_count": 3, 00:18:35.646 "transport_ack_timeout": 0, 00:18:35.646 "ctrlr_loss_timeout_sec": 0, 00:18:35.646 "reconnect_delay_sec": 0, 00:18:35.646 "fast_io_fail_timeout_sec": 0, 00:18:35.646 "disable_auto_failback": false, 00:18:35.646 "generate_uuids": false, 00:18:35.646 "transport_tos": 0, 00:18:35.646 "nvme_error_stat": false, 00:18:35.646 "rdma_srq_size": 0, 00:18:35.646 "io_path_stat": false, 00:18:35.646 "allow_accel_sequence": false, 00:18:35.646 "rdma_max_cq_size": 0, 00:18:35.646 "rdma_cm_event_timeout_ms": 0, 00:18:35.646 "dhchap_digests": [ 00:18:35.646 "sha256", 00:18:35.646 "sha384", 00:18:35.646 "sha512" 00:18:35.646 ], 00:18:35.646 "dhchap_dhgroups": [ 00:18:35.646 "null", 00:18:35.646 "ffdhe2048", 00:18:35.646 "ffdhe3072", 00:18:35.646 "ffdhe4096", 00:18:35.646 "ffdhe6144", 00:18:35.646 "ffdhe8192" 00:18:35.646 ] 00:18:35.646 } 00:18:35.646 }, 00:18:35.646 { 00:18:35.646 "method": "bdev_nvme_attach_controller", 00:18:35.646 "params": { 00:18:35.646 "name": "TLSTEST", 00:18:35.646 "trtype": "TCP", 00:18:35.646 "adrfam": "IPv4", 00:18:35.646 "traddr": "10.0.0.2", 00:18:35.646 "trsvcid": "4420", 00:18:35.646 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:35.646 "prchk_reftag": false, 00:18:35.646 "prchk_guard": false, 00:18:35.646 "ctrlr_loss_timeout_sec": 0, 00:18:35.646 "reconnect_delay_sec": 0, 00:18:35.646 "fast_io_fail_timeout_sec": 0, 00:18:35.646 "psk": "key0", 00:18:35.646 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:35.646 "hdgst": false, 00:18:35.646 "ddgst": 
false, 00:18:35.646 "multipath": "multipath" 00:18:35.646 } 00:18:35.646 }, 00:18:35.646 { 00:18:35.646 "method": "bdev_nvme_set_hotplug", 00:18:35.646 "params": { 00:18:35.646 "period_us": 100000, 00:18:35.646 "enable": false 00:18:35.646 } 00:18:35.646 }, 00:18:35.646 { 00:18:35.646 "method": "bdev_wait_for_examine" 00:18:35.646 } 00:18:35.646 ] 00:18:35.646 }, 00:18:35.646 { 00:18:35.646 "subsystem": "nbd", 00:18:35.646 "config": [] 00:18:35.646 } 00:18:35.646 ] 00:18:35.646 }' 00:18:35.646 07:14:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:35.646 [2024-11-20 07:14:39.981360] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 00:18:35.646 [2024-11-20 07:14:39.981407] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1211851 ] 00:18:35.646 [2024-11-20 07:14:40.057529] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:35.646 [2024-11-20 07:14:40.101715] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:35.905 [2024-11-20 07:14:40.254988] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:36.472 07:14:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:36.472 07:14:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:18:36.472 07:14:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:36.473 Running I/O for 10 seconds... 
00:18:38.786 5381.00 IOPS, 21.02 MiB/s [2024-11-20T06:14:44.276Z] 5467.50 IOPS, 21.36 MiB/s [2024-11-20T06:14:45.246Z] 5483.67 IOPS, 21.42 MiB/s [2024-11-20T06:14:46.180Z] 5473.00 IOPS, 21.38 MiB/s [2024-11-20T06:14:47.116Z] 5486.60 IOPS, 21.43 MiB/s [2024-11-20T06:14:48.052Z] 5478.00 IOPS, 21.40 MiB/s [2024-11-20T06:14:49.039Z] 5483.71 IOPS, 21.42 MiB/s [2024-11-20T06:14:50.142Z] 5475.62 IOPS, 21.39 MiB/s [2024-11-20T06:14:51.096Z] 5474.33 IOPS, 21.38 MiB/s [2024-11-20T06:14:51.096Z] 5474.80 IOPS, 21.39 MiB/s 00:18:46.540 Latency(us) 00:18:46.540 [2024-11-20T06:14:51.096Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:46.540 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:46.540 Verification LBA range: start 0x0 length 0x2000 00:18:46.540 TLSTESTn1 : 10.03 5473.41 21.38 0.00 0.00 23343.31 5014.93 25074.64 00:18:46.540 [2024-11-20T06:14:51.096Z] =================================================================================================================== 00:18:46.540 [2024-11-20T06:14:51.096Z] Total : 5473.41 21.38 0.00 0.00 23343.31 5014.93 25074.64 00:18:46.540 { 00:18:46.540 "results": [ 00:18:46.540 { 00:18:46.540 "job": "TLSTESTn1", 00:18:46.540 "core_mask": "0x4", 00:18:46.540 "workload": "verify", 00:18:46.540 "status": "finished", 00:18:46.540 "verify_range": { 00:18:46.540 "start": 0, 00:18:46.540 "length": 8192 00:18:46.540 }, 00:18:46.540 "queue_depth": 128, 00:18:46.540 "io_size": 4096, 00:18:46.540 "runtime": 10.02538, 00:18:46.540 "iops": 5473.408489254272, 00:18:46.540 "mibps": 21.3805019111495, 00:18:46.540 "io_failed": 0, 00:18:46.540 "io_timeout": 0, 00:18:46.540 "avg_latency_us": 23343.307355942063, 00:18:46.540 "min_latency_us": 5014.928695652174, 00:18:46.540 "max_latency_us": 25074.64347826087 00:18:46.540 } 00:18:46.540 ], 00:18:46.540 "core_count": 1 00:18:46.540 } 00:18:46.540 07:14:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 
1' SIGINT SIGTERM EXIT 00:18:46.540 07:14:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 1211851 00:18:46.540 07:14:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 1211851 ']' 00:18:46.540 07:14:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 1211851 00:18:46.540 07:14:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:18:46.540 07:14:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:46.540 07:14:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1211851 00:18:46.540 07:14:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:18:46.540 07:14:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:18:46.540 07:14:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1211851' 00:18:46.540 killing process with pid 1211851 00:18:46.540 07:14:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 1211851 00:18:46.540 Received shutdown signal, test time was about 10.000000 seconds 00:18:46.540 00:18:46.540 Latency(us) 00:18:46.540 [2024-11-20T06:14:51.096Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:46.540 [2024-11-20T06:14:51.096Z] =================================================================================================================== 00:18:46.540 [2024-11-20T06:14:51.096Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:46.540 07:14:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 1211851 00:18:46.799 07:14:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 1211606 00:18:46.799 07:14:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@952 -- # '[' -z 1211606 ']' 00:18:46.799 07:14:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 1211606 00:18:46.799 07:14:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:18:46.799 07:14:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:46.799 07:14:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1211606 00:18:46.799 07:14:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:18:46.799 07:14:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:18:46.799 07:14:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1211606' 00:18:46.799 killing process with pid 1211606 00:18:46.799 07:14:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 1211606 00:18:46.799 07:14:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 1211606 00:18:47.058 07:14:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:18:47.058 07:14:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:47.058 07:14:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:47.058 07:14:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:47.058 07:14:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1213701 00:18:47.058 07:14:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:18:47.058 07:14:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1213701 00:18:47.058 
07:14:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 1213701 ']' 00:18:47.058 07:14:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:47.058 07:14:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:47.058 07:14:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:47.058 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:47.058 07:14:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:47.058 07:14:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:47.058 [2024-11-20 07:14:51.474764] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 00:18:47.058 [2024-11-20 07:14:51.474811] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:47.058 [2024-11-20 07:14:51.551927] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:47.058 [2024-11-20 07:14:51.592867] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:47.058 [2024-11-20 07:14:51.592904] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:47.058 [2024-11-20 07:14:51.592911] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:47.058 [2024-11-20 07:14:51.592917] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:18:47.058 [2024-11-20 07:14:51.592922] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:47.058 [2024-11-20 07:14:51.593502] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:47.317 07:14:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:47.317 07:14:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:18:47.317 07:14:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:47.317 07:14:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:47.317 07:14:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:47.317 07:14:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:47.317 07:14:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.lWWn6jfH0s 00:18:47.317 07:14:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.lWWn6jfH0s 00:18:47.317 07:14:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:47.576 [2024-11-20 07:14:51.899196] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:47.576 07:14:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:47.834 07:14:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:47.834 [2024-11-20 07:14:52.308241] tcp.c:1031:nvmf_tcp_listen: 
*NOTICE*: TLS support is considered experimental 00:18:47.834 [2024-11-20 07:14:52.308434] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:47.834 07:14:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:48.092 malloc0 00:18:48.092 07:14:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:48.350 07:14:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.lWWn6jfH0s 00:18:48.609 07:14:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:18:48.609 07:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:18:48.609 07:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=1213962 00:18:48.609 07:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:48.609 07:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 1213962 /var/tmp/bdevperf.sock 00:18:48.609 07:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 1213962 ']' 00:18:48.609 07:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:48.609 07:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:48.609 
07:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:48.609 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:48.609 07:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:48.609 07:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:48.868 [2024-11-20 07:14:53.177710] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 00:18:48.868 [2024-11-20 07:14:53.177759] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1213962 ] 00:18:48.868 [2024-11-20 07:14:53.250437] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:48.868 [2024-11-20 07:14:53.291501] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:48.868 07:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:48.868 07:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:18:48.868 07:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.lWWn6jfH0s 00:18:49.127 07:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:18:49.385 [2024-11-20 07:14:53.757665] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is 
considered experimental 00:18:49.385 nvme0n1 00:18:49.385 07:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:49.643 Running I/O for 1 seconds... 00:18:50.579 5229.00 IOPS, 20.43 MiB/s 00:18:50.579 Latency(us) 00:18:50.579 [2024-11-20T06:14:55.136Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:50.580 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:50.580 Verification LBA range: start 0x0 length 0x2000 00:18:50.580 nvme0n1 : 1.01 5279.74 20.62 0.00 0.00 24050.14 7123.48 38295.82 00:18:50.580 [2024-11-20T06:14:55.136Z] =================================================================================================================== 00:18:50.580 [2024-11-20T06:14:55.136Z] Total : 5279.74 20.62 0.00 0.00 24050.14 7123.48 38295.82 00:18:50.580 { 00:18:50.580 "results": [ 00:18:50.580 { 00:18:50.580 "job": "nvme0n1", 00:18:50.580 "core_mask": "0x2", 00:18:50.580 "workload": "verify", 00:18:50.580 "status": "finished", 00:18:50.580 "verify_range": { 00:18:50.580 "start": 0, 00:18:50.580 "length": 8192 00:18:50.580 }, 00:18:50.580 "queue_depth": 128, 00:18:50.580 "io_size": 4096, 00:18:50.580 "runtime": 1.014633, 00:18:50.580 "iops": 5279.741542015685, 00:18:50.580 "mibps": 20.62399039849877, 00:18:50.580 "io_failed": 0, 00:18:50.580 "io_timeout": 0, 00:18:50.580 "avg_latency_us": 24050.144489047245, 00:18:50.580 "min_latency_us": 7123.478260869565, 00:18:50.580 "max_latency_us": 38295.819130434786 00:18:50.580 } 00:18:50.580 ], 00:18:50.580 "core_count": 1 00:18:50.580 } 00:18:50.580 07:14:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 1213962 00:18:50.580 07:14:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 1213962 ']' 00:18:50.580 07:14:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@956 -- # kill -0 1213962 00:18:50.580 07:14:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:18:50.580 07:14:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:50.580 07:14:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1213962 00:18:50.580 07:14:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:18:50.580 07:14:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:18:50.580 07:14:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1213962' 00:18:50.580 killing process with pid 1213962 00:18:50.580 07:14:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 1213962 00:18:50.580 Received shutdown signal, test time was about 1.000000 seconds 00:18:50.580 00:18:50.580 Latency(us) 00:18:50.580 [2024-11-20T06:14:55.136Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:50.580 [2024-11-20T06:14:55.136Z] =================================================================================================================== 00:18:50.580 [2024-11-20T06:14:55.136Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:50.580 07:14:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 1213962 00:18:50.838 07:14:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 1213701 00:18:50.838 07:14:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 1213701 ']' 00:18:50.838 07:14:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 1213701 00:18:50.838 07:14:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:18:50.838 07:14:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:50.838 07:14:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1213701 00:18:50.838 07:14:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:18:50.838 07:14:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:18:50.838 07:14:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1213701' 00:18:50.838 killing process with pid 1213701 00:18:50.838 07:14:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 1213701 00:18:50.838 07:14:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 1213701 00:18:51.097 07:14:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:18:51.097 07:14:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:51.097 07:14:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:51.097 07:14:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:51.097 07:14:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:18:51.097 07:14:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1214426 00:18:51.097 07:14:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1214426 00:18:51.097 07:14:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 1214426 ']' 00:18:51.097 07:14:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:51.097 07:14:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # 
local max_retries=100 00:18:51.097 07:14:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:51.097 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:51.097 07:14:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:51.097 07:14:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:51.097 [2024-11-20 07:14:55.471510] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 00:18:51.097 [2024-11-20 07:14:55.471558] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:51.097 [2024-11-20 07:14:55.551810] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:51.097 [2024-11-20 07:14:55.591757] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:51.097 [2024-11-20 07:14:55.591789] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:51.097 [2024-11-20 07:14:55.591796] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:51.097 [2024-11-20 07:14:55.591802] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:51.097 [2024-11-20 07:14:55.591807] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:51.097 [2024-11-20 07:14:55.592399] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:51.356 07:14:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:51.356 07:14:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:18:51.356 07:14:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:51.356 07:14:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:51.356 07:14:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:51.356 07:14:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:51.356 07:14:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:18:51.356 07:14:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.356 07:14:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:51.356 [2024-11-20 07:14:55.737370] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:51.356 malloc0 00:18:51.356 [2024-11-20 07:14:55.765545] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:51.356 [2024-11-20 07:14:55.765744] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:51.356 07:14:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.356 07:14:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=1214448 00:18:51.356 07:14:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 1214448 /var/tmp/bdevperf.sock 00:18:51.356 07:14:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf 
-m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:18:51.356 07:14:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 1214448 ']' 00:18:51.356 07:14:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:51.356 07:14:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:51.356 07:14:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:51.357 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:51.357 07:14:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:51.357 07:14:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:51.357 [2024-11-20 07:14:55.841718] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 
00:18:51.357 [2024-11-20 07:14:55.841758] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1214448 ] 00:18:51.615 [2024-11-20 07:14:55.915876] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:51.615 [2024-11-20 07:14:55.956781] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:51.615 07:14:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:51.615 07:14:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:18:51.615 07:14:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.lWWn6jfH0s 00:18:51.874 07:14:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:18:52.132 [2024-11-20 07:14:56.426429] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:52.132 nvme0n1 00:18:52.132 07:14:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:52.132 Running I/O for 1 seconds... 
00:18:53.326 5138.00 IOPS, 20.07 MiB/s 00:18:53.326 Latency(us) 00:18:53.326 [2024-11-20T06:14:57.882Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:53.326 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:53.326 Verification LBA range: start 0x0 length 0x2000 00:18:53.326 nvme0n1 : 1.02 5167.02 20.18 0.00 0.00 24555.08 6154.69 25416.57 00:18:53.326 [2024-11-20T06:14:57.882Z] =================================================================================================================== 00:18:53.326 [2024-11-20T06:14:57.882Z] Total : 5167.02 20.18 0.00 0.00 24555.08 6154.69 25416.57 00:18:53.326 { 00:18:53.326 "results": [ 00:18:53.326 { 00:18:53.326 "job": "nvme0n1", 00:18:53.326 "core_mask": "0x2", 00:18:53.326 "workload": "verify", 00:18:53.326 "status": "finished", 00:18:53.326 "verify_range": { 00:18:53.326 "start": 0, 00:18:53.326 "length": 8192 00:18:53.326 }, 00:18:53.326 "queue_depth": 128, 00:18:53.326 "io_size": 4096, 00:18:53.326 "runtime": 1.019156, 00:18:53.326 "iops": 5167.020554262546, 00:18:53.326 "mibps": 20.18367404008807, 00:18:53.326 "io_failed": 0, 00:18:53.326 "io_timeout": 0, 00:18:53.326 "avg_latency_us": 24555.07596624779, 00:18:53.326 "min_latency_us": 6154.685217391304, 00:18:53.326 "max_latency_us": 25416.57043478261 00:18:53.326 } 00:18:53.326 ], 00:18:53.326 "core_count": 1 00:18:53.326 } 00:18:53.326 07:14:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:18:53.326 07:14:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.326 07:14:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:53.326 07:14:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.326 07:14:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:18:53.326 "subsystems": [ 00:18:53.326 { 00:18:53.326 "subsystem": 
"keyring", 00:18:53.326 "config": [ 00:18:53.326 { 00:18:53.326 "method": "keyring_file_add_key", 00:18:53.326 "params": { 00:18:53.326 "name": "key0", 00:18:53.326 "path": "/tmp/tmp.lWWn6jfH0s" 00:18:53.326 } 00:18:53.326 } 00:18:53.326 ] 00:18:53.326 }, 00:18:53.326 { 00:18:53.326 "subsystem": "iobuf", 00:18:53.326 "config": [ 00:18:53.326 { 00:18:53.326 "method": "iobuf_set_options", 00:18:53.326 "params": { 00:18:53.326 "small_pool_count": 8192, 00:18:53.326 "large_pool_count": 1024, 00:18:53.326 "small_bufsize": 8192, 00:18:53.326 "large_bufsize": 135168, 00:18:53.326 "enable_numa": false 00:18:53.326 } 00:18:53.326 } 00:18:53.326 ] 00:18:53.326 }, 00:18:53.326 { 00:18:53.326 "subsystem": "sock", 00:18:53.326 "config": [ 00:18:53.326 { 00:18:53.326 "method": "sock_set_default_impl", 00:18:53.326 "params": { 00:18:53.326 "impl_name": "posix" 00:18:53.326 } 00:18:53.326 }, 00:18:53.326 { 00:18:53.326 "method": "sock_impl_set_options", 00:18:53.326 "params": { 00:18:53.326 "impl_name": "ssl", 00:18:53.326 "recv_buf_size": 4096, 00:18:53.326 "send_buf_size": 4096, 00:18:53.326 "enable_recv_pipe": true, 00:18:53.326 "enable_quickack": false, 00:18:53.326 "enable_placement_id": 0, 00:18:53.326 "enable_zerocopy_send_server": true, 00:18:53.326 "enable_zerocopy_send_client": false, 00:18:53.326 "zerocopy_threshold": 0, 00:18:53.326 "tls_version": 0, 00:18:53.326 "enable_ktls": false 00:18:53.326 } 00:18:53.326 }, 00:18:53.326 { 00:18:53.326 "method": "sock_impl_set_options", 00:18:53.326 "params": { 00:18:53.326 "impl_name": "posix", 00:18:53.326 "recv_buf_size": 2097152, 00:18:53.326 "send_buf_size": 2097152, 00:18:53.326 "enable_recv_pipe": true, 00:18:53.326 "enable_quickack": false, 00:18:53.326 "enable_placement_id": 0, 00:18:53.326 "enable_zerocopy_send_server": true, 00:18:53.326 "enable_zerocopy_send_client": false, 00:18:53.326 "zerocopy_threshold": 0, 00:18:53.326 "tls_version": 0, 00:18:53.326 "enable_ktls": false 00:18:53.326 } 00:18:53.326 } 00:18:53.326 
] 00:18:53.326 }, 00:18:53.326 { 00:18:53.326 "subsystem": "vmd", 00:18:53.326 "config": [] 00:18:53.326 }, 00:18:53.326 { 00:18:53.326 "subsystem": "accel", 00:18:53.326 "config": [ 00:18:53.326 { 00:18:53.326 "method": "accel_set_options", 00:18:53.326 "params": { 00:18:53.326 "small_cache_size": 128, 00:18:53.326 "large_cache_size": 16, 00:18:53.326 "task_count": 2048, 00:18:53.326 "sequence_count": 2048, 00:18:53.326 "buf_count": 2048 00:18:53.326 } 00:18:53.326 } 00:18:53.326 ] 00:18:53.326 }, 00:18:53.326 { 00:18:53.326 "subsystem": "bdev", 00:18:53.326 "config": [ 00:18:53.326 { 00:18:53.326 "method": "bdev_set_options", 00:18:53.326 "params": { 00:18:53.326 "bdev_io_pool_size": 65535, 00:18:53.326 "bdev_io_cache_size": 256, 00:18:53.326 "bdev_auto_examine": true, 00:18:53.326 "iobuf_small_cache_size": 128, 00:18:53.326 "iobuf_large_cache_size": 16 00:18:53.326 } 00:18:53.326 }, 00:18:53.326 { 00:18:53.326 "method": "bdev_raid_set_options", 00:18:53.326 "params": { 00:18:53.326 "process_window_size_kb": 1024, 00:18:53.326 "process_max_bandwidth_mb_sec": 0 00:18:53.326 } 00:18:53.326 }, 00:18:53.326 { 00:18:53.326 "method": "bdev_iscsi_set_options", 00:18:53.326 "params": { 00:18:53.326 "timeout_sec": 30 00:18:53.326 } 00:18:53.326 }, 00:18:53.326 { 00:18:53.326 "method": "bdev_nvme_set_options", 00:18:53.326 "params": { 00:18:53.326 "action_on_timeout": "none", 00:18:53.326 "timeout_us": 0, 00:18:53.326 "timeout_admin_us": 0, 00:18:53.326 "keep_alive_timeout_ms": 10000, 00:18:53.326 "arbitration_burst": 0, 00:18:53.326 "low_priority_weight": 0, 00:18:53.326 "medium_priority_weight": 0, 00:18:53.326 "high_priority_weight": 0, 00:18:53.326 "nvme_adminq_poll_period_us": 10000, 00:18:53.326 "nvme_ioq_poll_period_us": 0, 00:18:53.326 "io_queue_requests": 0, 00:18:53.326 "delay_cmd_submit": true, 00:18:53.326 "transport_retry_count": 4, 00:18:53.326 "bdev_retry_count": 3, 00:18:53.326 "transport_ack_timeout": 0, 00:18:53.326 "ctrlr_loss_timeout_sec": 0, 
00:18:53.326 "reconnect_delay_sec": 0, 00:18:53.326 "fast_io_fail_timeout_sec": 0, 00:18:53.326 "disable_auto_failback": false, 00:18:53.326 "generate_uuids": false, 00:18:53.326 "transport_tos": 0, 00:18:53.326 "nvme_error_stat": false, 00:18:53.326 "rdma_srq_size": 0, 00:18:53.326 "io_path_stat": false, 00:18:53.326 "allow_accel_sequence": false, 00:18:53.326 "rdma_max_cq_size": 0, 00:18:53.326 "rdma_cm_event_timeout_ms": 0, 00:18:53.326 "dhchap_digests": [ 00:18:53.326 "sha256", 00:18:53.326 "sha384", 00:18:53.326 "sha512" 00:18:53.326 ], 00:18:53.326 "dhchap_dhgroups": [ 00:18:53.326 "null", 00:18:53.326 "ffdhe2048", 00:18:53.326 "ffdhe3072", 00:18:53.326 "ffdhe4096", 00:18:53.326 "ffdhe6144", 00:18:53.326 "ffdhe8192" 00:18:53.326 ] 00:18:53.326 } 00:18:53.326 }, 00:18:53.326 { 00:18:53.326 "method": "bdev_nvme_set_hotplug", 00:18:53.326 "params": { 00:18:53.326 "period_us": 100000, 00:18:53.326 "enable": false 00:18:53.326 } 00:18:53.326 }, 00:18:53.326 { 00:18:53.326 "method": "bdev_malloc_create", 00:18:53.326 "params": { 00:18:53.326 "name": "malloc0", 00:18:53.326 "num_blocks": 8192, 00:18:53.326 "block_size": 4096, 00:18:53.326 "physical_block_size": 4096, 00:18:53.326 "uuid": "e6f919aa-a2b4-4799-a43d-327de7dd8ca8", 00:18:53.326 "optimal_io_boundary": 0, 00:18:53.326 "md_size": 0, 00:18:53.326 "dif_type": 0, 00:18:53.326 "dif_is_head_of_md": false, 00:18:53.326 "dif_pi_format": 0 00:18:53.326 } 00:18:53.326 }, 00:18:53.326 { 00:18:53.326 "method": "bdev_wait_for_examine" 00:18:53.326 } 00:18:53.326 ] 00:18:53.326 }, 00:18:53.326 { 00:18:53.326 "subsystem": "nbd", 00:18:53.326 "config": [] 00:18:53.327 }, 00:18:53.327 { 00:18:53.327 "subsystem": "scheduler", 00:18:53.327 "config": [ 00:18:53.327 { 00:18:53.327 "method": "framework_set_scheduler", 00:18:53.327 "params": { 00:18:53.327 "name": "static" 00:18:53.327 } 00:18:53.327 } 00:18:53.327 ] 00:18:53.327 }, 00:18:53.327 { 00:18:53.327 "subsystem": "nvmf", 00:18:53.327 "config": [ 00:18:53.327 { 
00:18:53.327 "method": "nvmf_set_config", 00:18:53.327 "params": { 00:18:53.327 "discovery_filter": "match_any", 00:18:53.327 "admin_cmd_passthru": { 00:18:53.327 "identify_ctrlr": false 00:18:53.327 }, 00:18:53.327 "dhchap_digests": [ 00:18:53.327 "sha256", 00:18:53.327 "sha384", 00:18:53.327 "sha512" 00:18:53.327 ], 00:18:53.327 "dhchap_dhgroups": [ 00:18:53.327 "null", 00:18:53.327 "ffdhe2048", 00:18:53.327 "ffdhe3072", 00:18:53.327 "ffdhe4096", 00:18:53.327 "ffdhe6144", 00:18:53.327 "ffdhe8192" 00:18:53.327 ] 00:18:53.327 } 00:18:53.327 }, 00:18:53.327 { 00:18:53.327 "method": "nvmf_set_max_subsystems", 00:18:53.327 "params": { 00:18:53.327 "max_subsystems": 1024 00:18:53.327 } 00:18:53.327 }, 00:18:53.327 { 00:18:53.327 "method": "nvmf_set_crdt", 00:18:53.327 "params": { 00:18:53.327 "crdt1": 0, 00:18:53.327 "crdt2": 0, 00:18:53.327 "crdt3": 0 00:18:53.327 } 00:18:53.327 }, 00:18:53.327 { 00:18:53.327 "method": "nvmf_create_transport", 00:18:53.327 "params": { 00:18:53.327 "trtype": "TCP", 00:18:53.327 "max_queue_depth": 128, 00:18:53.327 "max_io_qpairs_per_ctrlr": 127, 00:18:53.327 "in_capsule_data_size": 4096, 00:18:53.327 "max_io_size": 131072, 00:18:53.327 "io_unit_size": 131072, 00:18:53.327 "max_aq_depth": 128, 00:18:53.327 "num_shared_buffers": 511, 00:18:53.327 "buf_cache_size": 4294967295, 00:18:53.327 "dif_insert_or_strip": false, 00:18:53.327 "zcopy": false, 00:18:53.327 "c2h_success": false, 00:18:53.327 "sock_priority": 0, 00:18:53.327 "abort_timeout_sec": 1, 00:18:53.327 "ack_timeout": 0, 00:18:53.327 "data_wr_pool_size": 0 00:18:53.327 } 00:18:53.327 }, 00:18:53.327 { 00:18:53.327 "method": "nvmf_create_subsystem", 00:18:53.327 "params": { 00:18:53.327 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:53.327 "allow_any_host": false, 00:18:53.327 "serial_number": "00000000000000000000", 00:18:53.327 "model_number": "SPDK bdev Controller", 00:18:53.327 "max_namespaces": 32, 00:18:53.327 "min_cntlid": 1, 00:18:53.327 "max_cntlid": 65519, 00:18:53.327 
"ana_reporting": false 00:18:53.327 } 00:18:53.327 }, 00:18:53.327 { 00:18:53.327 "method": "nvmf_subsystem_add_host", 00:18:53.327 "params": { 00:18:53.327 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:53.327 "host": "nqn.2016-06.io.spdk:host1", 00:18:53.327 "psk": "key0" 00:18:53.327 } 00:18:53.327 }, 00:18:53.327 { 00:18:53.327 "method": "nvmf_subsystem_add_ns", 00:18:53.327 "params": { 00:18:53.327 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:53.327 "namespace": { 00:18:53.327 "nsid": 1, 00:18:53.327 "bdev_name": "malloc0", 00:18:53.327 "nguid": "E6F919AAA2B44799A43D327DE7DD8CA8", 00:18:53.327 "uuid": "e6f919aa-a2b4-4799-a43d-327de7dd8ca8", 00:18:53.327 "no_auto_visible": false 00:18:53.327 } 00:18:53.327 } 00:18:53.327 }, 00:18:53.327 { 00:18:53.327 "method": "nvmf_subsystem_add_listener", 00:18:53.327 "params": { 00:18:53.327 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:53.327 "listen_address": { 00:18:53.327 "trtype": "TCP", 00:18:53.327 "adrfam": "IPv4", 00:18:53.327 "traddr": "10.0.0.2", 00:18:53.327 "trsvcid": "4420" 00:18:53.327 }, 00:18:53.327 "secure_channel": false, 00:18:53.327 "sock_impl": "ssl" 00:18:53.327 } 00:18:53.327 } 00:18:53.327 ] 00:18:53.327 } 00:18:53.327 ] 00:18:53.327 }' 00:18:53.327 07:14:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:18:53.587 07:14:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:18:53.587 "subsystems": [ 00:18:53.587 { 00:18:53.587 "subsystem": "keyring", 00:18:53.587 "config": [ 00:18:53.587 { 00:18:53.587 "method": "keyring_file_add_key", 00:18:53.587 "params": { 00:18:53.587 "name": "key0", 00:18:53.587 "path": "/tmp/tmp.lWWn6jfH0s" 00:18:53.587 } 00:18:53.587 } 00:18:53.587 ] 00:18:53.587 }, 00:18:53.587 { 00:18:53.587 "subsystem": "iobuf", 00:18:53.587 "config": [ 00:18:53.587 { 00:18:53.587 "method": "iobuf_set_options", 00:18:53.587 "params": { 00:18:53.587 
"small_pool_count": 8192, 00:18:53.587 "large_pool_count": 1024, 00:18:53.587 "small_bufsize": 8192, 00:18:53.587 "large_bufsize": 135168, 00:18:53.587 "enable_numa": false 00:18:53.587 } 00:18:53.587 } 00:18:53.587 ] 00:18:53.587 }, 00:18:53.587 { 00:18:53.587 "subsystem": "sock", 00:18:53.587 "config": [ 00:18:53.587 { 00:18:53.587 "method": "sock_set_default_impl", 00:18:53.587 "params": { 00:18:53.587 "impl_name": "posix" 00:18:53.587 } 00:18:53.587 }, 00:18:53.587 { 00:18:53.587 "method": "sock_impl_set_options", 00:18:53.587 "params": { 00:18:53.587 "impl_name": "ssl", 00:18:53.587 "recv_buf_size": 4096, 00:18:53.587 "send_buf_size": 4096, 00:18:53.587 "enable_recv_pipe": true, 00:18:53.587 "enable_quickack": false, 00:18:53.587 "enable_placement_id": 0, 00:18:53.587 "enable_zerocopy_send_server": true, 00:18:53.587 "enable_zerocopy_send_client": false, 00:18:53.587 "zerocopy_threshold": 0, 00:18:53.587 "tls_version": 0, 00:18:53.587 "enable_ktls": false 00:18:53.587 } 00:18:53.587 }, 00:18:53.587 { 00:18:53.587 "method": "sock_impl_set_options", 00:18:53.587 "params": { 00:18:53.587 "impl_name": "posix", 00:18:53.587 "recv_buf_size": 2097152, 00:18:53.587 "send_buf_size": 2097152, 00:18:53.587 "enable_recv_pipe": true, 00:18:53.587 "enable_quickack": false, 00:18:53.587 "enable_placement_id": 0, 00:18:53.587 "enable_zerocopy_send_server": true, 00:18:53.587 "enable_zerocopy_send_client": false, 00:18:53.587 "zerocopy_threshold": 0, 00:18:53.587 "tls_version": 0, 00:18:53.587 "enable_ktls": false 00:18:53.587 } 00:18:53.587 } 00:18:53.587 ] 00:18:53.587 }, 00:18:53.587 { 00:18:53.587 "subsystem": "vmd", 00:18:53.587 "config": [] 00:18:53.587 }, 00:18:53.587 { 00:18:53.587 "subsystem": "accel", 00:18:53.587 "config": [ 00:18:53.587 { 00:18:53.587 "method": "accel_set_options", 00:18:53.587 "params": { 00:18:53.587 "small_cache_size": 128, 00:18:53.587 "large_cache_size": 16, 00:18:53.587 "task_count": 2048, 00:18:53.587 "sequence_count": 2048, 00:18:53.587 
"buf_count": 2048 00:18:53.587 } 00:18:53.587 } 00:18:53.587 ] 00:18:53.587 }, 00:18:53.587 { 00:18:53.587 "subsystem": "bdev", 00:18:53.587 "config": [ 00:18:53.587 { 00:18:53.587 "method": "bdev_set_options", 00:18:53.587 "params": { 00:18:53.587 "bdev_io_pool_size": 65535, 00:18:53.587 "bdev_io_cache_size": 256, 00:18:53.587 "bdev_auto_examine": true, 00:18:53.587 "iobuf_small_cache_size": 128, 00:18:53.587 "iobuf_large_cache_size": 16 00:18:53.587 } 00:18:53.587 }, 00:18:53.587 { 00:18:53.587 "method": "bdev_raid_set_options", 00:18:53.587 "params": { 00:18:53.587 "process_window_size_kb": 1024, 00:18:53.587 "process_max_bandwidth_mb_sec": 0 00:18:53.587 } 00:18:53.587 }, 00:18:53.587 { 00:18:53.587 "method": "bdev_iscsi_set_options", 00:18:53.587 "params": { 00:18:53.587 "timeout_sec": 30 00:18:53.587 } 00:18:53.587 }, 00:18:53.587 { 00:18:53.587 "method": "bdev_nvme_set_options", 00:18:53.587 "params": { 00:18:53.587 "action_on_timeout": "none", 00:18:53.587 "timeout_us": 0, 00:18:53.587 "timeout_admin_us": 0, 00:18:53.587 "keep_alive_timeout_ms": 10000, 00:18:53.587 "arbitration_burst": 0, 00:18:53.587 "low_priority_weight": 0, 00:18:53.587 "medium_priority_weight": 0, 00:18:53.587 "high_priority_weight": 0, 00:18:53.587 "nvme_adminq_poll_period_us": 10000, 00:18:53.587 "nvme_ioq_poll_period_us": 0, 00:18:53.587 "io_queue_requests": 512, 00:18:53.587 "delay_cmd_submit": true, 00:18:53.587 "transport_retry_count": 4, 00:18:53.587 "bdev_retry_count": 3, 00:18:53.587 "transport_ack_timeout": 0, 00:18:53.587 "ctrlr_loss_timeout_sec": 0, 00:18:53.587 "reconnect_delay_sec": 0, 00:18:53.587 "fast_io_fail_timeout_sec": 0, 00:18:53.587 "disable_auto_failback": false, 00:18:53.587 "generate_uuids": false, 00:18:53.587 "transport_tos": 0, 00:18:53.587 "nvme_error_stat": false, 00:18:53.587 "rdma_srq_size": 0, 00:18:53.587 "io_path_stat": false, 00:18:53.587 "allow_accel_sequence": false, 00:18:53.587 "rdma_max_cq_size": 0, 00:18:53.587 "rdma_cm_event_timeout_ms": 0, 
00:18:53.587 "dhchap_digests": [ 00:18:53.587 "sha256", 00:18:53.587 "sha384", 00:18:53.587 "sha512" 00:18:53.587 ], 00:18:53.587 "dhchap_dhgroups": [ 00:18:53.587 "null", 00:18:53.587 "ffdhe2048", 00:18:53.587 "ffdhe3072", 00:18:53.587 "ffdhe4096", 00:18:53.587 "ffdhe6144", 00:18:53.587 "ffdhe8192" 00:18:53.587 ] 00:18:53.587 } 00:18:53.587 }, 00:18:53.587 { 00:18:53.587 "method": "bdev_nvme_attach_controller", 00:18:53.587 "params": { 00:18:53.587 "name": "nvme0", 00:18:53.587 "trtype": "TCP", 00:18:53.587 "adrfam": "IPv4", 00:18:53.587 "traddr": "10.0.0.2", 00:18:53.587 "trsvcid": "4420", 00:18:53.587 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:53.587 "prchk_reftag": false, 00:18:53.587 "prchk_guard": false, 00:18:53.587 "ctrlr_loss_timeout_sec": 0, 00:18:53.587 "reconnect_delay_sec": 0, 00:18:53.587 "fast_io_fail_timeout_sec": 0, 00:18:53.587 "psk": "key0", 00:18:53.587 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:53.587 "hdgst": false, 00:18:53.587 "ddgst": false, 00:18:53.587 "multipath": "multipath" 00:18:53.587 } 00:18:53.587 }, 00:18:53.587 { 00:18:53.587 "method": "bdev_nvme_set_hotplug", 00:18:53.587 "params": { 00:18:53.587 "period_us": 100000, 00:18:53.587 "enable": false 00:18:53.587 } 00:18:53.587 }, 00:18:53.587 { 00:18:53.587 "method": "bdev_enable_histogram", 00:18:53.587 "params": { 00:18:53.587 "name": "nvme0n1", 00:18:53.587 "enable": true 00:18:53.588 } 00:18:53.588 }, 00:18:53.588 { 00:18:53.588 "method": "bdev_wait_for_examine" 00:18:53.588 } 00:18:53.588 ] 00:18:53.588 }, 00:18:53.588 { 00:18:53.588 "subsystem": "nbd", 00:18:53.588 "config": [] 00:18:53.588 } 00:18:53.588 ] 00:18:53.588 }' 00:18:53.588 07:14:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 1214448 00:18:53.588 07:14:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 1214448 ']' 00:18:53.588 07:14:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 1214448 00:18:53.588 07:14:58 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:18:53.588 07:14:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:53.588 07:14:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1214448 00:18:53.588 07:14:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:18:53.588 07:14:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:18:53.588 07:14:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1214448' 00:18:53.588 killing process with pid 1214448 00:18:53.588 07:14:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 1214448 00:18:53.588 Received shutdown signal, test time was about 1.000000 seconds 00:18:53.588 00:18:53.588 Latency(us) 00:18:53.588 [2024-11-20T06:14:58.144Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:53.588 [2024-11-20T06:14:58.144Z] =================================================================================================================== 00:18:53.588 [2024-11-20T06:14:58.144Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:53.588 07:14:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 1214448 00:18:53.847 07:14:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 1214426 00:18:53.847 07:14:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 1214426 ']' 00:18:53.847 07:14:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 1214426 00:18:53.847 07:14:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:18:53.847 07:14:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:53.847 
07:14:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1214426 00:18:53.847 07:14:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:18:53.847 07:14:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:18:53.847 07:14:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1214426' 00:18:53.847 killing process with pid 1214426 00:18:53.847 07:14:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 1214426 00:18:53.847 07:14:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 1214426 00:18:54.106 07:14:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:18:54.106 07:14:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:54.106 07:14:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:54.106 07:14:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:18:54.106 "subsystems": [ 00:18:54.106 { 00:18:54.106 "subsystem": "keyring", 00:18:54.106 "config": [ 00:18:54.106 { 00:18:54.106 "method": "keyring_file_add_key", 00:18:54.106 "params": { 00:18:54.106 "name": "key0", 00:18:54.106 "path": "/tmp/tmp.lWWn6jfH0s" 00:18:54.106 } 00:18:54.106 } 00:18:54.106 ] 00:18:54.106 }, 00:18:54.106 { 00:18:54.106 "subsystem": "iobuf", 00:18:54.106 "config": [ 00:18:54.106 { 00:18:54.106 "method": "iobuf_set_options", 00:18:54.106 "params": { 00:18:54.106 "small_pool_count": 8192, 00:18:54.106 "large_pool_count": 1024, 00:18:54.106 "small_bufsize": 8192, 00:18:54.106 "large_bufsize": 135168, 00:18:54.106 "enable_numa": false 00:18:54.106 } 00:18:54.106 } 00:18:54.106 ] 00:18:54.106 }, 00:18:54.106 { 00:18:54.106 "subsystem": "sock", 00:18:54.106 "config": [ 
00:18:54.106 { 00:18:54.106 "method": "sock_set_default_impl", 00:18:54.106 "params": { 00:18:54.106 "impl_name": "posix" 00:18:54.106 } 00:18:54.106 }, 00:18:54.106 { 00:18:54.106 "method": "sock_impl_set_options", 00:18:54.106 "params": { 00:18:54.106 "impl_name": "ssl", 00:18:54.106 "recv_buf_size": 4096, 00:18:54.106 "send_buf_size": 4096, 00:18:54.106 "enable_recv_pipe": true, 00:18:54.106 "enable_quickack": false, 00:18:54.106 "enable_placement_id": 0, 00:18:54.106 "enable_zerocopy_send_server": true, 00:18:54.106 "enable_zerocopy_send_client": false, 00:18:54.106 "zerocopy_threshold": 0, 00:18:54.106 "tls_version": 0, 00:18:54.106 "enable_ktls": false 00:18:54.106 } 00:18:54.106 }, 00:18:54.106 { 00:18:54.106 "method": "sock_impl_set_options", 00:18:54.106 "params": { 00:18:54.106 "impl_name": "posix", 00:18:54.106 "recv_buf_size": 2097152, 00:18:54.106 "send_buf_size": 2097152, 00:18:54.106 "enable_recv_pipe": true, 00:18:54.106 "enable_quickack": false, 00:18:54.106 "enable_placement_id": 0, 00:18:54.106 "enable_zerocopy_send_server": true, 00:18:54.106 "enable_zerocopy_send_client": false, 00:18:54.106 "zerocopy_threshold": 0, 00:18:54.106 "tls_version": 0, 00:18:54.106 "enable_ktls": false 00:18:54.106 } 00:18:54.106 } 00:18:54.106 ] 00:18:54.106 }, 00:18:54.106 { 00:18:54.106 "subsystem": "vmd", 00:18:54.106 "config": [] 00:18:54.106 }, 00:18:54.106 { 00:18:54.106 "subsystem": "accel", 00:18:54.106 "config": [ 00:18:54.106 { 00:18:54.106 "method": "accel_set_options", 00:18:54.106 "params": { 00:18:54.106 "small_cache_size": 128, 00:18:54.107 "large_cache_size": 16, 00:18:54.107 "task_count": 2048, 00:18:54.107 "sequence_count": 2048, 00:18:54.107 "buf_count": 2048 00:18:54.107 } 00:18:54.107 } 00:18:54.107 ] 00:18:54.107 }, 00:18:54.107 { 00:18:54.107 "subsystem": "bdev", 00:18:54.107 "config": [ 00:18:54.107 { 00:18:54.107 "method": "bdev_set_options", 00:18:54.107 "params": { 00:18:54.107 "bdev_io_pool_size": 65535, 00:18:54.107 "bdev_io_cache_size": 
256, 00:18:54.107 "bdev_auto_examine": true, 00:18:54.107 "iobuf_small_cache_size": 128, 00:18:54.107 "iobuf_large_cache_size": 16 00:18:54.107 } 00:18:54.107 }, 00:18:54.107 { 00:18:54.107 "method": "bdev_raid_set_options", 00:18:54.107 "params": { 00:18:54.107 "process_window_size_kb": 1024, 00:18:54.107 "process_max_bandwidth_mb_sec": 0 00:18:54.107 } 00:18:54.107 }, 00:18:54.107 { 00:18:54.107 "method": "bdev_iscsi_set_options", 00:18:54.107 "params": { 00:18:54.107 "timeout_sec": 30 00:18:54.107 } 00:18:54.107 }, 00:18:54.107 { 00:18:54.107 "method": "bdev_nvme_set_options", 00:18:54.107 "params": { 00:18:54.107 "action_on_timeout": "none", 00:18:54.107 "timeout_us": 0, 00:18:54.107 "timeout_admin_us": 0, 00:18:54.107 "keep_alive_timeout_ms": 10000, 00:18:54.107 "arbitration_burst": 0, 00:18:54.107 "low_priority_weight": 0, 00:18:54.107 "medium_priority_weight": 0, 00:18:54.107 "high_priority_weight": 0, 00:18:54.107 "nvme_adminq_poll_period_us": 10000, 00:18:54.107 "nvme_ioq_poll_period_us": 0, 00:18:54.107 "io_queue_requests": 0, 00:18:54.107 "delay_cmd_submit": true, 00:18:54.107 "transport_retry_count": 4, 00:18:54.107 "bdev_retry_count": 3, 00:18:54.107 "transport_ack_timeout": 0, 00:18:54.107 "ctrlr_loss_timeout_sec": 0, 00:18:54.107 "reconnect_delay_sec": 0, 00:18:54.107 "fast_io_fail_timeout_sec": 0, 00:18:54.107 "disable_auto_failback": false, 00:18:54.107 "generate_uuids": false, 00:18:54.107 "transport_tos": 0, 00:18:54.107 "nvme_error_stat": false, 00:18:54.107 "rdma_srq_size": 0, 00:18:54.107 "io_path_stat": false, 00:18:54.107 "allow_accel_sequence": false, 00:18:54.107 "rdma_max_cq_size": 0, 00:18:54.107 "rdma_cm_event_timeout_ms": 0, 00:18:54.107 "dhchap_digests": [ 00:18:54.107 "sha256", 00:18:54.107 "sha384", 00:18:54.107 "sha512" 00:18:54.107 ], 00:18:54.107 "dhchap_dhgroups": [ 00:18:54.107 "null", 00:18:54.107 "ffdhe2048", 00:18:54.107 "ffdhe3072", 00:18:54.107 "ffdhe4096", 00:18:54.107 "ffdhe6144", 00:18:54.107 "ffdhe8192" 00:18:54.107 ] 
00:18:54.107 } 00:18:54.107 }, 00:18:54.107 { 00:18:54.107 "method": "bdev_nvme_set_hotplug", 00:18:54.107 "params": { 00:18:54.107 "period_us": 100000, 00:18:54.107 "enable": false 00:18:54.107 } 00:18:54.107 }, 00:18:54.107 { 00:18:54.107 "method": "bdev_malloc_create", 00:18:54.107 "params": { 00:18:54.107 "name": "malloc0", 00:18:54.107 "num_blocks": 8192, 00:18:54.107 "block_size": 4096, 00:18:54.107 "physical_block_size": 4096, 00:18:54.107 "uuid": "e6f919aa-a2b4-4799-a43d-327de7dd8ca8", 00:18:54.107 "optimal_io_boundary": 0, 00:18:54.107 "md_size": 0, 00:18:54.107 "dif_type": 0, 00:18:54.107 "dif_is_head_of_md": false, 00:18:54.107 "dif_pi_format": 0 00:18:54.107 } 00:18:54.107 }, 00:18:54.107 { 00:18:54.107 "method": "bdev_wait_for_examine" 00:18:54.107 } 00:18:54.107 ] 00:18:54.107 }, 00:18:54.107 { 00:18:54.107 "subsystem": "nbd", 00:18:54.107 "config": [] 00:18:54.107 }, 00:18:54.107 { 00:18:54.107 "subsystem": "scheduler", 00:18:54.107 "config": [ 00:18:54.107 { 00:18:54.107 "method": "framework_set_scheduler", 00:18:54.107 "params": { 00:18:54.107 "name": "static" 00:18:54.107 } 00:18:54.107 } 00:18:54.107 ] 00:18:54.107 }, 00:18:54.107 { 00:18:54.107 "subsystem": "nvmf", 00:18:54.107 "config": [ 00:18:54.107 { 00:18:54.107 "method": "nvmf_set_config", 00:18:54.107 "params": { 00:18:54.107 "discovery_filter": "match_any", 00:18:54.107 "admin_cmd_passthru": { 00:18:54.107 "identify_ctrlr": false 00:18:54.107 }, 00:18:54.107 "dhchap_digests": [ 00:18:54.107 "sha256", 00:18:54.107 "sha384", 00:18:54.107 "sha512" 00:18:54.107 ], 00:18:54.107 "dhchap_dhgroups": [ 00:18:54.107 "null", 00:18:54.107 "ffdhe2048", 00:18:54.107 "ffdhe3072", 00:18:54.107 "ffdhe4096", 00:18:54.107 "ffdhe6144", 00:18:54.107 "ffdhe8192" 00:18:54.107 ] 00:18:54.107 } 00:18:54.107 }, 00:18:54.107 { 00:18:54.107 "method": "nvmf_set_max_subsystems", 00:18:54.107 "params": { 00:18:54.107 "max_subsystems": 1024 00:18:54.107 } 00:18:54.107 }, 00:18:54.107 { 00:18:54.107 "method": 
"nvmf_set_crdt", 00:18:54.107 "params": { 00:18:54.107 "crdt1": 0, 00:18:54.107 "crdt2": 0, 00:18:54.107 "crdt3": 0 00:18:54.107 } 00:18:54.107 }, 00:18:54.107 { 00:18:54.107 "method": "nvmf_create_transport", 00:18:54.107 "params": { 00:18:54.107 "trtype": "TCP", 00:18:54.107 "max_queue_depth": 128, 00:18:54.107 "max_io_qpairs_per_ctrlr": 127, 00:18:54.107 "in_capsule_data_size": 4096, 00:18:54.107 "max_io_size": 131072, 00:18:54.107 "io_unit_size": 131072, 00:18:54.107 "max_aq_depth": 128, 00:18:54.107 "num_shared_buffers": 511, 00:18:54.107 "buf_cache_size": 4294967295, 00:18:54.107 "dif_insert_or_strip": false, 00:18:54.107 "zcopy": false, 00:18:54.107 "c2h_success": false, 00:18:54.107 "sock_priority": 0, 00:18:54.107 "abort_timeout_sec": 1, 00:18:54.107 "ack_timeout": 0, 00:18:54.107 "data_wr_pool_size": 0 00:18:54.107 } 00:18:54.107 }, 00:18:54.107 { 00:18:54.107 "method": "nvmf_create_subsystem", 00:18:54.107 "params": { 00:18:54.107 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:54.107 "allow_any_host": false, 00:18:54.107 "serial_number": "00000000000000000000", 00:18:54.107 "model_number": "SPDK bdev Controller", 00:18:54.107 "max_namespaces": 32, 00:18:54.107 "min_cntlid": 1, 00:18:54.107 "max_cntlid": 65519, 00:18:54.107 "ana_reporting": false 00:18:54.107 } 00:18:54.107 }, 00:18:54.107 { 00:18:54.107 "method": "nvmf_subsystem_add_host", 00:18:54.107 "params": { 00:18:54.107 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:54.107 "host": "nqn.2016-06.io.spdk:host1", 00:18:54.107 "psk": "key0" 00:18:54.107 } 00:18:54.107 }, 00:18:54.107 { 00:18:54.107 "method": "nvmf_subsystem_add_ns", 00:18:54.107 "params": { 00:18:54.107 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:54.107 "namespace": { 00:18:54.107 "nsid": 1, 00:18:54.107 "bdev_name": "malloc0", 00:18:54.107 "nguid": "E6F919AAA2B44799A43D327DE7DD8CA8", 00:18:54.107 "uuid": "e6f919aa-a2b4-4799-a43d-327de7dd8ca8", 00:18:54.107 "no_auto_visible": false 00:18:54.107 } 00:18:54.107 } 00:18:54.107 }, 00:18:54.107 { 
00:18:54.107 "method": "nvmf_subsystem_add_listener", 00:18:54.107 "params": { 00:18:54.107 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:54.107 "listen_address": { 00:18:54.107 "trtype": "TCP", 00:18:54.107 "adrfam": "IPv4", 00:18:54.107 "traddr": "10.0.0.2", 00:18:54.107 "trsvcid": "4420" 00:18:54.107 }, 00:18:54.107 "secure_channel": false, 00:18:54.107 "sock_impl": "ssl" 00:18:54.107 } 00:18:54.107 } 00:18:54.107 ] 00:18:54.107 } 00:18:54.107 ] 00:18:54.107 }' 00:18:54.107 07:14:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:54.107 07:14:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1214925 00:18:54.107 07:14:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:18:54.107 07:14:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1214925 00:18:54.107 07:14:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 1214925 ']' 00:18:54.107 07:14:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:54.107 07:14:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:54.107 07:14:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:54.107 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:54.107 07:14:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:54.107 07:14:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:54.107 [2024-11-20 07:14:58.520548] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 
00:18:54.107 [2024-11-20 07:14:58.520593] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:54.107 [2024-11-20 07:14:58.596358] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:54.107 [2024-11-20 07:14:58.636842] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:54.107 [2024-11-20 07:14:58.636879] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:54.108 [2024-11-20 07:14:58.636887] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:54.108 [2024-11-20 07:14:58.636893] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:54.108 [2024-11-20 07:14:58.636898] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:54.108 [2024-11-20 07:14:58.637480] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:54.367 [2024-11-20 07:14:58.852074] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:54.367 [2024-11-20 07:14:58.884115] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:54.367 [2024-11-20 07:14:58.884310] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:54.947 07:14:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:54.947 07:14:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:18:54.947 07:14:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:54.947 07:14:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:54.947 07:14:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:54.947 07:14:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:54.947 07:14:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=1215107 00:18:54.947 07:14:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 1215107 /var/tmp/bdevperf.sock 00:18:54.947 07:14:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 1215107 ']' 00:18:54.947 07:14:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:54.947 07:14:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:18:54.947 07:14:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local 
max_retries=100 00:18:54.947 07:14:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:54.947 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:54.947 07:14:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:18:54.947 "subsystems": [ 00:18:54.947 { 00:18:54.947 "subsystem": "keyring", 00:18:54.947 "config": [ 00:18:54.947 { 00:18:54.947 "method": "keyring_file_add_key", 00:18:54.947 "params": { 00:18:54.947 "name": "key0", 00:18:54.947 "path": "/tmp/tmp.lWWn6jfH0s" 00:18:54.947 } 00:18:54.947 } 00:18:54.947 ] 00:18:54.947 }, 00:18:54.947 { 00:18:54.947 "subsystem": "iobuf", 00:18:54.947 "config": [ 00:18:54.947 { 00:18:54.947 "method": "iobuf_set_options", 00:18:54.947 "params": { 00:18:54.947 "small_pool_count": 8192, 00:18:54.947 "large_pool_count": 1024, 00:18:54.947 "small_bufsize": 8192, 00:18:54.947 "large_bufsize": 135168, 00:18:54.947 "enable_numa": false 00:18:54.947 } 00:18:54.947 } 00:18:54.947 ] 00:18:54.947 }, 00:18:54.947 { 00:18:54.947 "subsystem": "sock", 00:18:54.947 "config": [ 00:18:54.947 { 00:18:54.947 "method": "sock_set_default_impl", 00:18:54.947 "params": { 00:18:54.947 "impl_name": "posix" 00:18:54.947 } 00:18:54.947 }, 00:18:54.947 { 00:18:54.947 "method": "sock_impl_set_options", 00:18:54.947 "params": { 00:18:54.947 "impl_name": "ssl", 00:18:54.947 "recv_buf_size": 4096, 00:18:54.947 "send_buf_size": 4096, 00:18:54.947 "enable_recv_pipe": true, 00:18:54.947 "enable_quickack": false, 00:18:54.947 "enable_placement_id": 0, 00:18:54.947 "enable_zerocopy_send_server": true, 00:18:54.947 "enable_zerocopy_send_client": false, 00:18:54.947 "zerocopy_threshold": 0, 00:18:54.947 "tls_version": 0, 00:18:54.947 "enable_ktls": false 00:18:54.947 } 00:18:54.947 }, 00:18:54.947 { 00:18:54.947 "method": "sock_impl_set_options", 00:18:54.947 "params": { 
00:18:54.947 "impl_name": "posix", 00:18:54.947 "recv_buf_size": 2097152, 00:18:54.947 "send_buf_size": 2097152, 00:18:54.947 "enable_recv_pipe": true, 00:18:54.947 "enable_quickack": false, 00:18:54.947 "enable_placement_id": 0, 00:18:54.947 "enable_zerocopy_send_server": true, 00:18:54.947 "enable_zerocopy_send_client": false, 00:18:54.947 "zerocopy_threshold": 0, 00:18:54.947 "tls_version": 0, 00:18:54.947 "enable_ktls": false 00:18:54.947 } 00:18:54.947 } 00:18:54.947 ] 00:18:54.947 }, 00:18:54.947 { 00:18:54.947 "subsystem": "vmd", 00:18:54.947 "config": [] 00:18:54.947 }, 00:18:54.947 { 00:18:54.947 "subsystem": "accel", 00:18:54.947 "config": [ 00:18:54.947 { 00:18:54.947 "method": "accel_set_options", 00:18:54.947 "params": { 00:18:54.947 "small_cache_size": 128, 00:18:54.947 "large_cache_size": 16, 00:18:54.947 "task_count": 2048, 00:18:54.947 "sequence_count": 2048, 00:18:54.947 "buf_count": 2048 00:18:54.947 } 00:18:54.947 } 00:18:54.947 ] 00:18:54.947 }, 00:18:54.947 { 00:18:54.947 "subsystem": "bdev", 00:18:54.947 "config": [ 00:18:54.947 { 00:18:54.947 "method": "bdev_set_options", 00:18:54.947 "params": { 00:18:54.947 "bdev_io_pool_size": 65535, 00:18:54.947 "bdev_io_cache_size": 256, 00:18:54.947 "bdev_auto_examine": true, 00:18:54.947 "iobuf_small_cache_size": 128, 00:18:54.947 "iobuf_large_cache_size": 16 00:18:54.947 } 00:18:54.947 }, 00:18:54.947 { 00:18:54.947 "method": "bdev_raid_set_options", 00:18:54.947 "params": { 00:18:54.947 "process_window_size_kb": 1024, 00:18:54.947 "process_max_bandwidth_mb_sec": 0 00:18:54.947 } 00:18:54.947 }, 00:18:54.947 { 00:18:54.947 "method": "bdev_iscsi_set_options", 00:18:54.947 "params": { 00:18:54.947 "timeout_sec": 30 00:18:54.947 } 00:18:54.947 }, 00:18:54.947 { 00:18:54.947 "method": "bdev_nvme_set_options", 00:18:54.947 "params": { 00:18:54.947 "action_on_timeout": "none", 00:18:54.947 "timeout_us": 0, 00:18:54.947 "timeout_admin_us": 0, 00:18:54.947 "keep_alive_timeout_ms": 10000, 00:18:54.947 
"arbitration_burst": 0, 00:18:54.947 "low_priority_weight": 0, 00:18:54.947 "medium_priority_weight": 0, 00:18:54.947 "high_priority_weight": 0, 00:18:54.947 "nvme_adminq_poll_period_us": 10000, 00:18:54.947 "nvme_ioq_poll_period_us": 0, 00:18:54.947 "io_queue_requests": 512, 00:18:54.947 "delay_cmd_submit": true, 00:18:54.947 "transport_retry_count": 4, 00:18:54.947 "bdev_retry_count": 3, 00:18:54.947 "transport_ack_timeout": 0, 00:18:54.947 "ctrlr_loss_timeout_sec": 0, 00:18:54.947 "reconnect_delay_sec": 0, 00:18:54.947 "fast_io_fail_timeout_sec": 0, 00:18:54.948 "disable_auto_failback": false, 00:18:54.948 "generate_uuids": false, 00:18:54.948 "transport_tos": 0, 00:18:54.948 "nvme_error_stat": false, 00:18:54.948 "rdma_srq_size": 0, 00:18:54.948 "io_path_stat": false, 00:18:54.948 "allow_accel_sequence": false, 00:18:54.948 "rdma_max_cq_size": 0, 00:18:54.948 "rdma_cm_event_timeout_ms": 0, 00:18:54.948 "dhchap_digests": [ 00:18:54.948 "sha256", 00:18:54.948 "sha384", 00:18:54.948 "sha512" 00:18:54.948 ], 00:18:54.948 "dhchap_dhgroups": [ 00:18:54.948 "null", 00:18:54.948 "ffdhe2048", 00:18:54.948 "ffdhe3072", 00:18:54.948 "ffdhe4096", 00:18:54.948 "ffdhe6144", 00:18:54.948 "ffdhe8192" 00:18:54.948 ] 00:18:54.948 } 00:18:54.948 }, 00:18:54.948 { 00:18:54.948 "method": "bdev_nvme_attach_controller", 00:18:54.948 "params": { 00:18:54.948 "name": "nvme0", 00:18:54.948 "trtype": "TCP", 00:18:54.948 "adrfam": "IPv4", 00:18:54.948 "traddr": "10.0.0.2", 00:18:54.948 "trsvcid": "4420", 00:18:54.948 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:54.948 "prchk_reftag": false, 00:18:54.948 "prchk_guard": false, 00:18:54.948 "ctrlr_loss_timeout_sec": 0, 00:18:54.948 "reconnect_delay_sec": 0, 00:18:54.948 "fast_io_fail_timeout_sec": 0, 00:18:54.948 "psk": "key0", 00:18:54.948 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:54.948 "hdgst": false, 00:18:54.948 "ddgst": false, 00:18:54.948 "multipath": "multipath" 00:18:54.948 } 00:18:54.948 }, 00:18:54.948 { 00:18:54.948 
"method": "bdev_nvme_set_hotplug", 00:18:54.948 "params": { 00:18:54.948 "period_us": 100000, 00:18:54.948 "enable": false 00:18:54.948 } 00:18:54.948 }, 00:18:54.948 { 00:18:54.948 "method": "bdev_enable_histogram", 00:18:54.948 "params": { 00:18:54.948 "name": "nvme0n1", 00:18:54.948 "enable": true 00:18:54.948 } 00:18:54.948 }, 00:18:54.948 { 00:18:54.948 "method": "bdev_wait_for_examine" 00:18:54.948 } 00:18:54.948 ] 00:18:54.948 }, 00:18:54.948 { 00:18:54.948 "subsystem": "nbd", 00:18:54.948 "config": [] 00:18:54.948 } 00:18:54.948 ] 00:18:54.948 }' 00:18:54.948 07:14:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:54.948 07:14:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:54.948 [2024-11-20 07:14:59.437593] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 00:18:54.948 [2024-11-20 07:14:59.437643] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1215107 ] 00:18:55.210 [2024-11-20 07:14:59.512700] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:55.210 [2024-11-20 07:14:59.555161] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:55.210 [2024-11-20 07:14:59.710041] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:55.779 07:15:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:55.779 07:15:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:18:55.779 07:15:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:55.779 07:15:00 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:18:56.038 07:15:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:56.038 07:15:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:56.297 Running I/O for 1 seconds... 00:18:57.234 5310.00 IOPS, 20.74 MiB/s 00:18:57.234 Latency(us) 00:18:57.234 [2024-11-20T06:15:01.790Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:57.234 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:57.234 Verification LBA range: start 0x0 length 0x2000 00:18:57.234 nvme0n1 : 1.01 5367.44 20.97 0.00 0.00 23682.53 5385.35 22909.11 00:18:57.234 [2024-11-20T06:15:01.790Z] =================================================================================================================== 00:18:57.234 [2024-11-20T06:15:01.790Z] Total : 5367.44 20.97 0.00 0.00 23682.53 5385.35 22909.11 00:18:57.234 { 00:18:57.234 "results": [ 00:18:57.234 { 00:18:57.234 "job": "nvme0n1", 00:18:57.234 "core_mask": "0x2", 00:18:57.234 "workload": "verify", 00:18:57.234 "status": "finished", 00:18:57.234 "verify_range": { 00:18:57.234 "start": 0, 00:18:57.234 "length": 8192 00:18:57.234 }, 00:18:57.234 "queue_depth": 128, 00:18:57.234 "io_size": 4096, 00:18:57.234 "runtime": 1.013146, 00:18:57.234 "iops": 5367.43963851212, 00:18:57.234 "mibps": 20.96656108793797, 00:18:57.234 "io_failed": 0, 00:18:57.234 "io_timeout": 0, 00:18:57.234 "avg_latency_us": 23682.534005788573, 00:18:57.234 "min_latency_us": 5385.3495652173915, 00:18:57.234 "max_latency_us": 22909.106086956523 00:18:57.234 } 00:18:57.234 ], 00:18:57.234 "core_count": 1 00:18:57.234 } 00:18:57.234 07:15:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:18:57.234 07:15:01 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:18:57.234 07:15:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:18:57.234 07:15:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@810 -- # type=--id 00:18:57.234 07:15:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@811 -- # id=0 00:18:57.234 07:15:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # '[' --id = --pid ']' 00:18:57.234 07:15:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@816 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:18:57.234 07:15:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@816 -- # shm_files=nvmf_trace.0 00:18:57.234 07:15:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # [[ -z nvmf_trace.0 ]] 00:18:57.234 07:15:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@822 -- # for n in $shm_files 00:18:57.234 07:15:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@823 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:18:57.234 nvmf_trace.0 00:18:57.234 07:15:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # return 0 00:18:57.234 07:15:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 1215107 00:18:57.234 07:15:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 1215107 ']' 00:18:57.234 07:15:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 1215107 00:18:57.234 07:15:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:18:57.234 07:15:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:57.234 07:15:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o 
comm= 1215107 00:18:57.234 07:15:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:18:57.234 07:15:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:18:57.234 07:15:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1215107' 00:18:57.234 killing process with pid 1215107 00:18:57.234 07:15:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 1215107 00:18:57.234 Received shutdown signal, test time was about 1.000000 seconds 00:18:57.234 00:18:57.234 Latency(us) 00:18:57.234 [2024-11-20T06:15:01.790Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:57.234 [2024-11-20T06:15:01.790Z] =================================================================================================================== 00:18:57.234 [2024-11-20T06:15:01.790Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:57.234 07:15:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 1215107 00:18:57.493 07:15:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:18:57.493 07:15:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:57.493 07:15:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:18:57.493 07:15:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:57.493 07:15:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:18:57.493 07:15:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:57.493 07:15:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:57.493 rmmod nvme_tcp 00:18:57.493 rmmod nvme_fabrics 00:18:57.493 rmmod nvme_keyring 00:18:57.493 07:15:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r 
nvme-fabrics 00:18:57.493 07:15:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:18:57.493 07:15:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:18:57.493 07:15:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 1214925 ']' 00:18:57.493 07:15:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 1214925 00:18:57.493 07:15:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 1214925 ']' 00:18:57.493 07:15:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 1214925 00:18:57.493 07:15:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:18:57.493 07:15:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:57.493 07:15:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1214925 00:18:57.493 07:15:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:18:57.493 07:15:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:18:57.493 07:15:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1214925' 00:18:57.493 killing process with pid 1214925 00:18:57.493 07:15:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 1214925 00:18:57.493 07:15:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 1214925 00:18:57.751 07:15:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:57.751 07:15:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:57.751 07:15:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:57.751 07:15:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@297 -- # iptr 00:18:57.751 07:15:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:18:57.751 07:15:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:57.751 07:15:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:18:57.751 07:15:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:57.751 07:15:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:57.751 07:15:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:57.751 07:15:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:57.751 07:15:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:00.288 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:00.288 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.bG9J9usPLg /tmp/tmp.watr5N2hsK /tmp/tmp.lWWn6jfH0s 00:19:00.288 00:19:00.288 real 1m19.866s 00:19:00.288 user 2m2.703s 00:19:00.288 sys 0m30.388s 00:19:00.288 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1128 -- # xtrace_disable 00:19:00.288 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:00.288 ************************************ 00:19:00.288 END TEST nvmf_tls 00:19:00.288 ************************************ 00:19:00.288 07:15:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:19:00.288 07:15:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:19:00.288 07:15:04 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1109 -- # xtrace_disable 00:19:00.288 07:15:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:00.288 ************************************ 00:19:00.288 START TEST nvmf_fips 00:19:00.288 ************************************ 00:19:00.288 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:19:00.288 * Looking for test storage... 00:19:00.288 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:19:00.288 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:19:00.288 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1691 -- # lcov --version 00:19:00.288 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:19:00.288 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:19:00.288 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:00.288 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:00.288 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:00.288 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:19:00.288 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:19:00.288 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:19:00.288 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:19:00.288 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:19:00.288 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:19:00.288 
07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:19:00.288 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:00.288 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:19:00.288 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:19:00.288 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:00.288 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:00.288 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:19:00.288 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:19:00.288 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:00.288 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:19:00.288 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:19:00.288 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:19:00.288 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:19:00.288 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:00.288 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:19:00.288 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:19:00.288 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:00.288 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:00.288 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:19:00.288 07:15:04 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:00.288 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:19:00.288 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:00.288 --rc genhtml_branch_coverage=1 00:19:00.288 --rc genhtml_function_coverage=1 00:19:00.288 --rc genhtml_legend=1 00:19:00.288 --rc geninfo_all_blocks=1 00:19:00.288 --rc geninfo_unexecuted_blocks=1 00:19:00.288 00:19:00.288 ' 00:19:00.288 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:19:00.288 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:00.288 --rc genhtml_branch_coverage=1 00:19:00.288 --rc genhtml_function_coverage=1 00:19:00.288 --rc genhtml_legend=1 00:19:00.288 --rc geninfo_all_blocks=1 00:19:00.288 --rc geninfo_unexecuted_blocks=1 00:19:00.288 00:19:00.288 ' 00:19:00.288 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:19:00.288 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:00.288 --rc genhtml_branch_coverage=1 00:19:00.288 --rc genhtml_function_coverage=1 00:19:00.288 --rc genhtml_legend=1 00:19:00.288 --rc geninfo_all_blocks=1 00:19:00.288 --rc geninfo_unexecuted_blocks=1 00:19:00.288 00:19:00.288 ' 00:19:00.288 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:19:00.288 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:00.288 --rc genhtml_branch_coverage=1 00:19:00.288 --rc genhtml_function_coverage=1 00:19:00.288 --rc genhtml_legend=1 00:19:00.288 --rc geninfo_all_blocks=1 00:19:00.288 --rc geninfo_unexecuted_blocks=1 00:19:00.288 00:19:00.288 ' 00:19:00.288 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
00:19:00.288 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:19:00.288 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:00.288 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:00.288 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:00.288 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:00.288 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:00.288 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:00.288 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:00.288 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:00.288 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:00.288 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:00.288 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:00.288 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:19:00.288 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:00.288 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:00.288 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:00.288 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:00.288 07:15:04 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:00.288 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:19:00.289 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:00.289 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:00.289 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:00.289 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:00.289 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:00.289 07:15:04 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:00.289 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:19:00.289 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:00.289 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:19:00.289 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:00.289 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:00.289 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:00.289 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:19:00.289 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:00.289 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:00.289 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:00.289 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:00.289 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:00.289 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:00.289 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:00.289 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:19:00.289 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:19:00.289 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:19:00.289 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:19:00.289 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:19:00.289 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:19:00.289 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:00.289 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:00.289 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:19:00.289 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:19:00.289 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:19:00.289 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
scripts/common.sh@337 -- # read -ra ver2 00:19:00.289 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:19:00.289 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:19:00.289 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:19:00.289 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:00.289 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:19:00.289 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:19:00.289 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:00.289 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:00.289 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:19:00.289 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:19:00.289 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:19:00.289 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:19:00.289 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:19:00.289 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:19:00.289 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:19:00.289 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:19:00.289 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:19:00.289 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:19:00.289 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] 
)) 00:19:00.289 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:00.289 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:19:00.289 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:00.289 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:19:00.289 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:19:00.289 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:00.289 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:19:00.289 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:19:00.289 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:19:00.289 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:19:00.289 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:19:00.289 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:19:00.289 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:19:00.289 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:00.289 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:19:00.289 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:19:00.289 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:19:00.289 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:19:00.289 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:19:00.289 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:19:00.289 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:19:00.289 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:19:00.289 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:19:00.289 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:19:00.289 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:19:00.289 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:19:00.289 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:19:00.289 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:19:00.289 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:19:00.289 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:19:00.289 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:19:00.289 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:19:00.289 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:19:00.289 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:19:00.289 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:19:00.289 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@650 -- # local es=0 00:19:00.289 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:19:00.289 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # valid_exec_arg openssl md5 /dev/fd/62 00:19:00.289 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@638 -- # local arg=openssl 00:19:00.289 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:00.289 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # type -t openssl 00:19:00.289 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:00.289 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@644 -- # type -P openssl 00:19:00.289 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:00.289 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # arg=/usr/bin/openssl 00:19:00.290 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # [[ -x /usr/bin/openssl ]] 00:19:00.290 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # openssl md5 /dev/fd/62 00:19:00.290 Error setting digest 00:19:00.290 405246BE6C7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:19:00.290 405246BE6C7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:19:00.290 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # es=1 00:19:00.290 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:00.290 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:00.290 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:00.290 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:19:00.290 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:00.290 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:00.290 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:00.290 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:00.290 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:00.290 07:15:04 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:00.290 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:00.290 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:00.290 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:00.290 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:00.290 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:19:00.290 07:15:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:06.861 07:15:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:06.861 07:15:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:19:06.861 07:15:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:06.861 07:15:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:06.861 07:15:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:06.861 07:15:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:06.861 07:15:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:06.861 07:15:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:19:06.862 07:15:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:06.862 07:15:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:19:06.862 07:15:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:19:06.862 07:15:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@321 -- # x722=() 00:19:06.862 07:15:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:19:06.862 07:15:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:19:06.862 07:15:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:19:06.862 07:15:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:06.862 07:15:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:06.862 07:15:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:06.862 07:15:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:06.862 07:15:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:06.862 07:15:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:06.862 07:15:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:06.862 07:15:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:06.862 07:15:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:06.862 07:15:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:06.862 07:15:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:06.862 07:15:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:06.862 07:15:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 
00:19:06.862 07:15:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:06.862 07:15:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:06.862 07:15:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:06.862 07:15:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:06.862 07:15:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:06.862 07:15:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:06.862 07:15:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:19:06.862 Found 0000:86:00.0 (0x8086 - 0x159b) 00:19:06.862 07:15:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:06.862 07:15:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:06.862 07:15:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:06.862 07:15:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:06.862 07:15:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:06.862 07:15:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:06.862 07:15:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:19:06.862 Found 0000:86:00.1 (0x8086 - 0x159b) 00:19:06.862 07:15:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:06.862 07:15:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:06.862 07:15:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:19:06.862 07:15:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:06.862 07:15:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:06.862 07:15:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:06.862 07:15:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:06.862 07:15:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:06.862 07:15:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:06.862 07:15:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:06.862 07:15:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:06.862 07:15:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:06.862 07:15:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:06.862 07:15:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:06.862 07:15:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:06.862 07:15:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:19:06.862 Found net devices under 0000:86:00.0: cvl_0_0 00:19:06.862 07:15:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:06.862 07:15:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:06.862 07:15:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:06.862 07:15:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:19:06.862 07:15:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:06.862 07:15:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:06.862 07:15:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:06.862 07:15:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:06.862 07:15:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:19:06.862 Found net devices under 0000:86:00.1: cvl_0_1 00:19:06.862 07:15:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:06.862 07:15:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:06.862 07:15:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # is_hw=yes 00:19:06.862 07:15:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:06.862 07:15:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:06.862 07:15:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:06.862 07:15:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:06.862 07:15:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:06.862 07:15:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:06.862 07:15:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:06.862 07:15:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:06.862 07:15:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:06.862 07:15:10 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:06.862 07:15:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:06.862 07:15:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:06.862 07:15:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:06.862 07:15:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:06.862 07:15:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:06.862 07:15:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:06.862 07:15:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:06.862 07:15:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:06.862 07:15:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:06.862 07:15:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:06.862 07:15:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:06.862 07:15:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:06.862 07:15:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:06.862 07:15:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:06.862 07:15:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 
-m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:06.862 07:15:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:06.862 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:06.862 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.331 ms 00:19:06.862 00:19:06.862 --- 10.0.0.2 ping statistics --- 00:19:06.862 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:06.862 rtt min/avg/max/mdev = 0.331/0.331/0.331/0.000 ms 00:19:06.862 07:15:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:06.862 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:06.862 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.127 ms 00:19:06.862 00:19:06.862 --- 10.0.0.1 ping statistics --- 00:19:06.862 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:06.862 rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms 00:19:06.862 07:15:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:06.862 07:15:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # return 0 00:19:06.862 07:15:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:06.862 07:15:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:06.862 07:15:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:06.862 07:15:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:06.862 07:15:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:06.862 07:15:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:06.862 07:15:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:06.862 07:15:10 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:19:06.863 07:15:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:06.863 07:15:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:06.863 07:15:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:06.863 07:15:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=1219568 00:19:06.863 07:15:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 1219568 00:19:06.863 07:15:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:06.863 07:15:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@833 -- # '[' -z 1219568 ']' 00:19:06.863 07:15:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:06.863 07:15:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:06.863 07:15:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:06.863 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:06.863 07:15:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:06.863 07:15:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:06.863 [2024-11-20 07:15:10.766191] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 
00:19:06.863 [2024-11-20 07:15:10.766248] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:06.863 [2024-11-20 07:15:10.844856] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:06.863 [2024-11-20 07:15:10.885617] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:06.863 [2024-11-20 07:15:10.885655] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:06.863 [2024-11-20 07:15:10.885663] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:06.863 [2024-11-20 07:15:10.885669] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:06.863 [2024-11-20 07:15:10.885674] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:06.863 [2024-11-20 07:15:10.886260] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:07.122 07:15:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:07.122 07:15:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@866 -- # return 0 00:19:07.122 07:15:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:07.122 07:15:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:07.122 07:15:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:07.122 07:15:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:07.122 07:15:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:19:07.122 07:15:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:19:07.122 07:15:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:19:07.122 07:15:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.NQs 00:19:07.122 07:15:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:19:07.122 07:15:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.NQs 00:19:07.122 07:15:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.NQs 00:19:07.122 07:15:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.NQs 00:19:07.122 07:15:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:07.382 [2024-11-20 07:15:11.804653] tcp.c: 
738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:07.382 [2024-11-20 07:15:11.820663] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:07.382 [2024-11-20 07:15:11.820830] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:07.382 malloc0 00:19:07.382 07:15:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:07.382 07:15:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=1219745 00:19:07.382 07:15:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 1219745 /var/tmp/bdevperf.sock 00:19:07.382 07:15:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:07.382 07:15:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@833 -- # '[' -z 1219745 ']' 00:19:07.382 07:15:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:07.382 07:15:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:07.382 07:15:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:07.382 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:07.382 07:15:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:07.382 07:15:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:07.643 [2024-11-20 07:15:11.951890] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 
00:19:07.643 [2024-11-20 07:15:11.951940] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1219745 ] 00:19:07.643 [2024-11-20 07:15:12.028016] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:07.643 [2024-11-20 07:15:12.070332] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:08.580 07:15:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:08.580 07:15:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@866 -- # return 0 00:19:08.580 07:15:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.NQs 00:19:08.580 07:15:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:08.839 [2024-11-20 07:15:13.141630] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:08.839 TLSTESTn1 00:19:08.839 07:15:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:08.839 Running I/O for 10 seconds... 
00:19:10.788 5302.00 IOPS, 20.71 MiB/s [2024-11-20T06:15:16.719Z] 5440.00 IOPS, 21.25 MiB/s [2024-11-20T06:15:17.655Z] 5466.00 IOPS, 21.35 MiB/s [2024-11-20T06:15:18.592Z] 5512.50 IOPS, 21.53 MiB/s [2024-11-20T06:15:19.528Z] 5518.20 IOPS, 21.56 MiB/s [2024-11-20T06:15:20.474Z] 5508.83 IOPS, 21.52 MiB/s [2024-11-20T06:15:21.417Z] 5508.00 IOPS, 21.52 MiB/s [2024-11-20T06:15:22.794Z] 5497.50 IOPS, 21.47 MiB/s [2024-11-20T06:15:23.362Z] 5506.78 IOPS, 21.51 MiB/s [2024-11-20T06:15:23.622Z] 5508.50 IOPS, 21.52 MiB/s 00:19:19.066 Latency(us) 00:19:19.066 [2024-11-20T06:15:23.622Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:19.066 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:19.066 Verification LBA range: start 0x0 length 0x2000 00:19:19.066 TLSTESTn1 : 10.02 5512.81 21.53 0.00 0.00 23183.46 5299.87 24048.86 00:19:19.066 [2024-11-20T06:15:23.622Z] =================================================================================================================== 00:19:19.066 [2024-11-20T06:15:23.622Z] Total : 5512.81 21.53 0.00 0.00 23183.46 5299.87 24048.86 00:19:19.066 { 00:19:19.066 "results": [ 00:19:19.066 { 00:19:19.066 "job": "TLSTESTn1", 00:19:19.066 "core_mask": "0x4", 00:19:19.066 "workload": "verify", 00:19:19.066 "status": "finished", 00:19:19.066 "verify_range": { 00:19:19.066 "start": 0, 00:19:19.066 "length": 8192 00:19:19.067 }, 00:19:19.067 "queue_depth": 128, 00:19:19.067 "io_size": 4096, 00:19:19.067 "runtime": 10.01504, 00:19:19.067 "iops": 5512.808735661565, 00:19:19.067 "mibps": 21.53440912367799, 00:19:19.067 "io_failed": 0, 00:19:19.067 "io_timeout": 0, 00:19:19.067 "avg_latency_us": 23183.456315526284, 00:19:19.067 "min_latency_us": 5299.8678260869565, 00:19:19.067 "max_latency_us": 24048.862608695654 00:19:19.067 } 00:19:19.067 ], 00:19:19.067 "core_count": 1 00:19:19.067 } 00:19:19.067 07:15:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:19:19.067 
07:15:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:19:19.067 07:15:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@810 -- # type=--id 00:19:19.067 07:15:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@811 -- # id=0 00:19:19.067 07:15:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # '[' --id = --pid ']' 00:19:19.067 07:15:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@816 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:19:19.067 07:15:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@816 -- # shm_files=nvmf_trace.0 00:19:19.067 07:15:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # [[ -z nvmf_trace.0 ]] 00:19:19.067 07:15:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@822 -- # for n in $shm_files 00:19:19.067 07:15:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@823 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:19:19.067 nvmf_trace.0 00:19:19.067 07:15:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # return 0 00:19:19.067 07:15:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 1219745 00:19:19.067 07:15:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@952 -- # '[' -z 1219745 ']' 00:19:19.067 07:15:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # kill -0 1219745 00:19:19.067 07:15:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@957 -- # uname 00:19:19.067 07:15:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:19.067 07:15:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1219745 00:19:19.067 07:15:23 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:19:19.067 07:15:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:19:19.067 07:15:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1219745' 00:19:19.067 killing process with pid 1219745 00:19:19.067 07:15:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@971 -- # kill 1219745 00:19:19.067 Received shutdown signal, test time was about 10.000000 seconds 00:19:19.067 00:19:19.067 Latency(us) 00:19:19.067 [2024-11-20T06:15:23.623Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:19.067 [2024-11-20T06:15:23.623Z] =================================================================================================================== 00:19:19.067 [2024-11-20T06:15:23.623Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:19.067 07:15:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@976 -- # wait 1219745 00:19:19.327 07:15:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:19:19.327 07:15:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:19.327 07:15:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:19:19.327 07:15:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:19.327 07:15:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:19:19.327 07:15:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:19.327 07:15:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:19.327 rmmod nvme_tcp 00:19:19.327 rmmod nvme_fabrics 00:19:19.327 rmmod nvme_keyring 00:19:19.327 07:15:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
00:19:19.327 07:15:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:19:19.327 07:15:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:19:19.327 07:15:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 1219568 ']' 00:19:19.327 07:15:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 1219568 00:19:19.327 07:15:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@952 -- # '[' -z 1219568 ']' 00:19:19.327 07:15:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # kill -0 1219568 00:19:19.327 07:15:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@957 -- # uname 00:19:19.327 07:15:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:19.327 07:15:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1219568 00:19:19.327 07:15:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:19:19.327 07:15:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:19:19.327 07:15:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1219568' 00:19:19.327 killing process with pid 1219568 00:19:19.327 07:15:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@971 -- # kill 1219568 00:19:19.327 07:15:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@976 -- # wait 1219568 00:19:19.586 07:15:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:19.586 07:15:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:19.586 07:15:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:19.587 07:15:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@297 -- # iptr 00:19:19.587 07:15:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:19:19.587 07:15:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:19.587 07:15:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:19:19.587 07:15:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:19.587 07:15:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:19.587 07:15:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:19.587 07:15:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:19.587 07:15:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:22.128 07:15:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:22.128 07:15:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.NQs 00:19:22.128 00:19:22.128 real 0m21.711s 00:19:22.128 user 0m23.691s 00:19:22.128 sys 0m9.470s 00:19:22.128 07:15:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1128 -- # xtrace_disable 00:19:22.128 07:15:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:22.128 ************************************ 00:19:22.128 END TEST nvmf_fips 00:19:22.128 ************************************ 00:19:22.128 07:15:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:19:22.128 07:15:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:19:22.128 07:15:26 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1109 -- # xtrace_disable 00:19:22.128 07:15:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:22.128 ************************************ 00:19:22.128 START TEST nvmf_control_msg_list 00:19:22.128 ************************************ 00:19:22.128 07:15:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:19:22.128 * Looking for test storage... 00:19:22.128 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:22.128 07:15:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:19:22.128 07:15:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1691 -- # lcov --version 00:19:22.128 07:15:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:19:22.128 07:15:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:19:22.128 07:15:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:22.128 07:15:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:22.128 07:15:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:22.128 07:15:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:19:22.128 07:15:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:19:22.128 07:15:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:19:22.128 07:15:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:19:22.128 07:15:26 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:19:22.128 07:15:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:19:22.128 07:15:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:19:22.128 07:15:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:22.128 07:15:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:19:22.128 07:15:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:19:22.128 07:15:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:22.128 07:15:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:22.128 07:15:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:19:22.128 07:15:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:19:22.128 07:15:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:22.128 07:15:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:19:22.128 07:15:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:19:22.128 07:15:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:19:22.128 07:15:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:19:22.128 07:15:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:22.128 07:15:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:19:22.128 07:15:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- scripts/common.sh@366 -- # ver2[v]=2 00:19:22.128 07:15:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:22.128 07:15:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:22.128 07:15:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:19:22.128 07:15:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:22.128 07:15:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:19:22.128 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:22.128 --rc genhtml_branch_coverage=1 00:19:22.128 --rc genhtml_function_coverage=1 00:19:22.128 --rc genhtml_legend=1 00:19:22.128 --rc geninfo_all_blocks=1 00:19:22.128 --rc geninfo_unexecuted_blocks=1 00:19:22.128 00:19:22.128 ' 00:19:22.128 07:15:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:19:22.128 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:22.128 --rc genhtml_branch_coverage=1 00:19:22.128 --rc genhtml_function_coverage=1 00:19:22.128 --rc genhtml_legend=1 00:19:22.128 --rc geninfo_all_blocks=1 00:19:22.128 --rc geninfo_unexecuted_blocks=1 00:19:22.128 00:19:22.128 ' 00:19:22.128 07:15:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:19:22.128 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:22.128 --rc genhtml_branch_coverage=1 00:19:22.128 --rc genhtml_function_coverage=1 00:19:22.128 --rc genhtml_legend=1 00:19:22.128 --rc geninfo_all_blocks=1 00:19:22.128 --rc geninfo_unexecuted_blocks=1 00:19:22.128 00:19:22.128 ' 00:19:22.128 07:15:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1705 -- # 
LCOV='lcov 00:19:22.128 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:22.128 --rc genhtml_branch_coverage=1 00:19:22.128 --rc genhtml_function_coverage=1 00:19:22.128 --rc genhtml_legend=1 00:19:22.128 --rc geninfo_all_blocks=1 00:19:22.128 --rc geninfo_unexecuted_blocks=1 00:19:22.128 00:19:22.128 ' 00:19:22.128 07:15:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:22.128 07:15:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:19:22.128 07:15:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:22.128 07:15:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:22.128 07:15:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:22.128 07:15:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:22.128 07:15:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:22.128 07:15:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:22.128 07:15:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:22.128 07:15:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:22.128 07:15:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:22.128 07:15:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:22.128 07:15:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 
00:19:22.128 07:15:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:19:22.128 07:15:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:22.128 07:15:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:22.128 07:15:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:22.128 07:15:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:22.128 07:15:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:22.128 07:15:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:19:22.128 07:15:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:22.128 07:15:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:22.129 07:15:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:22.129 07:15:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:22.129 07:15:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:22.129 07:15:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:22.129 07:15:26 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:19:22.129 07:15:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:22.129 07:15:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:19:22.129 07:15:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:22.129 07:15:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:22.129 07:15:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:22.129 07:15:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:22.129 07:15:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:22.129 07:15:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:22.129 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:22.129 07:15:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:22.129 07:15:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:22.129 07:15:26 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:22.129 07:15:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:19:22.129 07:15:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:22.129 07:15:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:22.129 07:15:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:22.129 07:15:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:22.129 07:15:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:22.129 07:15:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:22.129 07:15:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:22.129 07:15:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:22.129 07:15:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:22.129 07:15:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:22.129 07:15:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:19:22.129 07:15:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:27.458 07:15:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:27.458 07:15:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:19:27.458 07:15:31 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:27.458 07:15:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:27.458 07:15:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:27.458 07:15:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:27.458 07:15:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:27.458 07:15:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:19:27.458 07:15:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:27.458 07:15:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:19:27.458 07:15:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:19:27.458 07:15:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:19:27.458 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:19:27.458 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:19:27.458 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:19:27.458 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:27.458 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:27.458 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:27.458 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:27.458 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:27.458 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:27.458 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:27.458 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:27.458 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:27.458 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:27.458 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:27.458 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:27.458 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:27.458 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:27.458 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:27.459 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:27.459 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:27.459 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:27.459 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:19:27.459 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:19:27.459 Found 0000:86:00.0 (0x8086 - 0x159b) 00:19:27.459 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:27.459 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:27.459 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:27.459 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:27.459 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:27.718 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:27.718 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:19:27.718 Found 0000:86:00.1 (0x8086 - 0x159b) 00:19:27.718 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:27.718 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:27.718 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:27.718 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:27.718 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:27.718 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:27.718 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:27.718 07:15:32 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:27.718 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:27.718 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:27.718 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:27.718 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:27.718 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:27.718 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:27.718 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:27.718 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:19:27.718 Found net devices under 0000:86:00.0: cvl_0_0 00:19:27.718 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:27.718 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:27.718 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:27.718 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:27.718 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:27.718 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:27.718 07:15:32 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:27.718 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:27.718 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:19:27.718 Found net devices under 0000:86:00.1: cvl_0_1 00:19:27.718 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:27.718 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:27.718 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # is_hw=yes 00:19:27.718 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:27.718 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:27.718 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:27.718 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:27.718 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:27.718 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:27.718 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:27.718 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:27.718 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:27.718 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:27.718 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:27.718 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:27.718 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:27.718 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:27.718 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:27.718 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:27.718 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:27.718 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:27.718 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:27.718 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:27.718 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:27.718 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:27.718 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:27.719 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:27.719 07:15:32 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:27.719 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:27.719 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:27.719 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.455 ms 00:19:27.719 00:19:27.719 --- 10.0.0.2 ping statistics --- 00:19:27.719 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:27.719 rtt min/avg/max/mdev = 0.455/0.455/0.455/0.000 ms 00:19:27.719 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:27.719 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:27.719 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.189 ms 00:19:27.719 00:19:27.719 --- 10.0.0.1 ping statistics --- 00:19:27.719 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:27.719 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:19:27.719 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:27.719 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@450 -- # return 0 00:19:27.719 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:27.719 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:27.719 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:27.719 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:27.719 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t 
tcp -o' 00:19:27.719 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:27.719 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:27.978 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:19:27.978 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:27.978 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:27.978 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:27.978 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=1225329 00:19:27.978 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 1225329 00:19:27.978 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:19:27.978 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@833 -- # '[' -z 1225329 ']' 00:19:27.978 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:27.978 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:27.978 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:27.978 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
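The xtrace above (nvmf/common.sh, `nvmf_tcp_init`) moves one port of the e810 NIC into a private network namespace so the target and initiator can exchange real TCP traffic on a single host. A dry-run sketch of that sequence, using the interface and namespace names from this run (actually executing these commands requires root, so this version only prints them):

```shell
#!/usr/bin/env bash
# Dry-run sketch of the namespace setup nvmf_tcp_init performs above.
# Interface names, addresses, and the namespace name are taken from this
# run's log; printing instead of executing avoids needing root.
setup_cmds() {
  local target_if=$1 initiator_if=$2 ns=$3
  cat <<EOF
ip netns add $ns
ip link set $target_if netns $ns
ip addr add 10.0.0.1/24 dev $initiator_if
ip netns exec $ns ip addr add 10.0.0.2/24 dev $target_if
ip link set $initiator_if up
ip netns exec $ns ip link set $target_if up
ip netns exec $ns ip link set lo up
EOF
}

setup_cmds cvl_0_0 cvl_0_1 cvl_0_0_ns_spdk
```

Once the target port lives in its own namespace, its traffic leaves the host's default stack entirely, which is why the log then runs `nvmf_tgt` under `ip netns exec cvl_0_0_ns_spdk` and opens TCP port 4420 on the initiator side with an iptables rule before the ping checks.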
00:19:27.978 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:27.978 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:27.978 [2024-11-20 07:15:32.343519] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 00:19:27.978 [2024-11-20 07:15:32.343570] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:27.978 [2024-11-20 07:15:32.426716] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:27.978 [2024-11-20 07:15:32.468236] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:27.978 [2024-11-20 07:15:32.468268] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:27.978 [2024-11-20 07:15:32.468280] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:27.978 [2024-11-20 07:15:32.468286] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:27.978 [2024-11-20 07:15:32.468291] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:27.978 [2024-11-20 07:15:32.468844] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:28.237 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:28.237 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@866 -- # return 0 00:19:28.237 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:28.237 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:28.237 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:28.237 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:28.237 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:19:28.237 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:19:28.237 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:19:28.237 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:28.237 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:28.237 [2024-11-20 07:15:32.618050] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:28.237 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:28.237 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # 
rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:19:28.237 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:28.237 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:28.237 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:28.237 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:19:28.237 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:28.237 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:28.237 Malloc0 00:19:28.237 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:28.237 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:19:28.237 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:28.237 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:28.237 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:28.237 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:28.237 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:28.237 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:28.237 [2024-11-20 07:15:32.658409] tcp.c:1081:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:28.237 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:28.237 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=1225352 00:19:28.237 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:19:28.237 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=1225353 00:19:28.237 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:19:28.237 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=1225354 00:19:28.237 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 1225352 00:19:28.237 07:15:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:19:28.237 [2024-11-20 07:15:32.736842] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:19:28.237 [2024-11-20 07:15:32.746916] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:19:28.237 [2024-11-20 07:15:32.747090] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:19:29.614 Initializing NVMe Controllers 00:19:29.614 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:19:29.614 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:19:29.614 Initialization complete. Launching workers. 00:19:29.614 ======================================================== 00:19:29.614 Latency(us) 00:19:29.614 Device Information : IOPS MiB/s Average min max 00:19:29.614 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 25.00 0.10 40943.31 40726.01 41900.77 00:19:29.614 ======================================================== 00:19:29.614 Total : 25.00 0.10 40943.31 40726.01 41900.77 00:19:29.614 00:19:29.614 07:15:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 1225353 00:19:29.614 Initializing NVMe Controllers 00:19:29.614 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:19:29.614 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:19:29.614 Initialization complete. Launching workers. 
00:19:29.614 ======================================================== 00:19:29.614 Latency(us) 00:19:29.614 Device Information : IOPS MiB/s Average min max 00:19:29.614 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 6812.00 26.61 146.45 130.29 364.66 00:19:29.614 ======================================================== 00:19:29.614 Total : 6812.00 26.61 146.45 130.29 364.66 00:19:29.614 00:19:29.614 [2024-11-20 07:15:33.850729] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfe9d0 is same with the state(6) to be set 00:19:29.614 Initializing NVMe Controllers 00:19:29.614 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:19:29.614 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:19:29.614 Initialization complete. Launching workers. 00:19:29.614 ======================================================== 00:19:29.614 Latency(us) 00:19:29.614 Device Information : IOPS MiB/s Average min max 00:19:29.614 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 289.00 1.13 3457.74 160.73 41158.63 00:19:29.614 ======================================================== 00:19:29.614 Total : 289.00 1.13 3457.74 160.73 41158.63 00:19:29.614 00:19:29.614 [2024-11-20 07:15:33.890833] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfe2d0 is same with the state(6) to be set 00:19:29.614 07:15:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 1225354 00:19:29.614 07:15:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:19:29.614 07:15:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:19:29.614 07:15:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:29.614 07:15:33 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:19:29.614 07:15:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:29.614 07:15:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:19:29.614 07:15:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:29.614 07:15:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:29.614 rmmod nvme_tcp 00:19:29.614 rmmod nvme_fabrics 00:19:29.614 rmmod nvme_keyring 00:19:29.614 07:15:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:29.614 07:15:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:19:29.614 07:15:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:19:29.614 07:15:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # '[' -n 1225329 ']' 00:19:29.614 07:15:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 1225329 00:19:29.615 07:15:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@952 -- # '[' -z 1225329 ']' 00:19:29.615 07:15:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@956 -- # kill -0 1225329 00:19:29.615 07:15:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@957 -- # uname 00:19:29.615 07:15:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:29.615 07:15:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1225329 00:19:29.615 07:15:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:19:29.615 07:15:34 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:19:29.615 07:15:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1225329' 00:19:29.615 killing process with pid 1225329 00:19:29.615 07:15:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@971 -- # kill 1225329 00:19:29.615 07:15:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@976 -- # wait 1225329 00:19:29.874 07:15:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:29.874 07:15:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:29.874 07:15:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:29.874 07:15:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:19:29.874 07:15:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:19:29.874 07:15:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:29.874 07:15:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:19:29.874 07:15:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:29.874 07:15:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:29.874 07:15:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:29.874 07:15:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:29.874 07:15:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- 
# _remove_spdk_ns 00:19:31.781 07:15:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:31.781 00:19:31.781 real 0m10.121s 00:19:31.781 user 0m6.621s 00:19:31.781 sys 0m5.433s 00:19:31.781 07:15:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1128 -- # xtrace_disable 00:19:31.781 07:15:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:31.781 ************************************ 00:19:31.781 END TEST nvmf_control_msg_list 00:19:31.781 ************************************ 00:19:31.781 07:15:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:19:31.781 07:15:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:19:31.781 07:15:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:19:31.781 07:15:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:31.781 ************************************ 00:19:31.781 START TEST nvmf_wait_for_buf 00:19:31.781 ************************************ 00:19:32.041 07:15:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:19:32.041 * Looking for test storage... 
00:19:32.041 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:32.041 07:15:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:19:32.041 07:15:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1691 -- # lcov --version 00:19:32.041 07:15:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:19:32.041 07:15:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:19:32.041 07:15:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:32.041 07:15:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:32.041 07:15:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:32.041 07:15:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:19:32.041 07:15:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:19:32.041 07:15:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:19:32.041 07:15:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:19:32.041 07:15:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:19:32.041 07:15:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:19:32.041 07:15:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:19:32.041 07:15:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:32.041 07:15:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:19:32.041 07:15:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
scripts/common.sh@345 -- # : 1 00:19:32.041 07:15:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:32.041 07:15:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:32.041 07:15:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:19:32.041 07:15:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:19:32.041 07:15:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:32.041 07:15:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:19:32.041 07:15:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:19:32.041 07:15:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:19:32.041 07:15:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:19:32.041 07:15:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:32.041 07:15:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:19:32.041 07:15:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:19:32.041 07:15:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:32.041 07:15:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:32.041 07:15:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:19:32.041 07:15:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:32.041 07:15:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1704 -- # 
export 'LCOV_OPTS= 00:19:32.041 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:32.041 --rc genhtml_branch_coverage=1 00:19:32.041 --rc genhtml_function_coverage=1 00:19:32.041 --rc genhtml_legend=1 00:19:32.041 --rc geninfo_all_blocks=1 00:19:32.041 --rc geninfo_unexecuted_blocks=1 00:19:32.041 00:19:32.041 ' 00:19:32.041 07:15:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:19:32.041 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:32.041 --rc genhtml_branch_coverage=1 00:19:32.041 --rc genhtml_function_coverage=1 00:19:32.041 --rc genhtml_legend=1 00:19:32.041 --rc geninfo_all_blocks=1 00:19:32.041 --rc geninfo_unexecuted_blocks=1 00:19:32.041 00:19:32.041 ' 00:19:32.041 07:15:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:19:32.041 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:32.041 --rc genhtml_branch_coverage=1 00:19:32.041 --rc genhtml_function_coverage=1 00:19:32.041 --rc genhtml_legend=1 00:19:32.041 --rc geninfo_all_blocks=1 00:19:32.041 --rc geninfo_unexecuted_blocks=1 00:19:32.041 00:19:32.041 ' 00:19:32.041 07:15:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:19:32.041 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:32.041 --rc genhtml_branch_coverage=1 00:19:32.041 --rc genhtml_function_coverage=1 00:19:32.041 --rc genhtml_legend=1 00:19:32.041 --rc geninfo_all_blocks=1 00:19:32.041 --rc geninfo_unexecuted_blocks=1 00:19:32.041 00:19:32.041 ' 00:19:32.041 07:15:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:32.041 07:15:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:19:32.041 07:15:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:19:32.041 07:15:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:32.041 07:15:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:32.041 07:15:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:32.041 07:15:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:32.041 07:15:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:32.041 07:15:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:32.042 07:15:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:32.042 07:15:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:32.042 07:15:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:32.042 07:15:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:32.042 07:15:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:19:32.042 07:15:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:32.042 07:15:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:32.042 07:15:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:32.042 07:15:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:32.042 07:15:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:32.042 07:15:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:19:32.042 07:15:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:32.042 07:15:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:32.042 07:15:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:32.042 07:15:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:32.042 07:15:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:32.042 07:15:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:32.042 07:15:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:19:32.042 07:15:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:32.042 07:15:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:19:32.042 07:15:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:32.042 07:15:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:32.042 07:15:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:32.042 07:15:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 
0xFFFF) 00:19:32.042 07:15:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:32.042 07:15:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:32.042 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:32.042 07:15:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:32.042 07:15:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:32.042 07:15:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:32.042 07:15:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:19:32.042 07:15:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:32.042 07:15:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:32.042 07:15:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:32.042 07:15:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:32.042 07:15:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:32.042 07:15:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:32.042 07:15:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:32.042 07:15:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:32.042 07:15:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:32.042 07:15:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:19:32.042 07:15:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:19:32.042 07:15:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:38.612 07:15:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:38.612 07:15:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:19:38.612 07:15:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:38.612 07:15:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:38.612 07:15:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:38.612 07:15:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:38.612 07:15:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:38.612 07:15:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:19:38.612 07:15:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:38.612 07:15:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:19:38.612 07:15:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:19:38.612 07:15:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:19:38.612 07:15:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:19:38.612 07:15:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:19:38.612 07:15:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:19:38.612 07:15:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 
-- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:38.612 07:15:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:38.612 07:15:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:38.612 07:15:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:38.612 07:15:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:38.612 07:15:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:38.612 07:15:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:38.612 07:15:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:38.612 07:15:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:38.612 07:15:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:38.612 07:15:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:38.612 07:15:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:38.612 07:15:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:38.612 07:15:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:38.612 07:15:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:38.612 07:15:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # 
[[ e810 == e810 ]] 00:19:38.612 07:15:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:38.612 07:15:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:38.612 07:15:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:38.612 07:15:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:19:38.612 Found 0000:86:00.0 (0x8086 - 0x159b) 00:19:38.612 07:15:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:38.612 07:15:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:38.612 07:15:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:38.612 07:15:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:38.612 07:15:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:38.612 07:15:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:38.612 07:15:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:19:38.612 Found 0000:86:00.1 (0x8086 - 0x159b) 00:19:38.612 07:15:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:38.612 07:15:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:38.612 07:15:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:38.612 07:15:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:38.612 07:15:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:38.612 07:15:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:38.612 07:15:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:38.612 07:15:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:38.612 07:15:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:38.612 07:15:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:38.612 07:15:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:38.612 07:15:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:38.612 07:15:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:38.612 07:15:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:38.612 07:15:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:38.612 07:15:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:19:38.612 Found net devices under 0000:86:00.0: cvl_0_0 00:19:38.612 07:15:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:38.612 07:15:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:38.612 07:15:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:38.612 07:15:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:38.612 07:15:42 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:38.612 07:15:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:38.612 07:15:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:38.612 07:15:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:38.612 07:15:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:19:38.612 Found net devices under 0000:86:00.1: cvl_0_1 00:19:38.612 07:15:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:38.612 07:15:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:38.612 07:15:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # is_hw=yes 00:19:38.612 07:15:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:38.612 07:15:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:38.613 07:15:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:38.613 07:15:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:38.613 07:15:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:38.613 07:15:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:38.613 07:15:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:38.613 07:15:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:38.613 07:15:42 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:38.613 07:15:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:38.613 07:15:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:38.613 07:15:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:38.613 07:15:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:38.613 07:15:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:38.613 07:15:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:38.613 07:15:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:38.613 07:15:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:38.613 07:15:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:38.613 07:15:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:38.613 07:15:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:38.613 07:15:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:38.613 07:15:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:38.613 07:15:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:38.613 07:15:42 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:38.613 07:15:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:38.613 07:15:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:38.613 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:38.613 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.436 ms 00:19:38.613 00:19:38.613 --- 10.0.0.2 ping statistics --- 00:19:38.613 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:38.613 rtt min/avg/max/mdev = 0.436/0.436/0.436/0.000 ms 00:19:38.613 07:15:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:38.613 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:38.613 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.212 ms 00:19:38.613 00:19:38.613 --- 10.0.0.1 ping statistics --- 00:19:38.613 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:38.613 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:19:38.613 07:15:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:38.613 07:15:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@450 -- # return 0 00:19:38.613 07:15:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:38.613 07:15:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:38.613 07:15:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:38.613 07:15:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:38.613 07:15:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:38.613 07:15:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:38.613 07:15:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:38.613 07:15:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:19:38.613 07:15:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:38.613 07:15:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:38.613 07:15:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:38.613 07:15:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=1229111 00:19:38.613 07:15:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@510 -- # waitforlisten 1229111 00:19:38.613 07:15:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:19:38.613 07:15:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@833 -- # '[' -z 1229111 ']' 00:19:38.613 07:15:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:38.613 07:15:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:38.613 07:15:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:38.613 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:38.613 07:15:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:38.613 07:15:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:38.613 [2024-11-20 07:15:42.533575] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 00:19:38.613 [2024-11-20 07:15:42.533619] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:38.613 [2024-11-20 07:15:42.613053] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:38.613 [2024-11-20 07:15:42.654530] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:38.613 [2024-11-20 07:15:42.654567] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
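One genuine script error is recorded earlier in this trace: `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected`, produced by `'[' '' -eq 1 ']'` when an empty variable reaches a numeric test. A minimal, self-contained reproduction (not SPDK's code; the variable name `flag` is hypothetical) together with a defensive rewrite:

```shell
#!/usr/bin/env bash
# Reproduction of the "[: : integer expression expected" message seen
# in the trace, plus a defensive form. "flag" is a hypothetical stand-in
# for the unset option variable.
flag=""

# Naive numeric test: with an empty operand, [ prints the error on
# stderr and exits with status 2 instead of a clean true/false.
[ "$flag" -eq 1 ] 2>/dev/null
naive_rc=$?

# Defensive form: default the empty value to 0 before comparing, so the
# test degrades to an ordinary "false" (exit status 1).
[ "${flag:-0}" -eq 1 ]
defensive_rc=$?

echo "naive_rc=$naive_rc defensive_rc=$defensive_rc"
```

The trace simply continues past the status-2 failure because the result only decides whether optional app arguments get appended; under `set -e` the same comparison would abort the script.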
00:19:38.613 [2024-11-20 07:15:42.654575] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:38.613 [2024-11-20 07:15:42.654582] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:38.613 [2024-11-20 07:15:42.654588] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:38.613 [2024-11-20 07:15:42.655146] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:38.872 07:15:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:38.872 07:15:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@866 -- # return 0 00:19:38.872 07:15:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:38.872 07:15:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:38.872 07:15:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:38.872 07:15:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:38.872 07:15:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:19:38.872 07:15:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:19:38.872 07:15:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:19:38.872 07:15:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:38.872 07:15:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:39.131 
07:15:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:39.131 07:15:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:19:39.131 07:15:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:39.131 07:15:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:39.131 07:15:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:39.131 07:15:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:19:39.131 07:15:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:39.131 07:15:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:39.131 07:15:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:39.131 07:15:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:19:39.131 07:15:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:39.131 07:15:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:39.131 Malloc0 00:19:39.131 07:15:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:39.131 07:15:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:19:39.131 07:15:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:39.131 07:15:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@10 -- # set +x 00:19:39.131 [2024-11-20 07:15:43.524278] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:39.131 07:15:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:39.131 07:15:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:19:39.131 07:15:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:39.131 07:15:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:39.131 07:15:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:39.131 07:15:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:19:39.131 07:15:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:39.131 07:15:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:39.131 07:15:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:39.131 07:15:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:39.131 07:15:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:39.131 07:15:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:39.132 [2024-11-20 07:15:43.552487] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:39.132 07:15:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
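The RPC sequence traced above (malloc bdev, TCP transport, subsystem, namespace, listener) is the standard bring-up for an NVMe-oF TCP target before the perf run. A dry-run sketch that only prints the `rpc.py` invocations rather than executing them, since applying them needs a running `nvmf_tgt` and root privileges; the arguments mirror the trace, while the `scripts/rpc.py` path is an assumption:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the bring-up traced above: RPC="echo ..." prints
# each rpc.py call instead of executing it. Drop the echo (and run a
# real nvmf_tgt as root) to apply for real.
RPC="echo scripts/rpc.py"
NQN=nqn.2024-07.io.spdk:cnode0

cmds=$(
  $RPC bdev_malloc_create -b Malloc0 32 512
  $RPC nvmf_create_transport -t tcp -o -u 8192 -n 24 -b 24
  $RPC nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns "$NQN" Malloc0
  $RPC nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
)
printf '%s\n' "$cmds"
```

The deliberately small `-n 24 -b 24` buffer counts (and the 154-entry small iobuf pool set just before) are what make the subsequent perf workload exhaust the pool, which is the point of the wait_for_buf test.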
00:19:39.132 07:15:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:19:39.132 [2024-11-20 07:15:43.639698] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:19:40.511 Initializing NVMe Controllers 00:19:40.511 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:19:40.511 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:19:40.511 Initialization complete. Launching workers. 00:19:40.511 ======================================================== 00:19:40.511 Latency(us) 00:19:40.511 Device Information : IOPS MiB/s Average min max 00:19:40.511 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 129.00 16.12 32269.95 7282.67 63839.44 00:19:40.511 ======================================================== 00:19:40.511 Total : 129.00 16.12 32269.95 7282.67 63839.44 00:19:40.511 00:19:40.511 07:15:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:19:40.511 07:15:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:19:40.511 07:15:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:40.511 07:15:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:40.511 07:15:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:40.511 07:15:45 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=2038 00:19:40.511 07:15:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 2038 -eq 0 ]] 00:19:40.511 07:15:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:19:40.511 07:15:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:19:40.511 07:15:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:40.511 07:15:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:19:40.511 07:15:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:40.511 07:15:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:19:40.511 07:15:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:40.511 07:15:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:40.511 rmmod nvme_tcp 00:19:40.511 rmmod nvme_fabrics 00:19:40.770 rmmod nvme_keyring 00:19:40.770 07:15:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:40.770 07:15:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:19:40.770 07:15:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:19:40.770 07:15:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 1229111 ']' 00:19:40.770 07:15:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 1229111 00:19:40.770 07:15:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@952 -- # '[' -z 1229111 ']' 00:19:40.770 07:15:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@956 -- # kill -0 1229111 
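The pass condition checked above is that `iobuf_get_stats` reports a nonzero `small_pool.retry` for the `nvmf_TCP` module, i.e. the deliberately tiny buffer pool really did force allocations to wait and retry (2038 times in this run). The trace extracts the counter with `jq`; the following self-contained sketch performs the same extraction over a canned stats document (the retry value is copied from the trace, the other fields are illustrative, and `sed` stands in for `jq` so nothing extra needs installing):

```shell
#!/usr/bin/env bash
# Hypothetical iobuf_get_stats payload; "retry":2038 mirrors the value
# the trace observed, the remaining fields are illustrative only.
stats='[{"module":"nvmf_TCP","small_pool":{"retry":2038},"large_pool":{"retry":0}}]'

# The trace uses:
#   jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry'
# sed approximates that extraction here.
retry=$(printf '%s' "$stats" \
  | sed -n 's/.*"small_pool":{"retry":\([0-9]*\)}.*/\1/p')

if [ "$retry" -eq 0 ]; then
  echo "FAIL: no small-buffer retries, pool never ran dry"
else
  echo "OK: $retry small-buffer retries recorded"
fi
```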
00:19:40.770 07:15:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@957 -- # uname 00:19:40.770 07:15:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:40.770 07:15:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1229111 00:19:40.770 07:15:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:19:40.770 07:15:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:19:40.770 07:15:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1229111' 00:19:40.770 killing process with pid 1229111 00:19:40.770 07:15:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@971 -- # kill 1229111 00:19:40.770 07:15:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@976 -- # wait 1229111 00:19:40.770 07:15:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:40.770 07:15:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:40.770 07:15:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:40.770 07:15:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:19:40.770 07:15:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:19:40.770 07:15:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:40.770 07:15:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:19:40.770 07:15:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:40.770 07:15:45 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:40.770 07:15:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:40.771 07:15:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:40.771 07:15:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:43.305 07:15:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:43.305 00:19:43.305 real 0m11.034s 00:19:43.305 user 0m4.711s 00:19:43.305 sys 0m4.967s 00:19:43.305 07:15:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:19:43.305 07:15:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:43.305 ************************************ 00:19:43.305 END TEST nvmf_wait_for_buf 00:19:43.305 ************************************ 00:19:43.305 07:15:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:19:43.305 07:15:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:19:43.305 07:15:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:19:43.305 07:15:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:19:43.305 07:15:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:19:43.305 07:15:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:48.582 07:15:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:48.582 07:15:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # pci_devs=() 00:19:48.582 07:15:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:48.582 
07:15:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:48.582 07:15:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:48.582 07:15:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:48.582 07:15:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:48.582 07:15:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:19:48.582 07:15:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:48.582 07:15:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:19:48.582 07:15:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:19:48.582 07:15:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:19:48.582 07:15:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:19:48.582 07:15:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:19:48.582 07:15:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:19:48.582 07:15:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:48.582 07:15:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:48.582 07:15:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:48.583 07:15:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:48.583 07:15:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:48.583 07:15:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:48.583 07:15:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:48.583 07:15:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:48.583 07:15:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:48.583 07:15:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:48.583 07:15:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:48.583 07:15:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:48.583 07:15:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:48.583 07:15:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:48.583 07:15:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:48.583 07:15:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:48.583 07:15:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:48.583 07:15:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:48.583 07:15:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:48.583 07:15:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:19:48.583 Found 0000:86:00.0 (0x8086 - 0x159b) 00:19:48.583 07:15:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:48.583 07:15:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:48.583 07:15:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:48.583 07:15:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:48.583 07:15:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:48.583 07:15:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:48.583 07:15:53 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:19:48.583 Found 0000:86:00.1 (0x8086 - 0x159b) 00:19:48.583 07:15:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:48.583 07:15:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:48.583 07:15:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:48.583 07:15:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:48.583 07:15:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:48.583 07:15:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:48.583 07:15:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:48.583 07:15:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:48.583 07:15:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:48.583 07:15:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:48.583 07:15:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:48.583 07:15:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:48.583 07:15:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:48.583 07:15:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:48.583 07:15:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:48.583 07:15:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:19:48.583 Found net devices under 0000:86:00.0: cvl_0_0 00:19:48.583 07:15:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:48.583 07:15:53 nvmf_tcp.nvmf_target_extra 
-- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:48.583 07:15:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:48.583 07:15:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:48.583 07:15:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:48.583 07:15:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:48.583 07:15:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:48.583 07:15:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:48.583 07:15:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:19:48.583 Found net devices under 0000:86:00.1: cvl_0_1 00:19:48.583 07:15:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:48.583 07:15:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:48.583 07:15:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:48.583 07:15:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:19:48.583 07:15:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:19:48.583 07:15:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:19:48.583 07:15:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:19:48.583 07:15:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:48.583 ************************************ 00:19:48.583 START TEST nvmf_perf_adq 00:19:48.583 ************************************ 00:19:48.583 07:15:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:19:48.844 * Looking for test storage... 00:19:48.844 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:48.844 07:15:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:19:48.844 07:15:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1691 -- # lcov --version 00:19:48.844 07:15:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:19:48.844 07:15:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:19:48.844 07:15:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:48.844 07:15:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:48.844 07:15:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:48.844 07:15:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:19:48.844 07:15:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:19:48.844 07:15:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:19:48.844 07:15:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:19:48.844 07:15:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:19:48.844 07:15:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:19:48.844 07:15:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:19:48.844 07:15:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:48.844 07:15:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
scripts/common.sh@344 -- # case "$op" in 00:19:48.844 07:15:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:19:48.844 07:15:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:48.844 07:15:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:48.844 07:15:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:19:48.844 07:15:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:19:48.844 07:15:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:48.844 07:15:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:19:48.844 07:15:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:19:48.844 07:15:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:19:48.844 07:15:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:19:48.844 07:15:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:48.844 07:15:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:19:48.844 07:15:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:19:48.844 07:15:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:48.844 07:15:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:48.844 07:15:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:19:48.844 07:15:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:48.844 07:15:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq 
-- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:19:48.844 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:48.844 --rc genhtml_branch_coverage=1 00:19:48.844 --rc genhtml_function_coverage=1 00:19:48.844 --rc genhtml_legend=1 00:19:48.844 --rc geninfo_all_blocks=1 00:19:48.844 --rc geninfo_unexecuted_blocks=1 00:19:48.844 00:19:48.844 ' 00:19:48.844 07:15:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:19:48.844 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:48.844 --rc genhtml_branch_coverage=1 00:19:48.844 --rc genhtml_function_coverage=1 00:19:48.844 --rc genhtml_legend=1 00:19:48.844 --rc geninfo_all_blocks=1 00:19:48.844 --rc geninfo_unexecuted_blocks=1 00:19:48.844 00:19:48.844 ' 00:19:48.844 07:15:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:19:48.844 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:48.844 --rc genhtml_branch_coverage=1 00:19:48.844 --rc genhtml_function_coverage=1 00:19:48.844 --rc genhtml_legend=1 00:19:48.844 --rc geninfo_all_blocks=1 00:19:48.844 --rc geninfo_unexecuted_blocks=1 00:19:48.844 00:19:48.844 ' 00:19:48.844 07:15:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:19:48.844 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:48.844 --rc genhtml_branch_coverage=1 00:19:48.844 --rc genhtml_function_coverage=1 00:19:48.844 --rc genhtml_legend=1 00:19:48.844 --rc geninfo_all_blocks=1 00:19:48.844 --rc geninfo_unexecuted_blocks=1 00:19:48.844 00:19:48.844 ' 00:19:48.844 07:15:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:48.844 07:15:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:19:48.844 07:15:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ 
Linux == FreeBSD ]] 00:19:48.844 07:15:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:48.844 07:15:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:48.844 07:15:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:48.844 07:15:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:48.844 07:15:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:48.844 07:15:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:48.844 07:15:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:48.844 07:15:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:48.844 07:15:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:48.844 07:15:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:48.845 07:15:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:19:48.845 07:15:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:48.845 07:15:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:48.845 07:15:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:48.845 07:15:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:48.845 07:15:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 
00:19:48.845 07:15:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:19:48.845 07:15:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:48.845 07:15:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:48.845 07:15:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:48.845 07:15:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:48.845 07:15:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:48.845 07:15:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:48.845 07:15:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:19:48.845 07:15:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:48.845 07:15:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:19:48.845 07:15:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:48.845 07:15:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:48.845 07:15:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:48.845 07:15:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:48.845 07:15:53 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:48.845 07:15:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:48.845 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:48.845 07:15:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:48.845 07:15:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:48.845 07:15:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:48.845 07:15:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:19:48.845 07:15:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:19:48.845 07:15:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:55.417 07:15:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:55.417 07:15:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:19:55.417 07:15:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:55.417 07:15:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:55.417 07:15:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:55.417 07:15:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:55.417 07:15:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:55.417 07:15:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:19:55.417 07:15:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:55.417 07:15:58 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:19:55.417 07:15:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:19:55.417 07:15:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:19:55.417 07:15:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:19:55.417 07:15:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:19:55.417 07:15:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:19:55.417 07:15:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:55.417 07:15:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:55.417 07:15:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:55.417 07:15:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:55.417 07:15:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:55.417 07:15:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:55.417 07:15:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:55.417 07:15:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:55.417 07:15:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:55.417 07:15:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:55.417 07:15:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:55.417 07:15:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:55.417 07:15:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:55.417 07:15:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:55.417 07:15:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:55.417 07:15:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:55.417 07:15:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:55.417 07:15:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:55.417 07:15:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:55.417 07:15:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:19:55.417 Found 0000:86:00.0 (0x8086 - 0x159b) 00:19:55.417 07:15:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:55.417 07:15:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:55.417 07:15:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:55.417 07:15:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:55.417 07:15:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:55.417 07:15:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:55.417 07:15:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:19:55.417 
Found 0000:86:00.1 (0x8086 - 0x159b) 00:19:55.417 07:15:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:55.417 07:15:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:55.417 07:15:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:55.417 07:15:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:55.417 07:15:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:55.417 07:15:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:55.417 07:15:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:55.417 07:15:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:55.417 07:15:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:55.417 07:15:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:55.417 07:15:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:55.417 07:15:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:55.417 07:15:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:55.417 07:15:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:55.417 07:15:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:55.417 07:15:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:19:55.417 Found net devices under 0000:86:00.0: cvl_0_0 00:19:55.418 07:15:58 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:55.418 07:15:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:55.418 07:15:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:55.418 07:15:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:55.418 07:15:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:55.418 07:15:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:55.418 07:15:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:55.418 07:15:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:55.418 07:15:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:19:55.418 Found net devices under 0000:86:00.1: cvl_0_1 00:19:55.418 07:15:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:55.418 07:15:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:55.418 07:15:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:55.418 07:15:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:19:55.418 07:15:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:19:55.418 07:15:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:19:55.418 07:15:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 
00:19:55.418 07:15:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:19:55.677 07:16:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:19:57.580 07:16:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:20:02.854 07:16:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:20:02.854 07:16:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:02.854 07:16:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:02.854 07:16:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:02.854 07:16:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:02.854 07:16:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:02.854 07:16:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:02.854 07:16:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:02.854 07:16:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:02.854 07:16:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:02.854 07:16:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:02.855 07:16:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:20:02.855 07:16:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:02.855 07:16:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:02.855 07:16:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@315 -- # pci_devs=() 00:20:02.855 07:16:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:02.855 07:16:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:02.855 07:16:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:02.855 07:16:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:02.855 07:16:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:02.855 07:16:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:20:02.855 07:16:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:02.855 07:16:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:20:02.855 07:16:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:20:02.855 07:16:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:20:02.855 07:16:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:20:02.855 07:16:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:20:02.855 07:16:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:20:02.855 07:16:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:02.855 07:16:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:02.855 07:16:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:02.855 07:16:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:02.855 07:16:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:02.855 07:16:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:02.855 07:16:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:02.855 07:16:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:02.855 07:16:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:02.855 07:16:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:02.855 07:16:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:02.855 07:16:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:02.855 07:16:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:02.855 07:16:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:02.855 07:16:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:02.855 07:16:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:02.855 07:16:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:02.855 07:16:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:02.855 07:16:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:02.855 07:16:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:02.855 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:02.855 07:16:06 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:02.855 07:16:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:02.855 07:16:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:02.855 07:16:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:02.855 07:16:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:02.855 07:16:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:02.855 07:16:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:02.855 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:02.855 07:16:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:02.855 07:16:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:02.855 07:16:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:02.855 07:16:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:02.855 07:16:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:02.855 07:16:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:02.855 07:16:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:02.855 07:16:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:02.855 07:16:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:02.855 07:16:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:20:02.855 07:16:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:02.855 07:16:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:02.855 07:16:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:02.855 07:16:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:02.855 07:16:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:02.855 07:16:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:02.855 Found net devices under 0000:86:00.0: cvl_0_0 00:20:02.855 07:16:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:02.855 07:16:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:02.855 07:16:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:02.855 07:16:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:02.855 07:16:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:02.855 07:16:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:02.855 07:16:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:02.855 07:16:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:02.855 07:16:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:02.855 Found net devices under 0000:86:00.1: cvl_0_1 00:20:02.855 07:16:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:02.855 07:16:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:02.855 07:16:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:20:02.855 07:16:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:02.855 07:16:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:02.855 07:16:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:02.855 07:16:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:02.855 07:16:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:02.855 07:16:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:02.855 07:16:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:02.855 07:16:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:02.855 07:16:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:02.855 07:16:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:02.855 07:16:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:02.855 07:16:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:02.855 07:16:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:02.855 07:16:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:02.855 07:16:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:02.855 07:16:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:02.856 07:16:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:02.856 07:16:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:02.856 07:16:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:02.856 07:16:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:02.856 07:16:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:02.856 07:16:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:02.856 07:16:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:02.856 07:16:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:02.856 07:16:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:02.856 07:16:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:02.856 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:02.856 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.458 ms 00:20:02.856 00:20:02.856 --- 10.0.0.2 ping statistics --- 00:20:02.856 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:02.856 rtt min/avg/max/mdev = 0.458/0.458/0.458/0.000 ms 00:20:02.856 07:16:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:02.856 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:02.856 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.195 ms 00:20:02.856 00:20:02.856 --- 10.0.0.1 ping statistics --- 00:20:02.856 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:02.856 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:20:02.856 07:16:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:02.856 07:16:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:20:02.856 07:16:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:02.856 07:16:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:02.856 07:16:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:02.856 07:16:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:02.856 07:16:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:02.856 07:16:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:02.856 07:16:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:02.856 07:16:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:20:02.856 07:16:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter 
start_nvmf_tgt 00:20:02.856 07:16:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:02.856 07:16:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:02.856 07:16:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=1237458 00:20:02.856 07:16:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 1237458 00:20:02.856 07:16:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:20:02.856 07:16:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@833 -- # '[' -z 1237458 ']' 00:20:02.856 07:16:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:02.856 07:16:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:02.856 07:16:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:02.856 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:02.856 07:16:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:02.856 07:16:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:02.856 [2024-11-20 07:16:07.318974] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 
00:20:02.856 [2024-11-20 07:16:07.319018] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:02.856 [2024-11-20 07:16:07.398040] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:03.115 [2024-11-20 07:16:07.442783] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:03.115 [2024-11-20 07:16:07.442818] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:03.115 [2024-11-20 07:16:07.442825] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:03.115 [2024-11-20 07:16:07.442831] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:03.115 [2024-11-20 07:16:07.442836] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:03.115 [2024-11-20 07:16:07.444278] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:03.115 [2024-11-20 07:16:07.444386] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:03.115 [2024-11-20 07:16:07.444495] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:03.115 [2024-11-20 07:16:07.444496] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:03.681 07:16:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:03.681 07:16:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@866 -- # return 0 00:20:03.681 07:16:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:03.681 07:16:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:03.681 07:16:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:03.681 07:16:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:03.681 07:16:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:20:03.681 07:16:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:20:03.681 07:16:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:20:03.681 07:16:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:03.681 07:16:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:03.681 07:16:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:03.940 07:16:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:20:03.940 07:16:08 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:20:03.940 07:16:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:03.940 07:16:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:03.940 07:16:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:03.940 07:16:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:20:03.940 07:16:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:03.940 07:16:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:03.940 07:16:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:03.940 07:16:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:20:03.940 07:16:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:03.940 07:16:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:03.940 [2024-11-20 07:16:08.333662] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:03.940 07:16:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:03.940 07:16:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:20:03.940 07:16:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:03.940 07:16:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:03.940 Malloc1 00:20:03.940 07:16:08 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:03.940 07:16:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:03.940 07:16:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:03.940 07:16:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:03.940 07:16:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:03.940 07:16:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:03.940 07:16:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:03.940 07:16:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:03.940 07:16:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:03.940 07:16:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:03.940 07:16:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:03.940 07:16:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:03.940 [2024-11-20 07:16:08.399836] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:03.940 07:16:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:03.940 07:16:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=1237707 00:20:03.940 07:16:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:20:03.940 07:16:08 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:20:06.465 07:16:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:20:06.465 07:16:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:06.465 07:16:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:06.465 07:16:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:06.465 07:16:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:20:06.465 "tick_rate": 2300000000, 00:20:06.465 "poll_groups": [ 00:20:06.465 { 00:20:06.465 "name": "nvmf_tgt_poll_group_000", 00:20:06.465 "admin_qpairs": 1, 00:20:06.465 "io_qpairs": 1, 00:20:06.465 "current_admin_qpairs": 1, 00:20:06.465 "current_io_qpairs": 1, 00:20:06.465 "pending_bdev_io": 0, 00:20:06.465 "completed_nvme_io": 19402, 00:20:06.465 "transports": [ 00:20:06.465 { 00:20:06.465 "trtype": "TCP" 00:20:06.465 } 00:20:06.466 ] 00:20:06.466 }, 00:20:06.466 { 00:20:06.466 "name": "nvmf_tgt_poll_group_001", 00:20:06.466 "admin_qpairs": 0, 00:20:06.466 "io_qpairs": 1, 00:20:06.466 "current_admin_qpairs": 0, 00:20:06.466 "current_io_qpairs": 1, 00:20:06.466 "pending_bdev_io": 0, 00:20:06.466 "completed_nvme_io": 19381, 00:20:06.466 "transports": [ 00:20:06.466 { 00:20:06.466 "trtype": "TCP" 00:20:06.466 } 00:20:06.466 ] 00:20:06.466 }, 00:20:06.466 { 00:20:06.466 "name": "nvmf_tgt_poll_group_002", 00:20:06.466 "admin_qpairs": 0, 00:20:06.466 "io_qpairs": 1, 00:20:06.466 "current_admin_qpairs": 0, 00:20:06.466 "current_io_qpairs": 1, 00:20:06.466 "pending_bdev_io": 0, 00:20:06.466 "completed_nvme_io": 19636, 00:20:06.466 
"transports": [ 00:20:06.466 { 00:20:06.466 "trtype": "TCP" 00:20:06.466 } 00:20:06.466 ] 00:20:06.466 }, 00:20:06.466 { 00:20:06.466 "name": "nvmf_tgt_poll_group_003", 00:20:06.466 "admin_qpairs": 0, 00:20:06.466 "io_qpairs": 1, 00:20:06.466 "current_admin_qpairs": 0, 00:20:06.466 "current_io_qpairs": 1, 00:20:06.466 "pending_bdev_io": 0, 00:20:06.466 "completed_nvme_io": 19128, 00:20:06.466 "transports": [ 00:20:06.466 { 00:20:06.466 "trtype": "TCP" 00:20:06.466 } 00:20:06.466 ] 00:20:06.466 } 00:20:06.466 ] 00:20:06.466 }' 00:20:06.466 07:16:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:20:06.466 07:16:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:20:06.466 07:16:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:20:06.466 07:16:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:20:06.466 07:16:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 1237707 00:20:14.570 Initializing NVMe Controllers 00:20:14.570 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:14.570 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:20:14.570 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:20:14.570 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:20:14.570 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:20:14.570 Initialization complete. Launching workers. 
00:20:14.570 ======================================================== 00:20:14.570 Latency(us) 00:20:14.570 Device Information : IOPS MiB/s Average min max 00:20:14.570 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10233.20 39.97 6254.51 2485.11 10506.75 00:20:14.570 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10328.90 40.35 6197.28 2328.29 10664.29 00:20:14.570 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10413.50 40.68 6147.37 1912.94 10393.32 00:20:14.570 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10317.70 40.30 6203.66 2475.36 10584.98 00:20:14.570 ======================================================== 00:20:14.570 Total : 41293.29 161.30 6200.47 1912.94 10664.29 00:20:14.570 00:20:14.570 07:16:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini 00:20:14.570 07:16:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:14.570 07:16:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:20:14.570 07:16:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:14.570 07:16:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:20:14.570 07:16:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:14.570 07:16:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:14.570 rmmod nvme_tcp 00:20:14.570 rmmod nvme_fabrics 00:20:14.570 rmmod nvme_keyring 00:20:14.571 07:16:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:14.571 07:16:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:20:14.571 07:16:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:20:14.571 07:16:18 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 1237458 ']' 00:20:14.571 07:16:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 1237458 00:20:14.571 07:16:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@952 -- # '[' -z 1237458 ']' 00:20:14.571 07:16:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # kill -0 1237458 00:20:14.571 07:16:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@957 -- # uname 00:20:14.571 07:16:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:14.571 07:16:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1237458 00:20:14.571 07:16:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:20:14.571 07:16:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:20:14.571 07:16:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1237458' 00:20:14.571 killing process with pid 1237458 00:20:14.571 07:16:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@971 -- # kill 1237458 00:20:14.571 07:16:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@976 -- # wait 1237458 00:20:14.571 07:16:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:14.571 07:16:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:14.571 07:16:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:14.571 07:16:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:20:14.571 07:16:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:20:14.571 
07:16:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:14.571 07:16:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:20:14.571 07:16:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:14.571 07:16:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:14.571 07:16:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:14.571 07:16:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:14.571 07:16:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:16.487 07:16:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:16.487 07:16:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:20:16.487 07:16:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:20:16.487 07:16:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:20:17.862 07:16:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:20:19.761 07:16:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:20:25.032 07:16:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:20:25.032 07:16:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:25.032 07:16:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:25.032 07:16:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:25.032 07:16:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq 
-- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:25.032 07:16:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:25.032 07:16:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:25.032 07:16:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:25.032 07:16:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:25.032 07:16:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:25.032 07:16:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:25.032 07:16:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:20:25.032 07:16:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:25.032 07:16:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:25.032 07:16:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:20:25.032 07:16:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:25.032 07:16:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:25.032 07:16:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:25.032 07:16:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:25.032 07:16:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:25.032 07:16:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:20:25.032 07:16:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:25.032 07:16:28 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:20:25.032 07:16:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:20:25.032 07:16:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:20:25.032 07:16:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:20:25.033 07:16:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:20:25.033 07:16:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:20:25.033 07:16:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:25.033 07:16:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:25.033 07:16:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:25.033 07:16:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:25.033 07:16:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:25.033 07:16:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:25.033 07:16:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:25.033 07:16:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:25.033 07:16:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:25.033 07:16:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:25.033 07:16:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:25.033 07:16:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:25.033 07:16:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:25.033 07:16:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:25.033 07:16:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:25.033 07:16:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:25.033 07:16:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:25.033 07:16:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:25.033 07:16:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:25.033 07:16:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:25.033 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:25.033 07:16:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:25.033 07:16:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:25.033 07:16:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:25.033 07:16:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:25.033 07:16:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:25.033 07:16:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:25.033 07:16:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:25.033 
Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:25.033 07:16:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:25.033 07:16:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:25.033 07:16:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:25.033 07:16:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:25.033 07:16:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:25.033 07:16:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:25.033 07:16:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:25.033 07:16:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:25.033 07:16:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:25.033 07:16:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:25.033 07:16:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:25.033 07:16:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:25.033 07:16:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:25.033 07:16:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:25.033 07:16:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:25.033 07:16:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:25.033 Found net devices under 0000:86:00.0: cvl_0_0 00:20:25.033 07:16:28 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:25.033 07:16:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:25.033 07:16:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:25.033 07:16:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:25.033 07:16:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:25.033 07:16:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:25.033 07:16:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:25.033 07:16:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:25.033 07:16:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:25.033 Found net devices under 0000:86:00.1: cvl_0_1 00:20:25.033 07:16:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:25.033 07:16:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:25.033 07:16:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:20:25.033 07:16:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:25.033 07:16:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:25.033 07:16:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:25.033 07:16:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:25.033 07:16:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:25.033 07:16:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:25.033 07:16:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:25.033 07:16:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:25.033 07:16:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:25.033 07:16:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:25.033 07:16:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:25.033 07:16:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:25.033 07:16:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:25.033 07:16:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:25.033 07:16:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:25.033 07:16:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:25.033 07:16:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:25.033 07:16:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:25.033 07:16:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:25.033 07:16:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:25.033 07:16:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 
up 00:20:25.033 07:16:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:25.033 07:16:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:25.033 07:16:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:25.033 07:16:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:25.033 07:16:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:25.033 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:25.033 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.443 ms 00:20:25.033 00:20:25.033 --- 10.0.0.2 ping statistics --- 00:20:25.033 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:25.033 rtt min/avg/max/mdev = 0.443/0.443/0.443/0.000 ms 00:20:25.034 07:16:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:25.034 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:25.034 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.241 ms 00:20:25.034 00:20:25.034 --- 10.0.0.1 ping statistics --- 00:20:25.034 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:25.034 rtt min/avg/max/mdev = 0.241/0.241/0.241/0.000 ms 00:20:25.034 07:16:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:25.034 07:16:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:20:25.034 07:16:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:25.034 07:16:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:25.034 07:16:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:25.034 07:16:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:25.034 07:16:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:25.034 07:16:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:25.034 07:16:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:25.034 07:16:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:20:25.034 07:16:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:20:25.034 07:16:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:20:25.034 07:16:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:20:25.034 net.core.busy_poll = 1 00:20:25.034 07:16:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:20:25.034 net.core.busy_read = 1 00:20:25.034 07:16:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:20:25.034 07:16:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:20:25.034 07:16:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:20:25.034 07:16:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:20:25.034 07:16:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:20:25.034 07:16:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:20:25.034 07:16:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:25.034 07:16:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:25.034 07:16:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:25.034 07:16:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=1241484 00:20:25.034 07:16:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 1241484 00:20:25.034 07:16:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 
--wait-for-rpc 00:20:25.034 07:16:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@833 -- # '[' -z 1241484 ']' 00:20:25.034 07:16:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:25.034 07:16:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:25.034 07:16:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:25.034 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:25.034 07:16:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:25.034 07:16:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:25.034 [2024-11-20 07:16:29.544242] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 00:20:25.034 [2024-11-20 07:16:29.544292] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:25.292 [2024-11-20 07:16:29.625117] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:25.292 [2024-11-20 07:16:29.665908] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:25.292 [2024-11-20 07:16:29.665951] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:25.292 [2024-11-20 07:16:29.665958] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:25.292 [2024-11-20 07:16:29.665964] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:20:25.292 [2024-11-20 07:16:29.665968] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:25.292 [2024-11-20 07:16:29.667511] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:25.292 [2024-11-20 07:16:29.667619] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:25.292 [2024-11-20 07:16:29.667748] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:25.292 [2024-11-20 07:16:29.667749] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:25.856 07:16:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:25.856 07:16:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@866 -- # return 0 00:20:25.856 07:16:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:25.856 07:16:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:25.856 07:16:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:26.115 07:16:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:26.115 07:16:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:20:26.115 07:16:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:20:26.115 07:16:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:20:26.115 07:16:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:26.115 07:16:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:26.115 07:16:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:20:26.115 07:16:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:20:26.115 07:16:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:20:26.115 07:16:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:26.115 07:16:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:26.115 07:16:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:26.115 07:16:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:20:26.115 07:16:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:26.115 07:16:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:26.115 07:16:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:26.115 07:16:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:20:26.115 07:16:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:26.115 07:16:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:26.115 [2024-11-20 07:16:30.564347] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:26.115 07:16:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:26.115 07:16:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:20:26.115 07:16:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:26.115 07:16:30 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:26.115 Malloc1 00:20:26.115 07:16:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:26.115 07:16:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:26.115 07:16:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:26.115 07:16:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:26.115 07:16:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:26.115 07:16:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:26.115 07:16:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:26.115 07:16:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:26.115 07:16:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:26.115 07:16:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:26.115 07:16:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:26.115 07:16:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:26.115 [2024-11-20 07:16:30.635694] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:26.115 07:16:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:26.115 07:16:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=1241640 
00:20:26.115 07:16:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:20:26.115 07:16:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:20:28.637 07:16:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:20:28.637 07:16:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:28.637 07:16:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:28.637 07:16:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:28.637 07:16:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:20:28.637 "tick_rate": 2300000000, 00:20:28.637 "poll_groups": [ 00:20:28.637 { 00:20:28.637 "name": "nvmf_tgt_poll_group_000", 00:20:28.637 "admin_qpairs": 1, 00:20:28.637 "io_qpairs": 2, 00:20:28.637 "current_admin_qpairs": 1, 00:20:28.637 "current_io_qpairs": 2, 00:20:28.637 "pending_bdev_io": 0, 00:20:28.637 "completed_nvme_io": 28226, 00:20:28.637 "transports": [ 00:20:28.637 { 00:20:28.637 "trtype": "TCP" 00:20:28.637 } 00:20:28.637 ] 00:20:28.637 }, 00:20:28.637 { 00:20:28.637 "name": "nvmf_tgt_poll_group_001", 00:20:28.637 "admin_qpairs": 0, 00:20:28.637 "io_qpairs": 2, 00:20:28.637 "current_admin_qpairs": 0, 00:20:28.637 "current_io_qpairs": 2, 00:20:28.637 "pending_bdev_io": 0, 00:20:28.637 "completed_nvme_io": 28063, 00:20:28.637 "transports": [ 00:20:28.637 { 00:20:28.637 "trtype": "TCP" 00:20:28.637 } 00:20:28.637 ] 00:20:28.637 }, 00:20:28.637 { 00:20:28.637 "name": "nvmf_tgt_poll_group_002", 00:20:28.637 "admin_qpairs": 0, 00:20:28.637 "io_qpairs": 0, 00:20:28.637 "current_admin_qpairs": 0, 
00:20:28.637 "current_io_qpairs": 0, 00:20:28.637 "pending_bdev_io": 0, 00:20:28.637 "completed_nvme_io": 0, 00:20:28.637 "transports": [ 00:20:28.637 { 00:20:28.637 "trtype": "TCP" 00:20:28.637 } 00:20:28.637 ] 00:20:28.637 }, 00:20:28.637 { 00:20:28.637 "name": "nvmf_tgt_poll_group_003", 00:20:28.637 "admin_qpairs": 0, 00:20:28.637 "io_qpairs": 0, 00:20:28.637 "current_admin_qpairs": 0, 00:20:28.637 "current_io_qpairs": 0, 00:20:28.637 "pending_bdev_io": 0, 00:20:28.637 "completed_nvme_io": 0, 00:20:28.637 "transports": [ 00:20:28.637 { 00:20:28.637 "trtype": "TCP" 00:20:28.637 } 00:20:28.637 ] 00:20:28.637 } 00:20:28.637 ] 00:20:28.637 }' 00:20:28.637 07:16:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:20:28.637 07:16:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:20:28.637 07:16:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=2 00:20:28.637 07:16:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 2 -lt 2 ]] 00:20:28.637 07:16:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 1241640 00:20:36.738 Initializing NVMe Controllers 00:20:36.738 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:36.738 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:20:36.738 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:20:36.738 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:20:36.738 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:20:36.738 Initialization complete. Launching workers. 
00:20:36.738 ======================================================== 00:20:36.738 Latency(us) 00:20:36.738 Device Information : IOPS MiB/s Average min max 00:20:36.738 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 8677.60 33.90 7374.65 1380.00 51859.97 00:20:36.738 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 7817.40 30.54 8185.79 1148.55 53475.64 00:20:36.738 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 7091.50 27.70 9024.35 1548.90 52193.63 00:20:36.738 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 6296.40 24.60 10164.03 1685.92 55480.18 00:20:36.738 ======================================================== 00:20:36.738 Total : 29882.90 116.73 8566.06 1148.55 55480.18 00:20:36.738 00:20:36.738 07:16:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini 00:20:36.738 07:16:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:36.738 07:16:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:20:36.738 07:16:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:36.738 07:16:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:20:36.738 07:16:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:36.738 07:16:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:36.738 rmmod nvme_tcp 00:20:36.738 rmmod nvme_fabrics 00:20:36.738 rmmod nvme_keyring 00:20:36.738 07:16:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:36.738 07:16:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:20:36.738 07:16:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:20:36.738 07:16:40 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 1241484 ']' 00:20:36.738 07:16:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 1241484 00:20:36.738 07:16:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@952 -- # '[' -z 1241484 ']' 00:20:36.738 07:16:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # kill -0 1241484 00:20:36.738 07:16:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@957 -- # uname 00:20:36.738 07:16:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:36.738 07:16:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1241484 00:20:36.738 07:16:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:20:36.738 07:16:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:20:36.738 07:16:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1241484' 00:20:36.738 killing process with pid 1241484 00:20:36.738 07:16:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@971 -- # kill 1241484 00:20:36.738 07:16:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@976 -- # wait 1241484 00:20:36.738 07:16:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:36.738 07:16:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:36.738 07:16:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:36.738 07:16:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:20:36.738 07:16:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:20:36.738 
07:16:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:36.738 07:16:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:20:36.738 07:16:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:36.738 07:16:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:36.738 07:16:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:36.738 07:16:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:36.738 07:16:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:40.027 07:16:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:40.027 07:16:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:20:40.027 00:20:40.027 real 0m51.114s 00:20:40.027 user 2m49.256s 00:20:40.027 sys 0m10.612s 00:20:40.027 07:16:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:40.027 07:16:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:40.027 ************************************ 00:20:40.027 END TEST nvmf_perf_adq 00:20:40.027 ************************************ 00:20:40.027 07:16:44 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:20:40.027 07:16:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:20:40.027 07:16:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:40.027 07:16:44 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:20:40.027 ************************************ 00:20:40.027 START TEST nvmf_shutdown 00:20:40.027 ************************************ 00:20:40.027 07:16:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:20:40.027 * Looking for test storage... 00:20:40.027 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:40.027 07:16:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:20:40.027 07:16:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1691 -- # lcov --version 00:20:40.027 07:16:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:20:40.027 07:16:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:20:40.027 07:16:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:40.027 07:16:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:40.027 07:16:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:40.027 07:16:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:20:40.027 07:16:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:20:40.027 07:16:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:20:40.027 07:16:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:20:40.027 07:16:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:20:40.027 07:16:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:20:40.027 07:16:44 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:20:40.027 07:16:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:40.027 07:16:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:20:40.027 07:16:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:20:40.027 07:16:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:40.027 07:16:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:40.027 07:16:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:20:40.027 07:16:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:20:40.027 07:16:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:40.027 07:16:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:20:40.027 07:16:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:20:40.027 07:16:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:20:40.027 07:16:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:20:40.027 07:16:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:40.027 07:16:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:20:40.027 07:16:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:20:40.027 07:16:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:40.027 07:16:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:40.027 07:16:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
scripts/common.sh@368 -- # return 0 00:20:40.027 07:16:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:40.027 07:16:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:20:40.027 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:40.027 --rc genhtml_branch_coverage=1 00:20:40.027 --rc genhtml_function_coverage=1 00:20:40.027 --rc genhtml_legend=1 00:20:40.027 --rc geninfo_all_blocks=1 00:20:40.027 --rc geninfo_unexecuted_blocks=1 00:20:40.027 00:20:40.027 ' 00:20:40.027 07:16:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:20:40.027 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:40.027 --rc genhtml_branch_coverage=1 00:20:40.027 --rc genhtml_function_coverage=1 00:20:40.027 --rc genhtml_legend=1 00:20:40.027 --rc geninfo_all_blocks=1 00:20:40.027 --rc geninfo_unexecuted_blocks=1 00:20:40.027 00:20:40.027 ' 00:20:40.027 07:16:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:20:40.027 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:40.027 --rc genhtml_branch_coverage=1 00:20:40.027 --rc genhtml_function_coverage=1 00:20:40.027 --rc genhtml_legend=1 00:20:40.027 --rc geninfo_all_blocks=1 00:20:40.027 --rc geninfo_unexecuted_blocks=1 00:20:40.027 00:20:40.027 ' 00:20:40.027 07:16:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:20:40.027 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:40.027 --rc genhtml_branch_coverage=1 00:20:40.027 --rc genhtml_function_coverage=1 00:20:40.027 --rc genhtml_legend=1 00:20:40.027 --rc geninfo_all_blocks=1 00:20:40.027 --rc geninfo_unexecuted_blocks=1 00:20:40.027 00:20:40.027 ' 00:20:40.027 07:16:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:40.028 07:16:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:20:40.028 07:16:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:40.028 07:16:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:40.028 07:16:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:40.028 07:16:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:40.028 07:16:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:40.028 07:16:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:40.028 07:16:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:40.028 07:16:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:40.028 07:16:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:40.028 07:16:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:40.028 07:16:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:40.028 07:16:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:20:40.028 07:16:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:40.028 07:16:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:40.028 07:16:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:20:40.028 07:16:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:40.028 07:16:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:40.028 07:16:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:20:40.028 07:16:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:40.028 07:16:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:40.028 07:16:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:40.028 07:16:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:40.028 07:16:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:40.028 07:16:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:40.028 07:16:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:20:40.028 07:16:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:40.028 07:16:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:20:40.028 07:16:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:40.028 07:16:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:40.028 07:16:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:40.028 07:16:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:40.028 07:16:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:40.028 07:16:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:40.028 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:40.028 07:16:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:40.028 07:16:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:40.028 07:16:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:40.028 07:16:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:20:40.028 07:16:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:20:40.028 07:16:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:20:40.028 07:16:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:20:40.028 07:16:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:40.028 07:16:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:20:40.028 ************************************ 00:20:40.028 START TEST nvmf_shutdown_tc1 00:20:40.028 ************************************ 00:20:40.028 07:16:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1127 -- # nvmf_shutdown_tc1 00:20:40.028 07:16:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:20:40.028 07:16:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:20:40.028 07:16:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:40.028 07:16:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:40.028 07:16:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:40.028 07:16:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:40.028 07:16:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:40.028 07:16:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:40.028 07:16:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:20:40.028 07:16:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:40.028 07:16:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:40.028 07:16:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:40.028 07:16:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:20:40.028 07:16:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:46.728 07:16:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:46.728 07:16:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:20:46.728 07:16:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:46.728 07:16:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:46.728 07:16:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:46.728 07:16:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:46.728 07:16:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:46.728 07:16:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:20:46.728 07:16:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:46.728 07:16:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:20:46.728 07:16:50 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:20:46.729 07:16:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:20:46.729 07:16:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:20:46.729 07:16:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:20:46.729 07:16:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:20:46.729 07:16:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:46.729 07:16:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:46.729 07:16:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:46.729 07:16:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:46.729 07:16:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:46.729 07:16:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:46.729 07:16:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:46.729 07:16:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:46.729 07:16:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:46.729 07:16:50 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:46.729 07:16:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:46.729 07:16:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:46.729 07:16:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:46.729 07:16:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:46.729 07:16:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:46.729 07:16:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:46.729 07:16:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:46.729 07:16:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:46.729 07:16:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:46.729 07:16:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:46.729 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:46.729 07:16:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:46.729 07:16:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:46.729 07:16:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:46.729 07:16:50 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:46.729 07:16:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:46.729 07:16:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:46.729 07:16:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:46.729 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:46.729 07:16:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:46.729 07:16:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:46.729 07:16:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:46.729 07:16:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:46.729 07:16:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:46.729 07:16:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:46.729 07:16:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:46.729 07:16:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:46.729 07:16:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:46.729 07:16:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:46.729 07:16:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:46.729 07:16:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:46.729 07:16:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:46.729 07:16:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:46.729 07:16:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:46.729 07:16:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:46.729 Found net devices under 0000:86:00.0: cvl_0_0 00:20:46.729 07:16:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:46.729 07:16:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:46.729 07:16:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:46.729 07:16:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:46.729 07:16:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:46.729 07:16:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:46.729 07:16:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:46.729 07:16:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:46.729 07:16:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- 
# echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:46.729 Found net devices under 0000:86:00.1: cvl_0_1 00:20:46.729 07:16:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:46.729 07:16:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:46.729 07:16:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # is_hw=yes 00:20:46.729 07:16:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:46.729 07:16:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:46.729 07:16:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:46.729 07:16:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:46.729 07:16:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:46.729 07:16:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:46.729 07:16:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:46.729 07:16:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:46.729 07:16:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:46.729 07:16:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:46.729 07:16:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:46.729 07:16:50 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:46.729 07:16:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:46.729 07:16:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:46.729 07:16:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:46.729 07:16:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:46.729 07:16:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:46.729 07:16:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:46.730 07:16:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:46.730 07:16:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:46.730 07:16:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:46.730 07:16:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:46.730 07:16:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:46.730 07:16:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:46.730 07:16:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:46.730 07:16:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:46.730 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:46.730 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.398 ms 00:20:46.730 00:20:46.730 --- 10.0.0.2 ping statistics --- 00:20:46.730 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:46.730 rtt min/avg/max/mdev = 0.398/0.398/0.398/0.000 ms 00:20:46.730 07:16:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:46.730 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:46.730 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.207 ms 00:20:46.730 00:20:46.730 --- 10.0.0.1 ping statistics --- 00:20:46.730 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:46.730 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:20:46.730 07:16:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:46.730 07:16:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # return 0 00:20:46.730 07:16:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:46.730 07:16:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:46.730 07:16:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:46.730 07:16:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:46.730 07:16:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:46.730 07:16:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:46.730 07:16:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:46.730 07:16:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:20:46.730 07:16:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:46.730 07:16:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:46.730 07:16:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:46.730 07:16:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # nvmfpid=1247015 00:20:46.730 07:16:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # waitforlisten 1247015 00:20:46.730 07:16:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:20:46.730 07:16:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # '[' -z 1247015 ']' 00:20:46.730 07:16:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:46.730 07:16:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:46.730 07:16:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:20:46.730 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:46.730 07:16:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:46.730 07:16:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:46.730 [2024-11-20 07:16:50.509167] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 00:20:46.730 [2024-11-20 07:16:50.509221] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:46.730 [2024-11-20 07:16:50.591968] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:46.730 [2024-11-20 07:16:50.635505] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:46.730 [2024-11-20 07:16:50.635546] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:46.730 [2024-11-20 07:16:50.635553] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:46.730 [2024-11-20 07:16:50.635559] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:46.730 [2024-11-20 07:16:50.635565] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:46.730 [2024-11-20 07:16:50.637062] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:46.730 [2024-11-20 07:16:50.637176] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:46.730 [2024-11-20 07:16:50.637283] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:46.730 [2024-11-20 07:16:50.637284] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:20:46.730 07:16:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:46.730 07:16:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@866 -- # return 0 00:20:46.730 07:16:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:46.730 07:16:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:46.730 07:16:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:46.730 07:16:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:46.730 07:16:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:46.730 07:16:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:46.730 07:16:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:46.730 [2024-11-20 07:16:50.776872] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:46.730 07:16:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:46.730 07:16:50 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:20:46.730 07:16:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:20:46.730 07:16:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:46.730 07:16:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:46.730 07:16:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:46.730 07:16:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:46.730 07:16:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:46.730 07:16:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:46.730 07:16:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:46.730 07:16:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:46.730 07:16:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:46.730 07:16:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:46.730 07:16:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:46.730 07:16:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:46.730 07:16:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 
00:20:46.730 07:16:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:46.730 07:16:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:46.730 07:16:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:46.730 07:16:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:46.731 07:16:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:46.731 07:16:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:46.731 07:16:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:46.731 07:16:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:46.731 07:16:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:46.731 07:16:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:46.731 07:16:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:20:46.731 07:16:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:46.731 07:16:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:46.731 Malloc1 00:20:46.731 [2024-11-20 07:16:50.888232] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:46.731 Malloc2 00:20:46.731 Malloc3 00:20:46.731 Malloc4 00:20:46.731 Malloc5 00:20:46.731 Malloc6 00:20:46.731 Malloc7 00:20:46.731 Malloc8 00:20:46.731 Malloc9 
00:20:46.731 Malloc10 00:20:46.990 07:16:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:46.990 07:16:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:20:46.990 07:16:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:46.990 07:16:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:46.990 07:16:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=1247254 00:20:46.990 07:16:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 1247254 /var/tmp/bdevperf.sock 00:20:46.990 07:16:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # '[' -z 1247254 ']' 00:20:46.990 07:16:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:20:46.990 07:16:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:46.990 07:16:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:46.990 07:16:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:46.990 07:16:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:20:46.990 07:16:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bdevperf.sock...' 00:20:46.990 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:46.990 07:16:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:20:46.990 07:16:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:46.990 07:16:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:46.990 07:16:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:46.990 07:16:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:46.990 { 00:20:46.990 "params": { 00:20:46.990 "name": "Nvme$subsystem", 00:20:46.990 "trtype": "$TEST_TRANSPORT", 00:20:46.990 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:46.990 "adrfam": "ipv4", 00:20:46.990 "trsvcid": "$NVMF_PORT", 00:20:46.990 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:46.990 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:46.990 "hdgst": ${hdgst:-false}, 00:20:46.990 "ddgst": ${ddgst:-false} 00:20:46.990 }, 00:20:46.990 "method": "bdev_nvme_attach_controller" 00:20:46.990 } 00:20:46.990 EOF 00:20:46.990 )") 00:20:46.990 07:16:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:46.990 07:16:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:46.990 07:16:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:46.990 { 00:20:46.990 "params": { 00:20:46.990 "name": "Nvme$subsystem", 00:20:46.990 "trtype": "$TEST_TRANSPORT", 00:20:46.990 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:46.990 "adrfam": "ipv4", 00:20:46.990 "trsvcid": "$NVMF_PORT", 00:20:46.990 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:20:46.990 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:46.990 "hdgst": ${hdgst:-false}, 00:20:46.990 "ddgst": ${ddgst:-false} 00:20:46.990 }, 00:20:46.990 "method": "bdev_nvme_attach_controller" 00:20:46.990 } 00:20:46.990 EOF 00:20:46.990 )") 00:20:46.990 07:16:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:46.990 07:16:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:46.990 07:16:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:46.990 { 00:20:46.990 "params": { 00:20:46.990 "name": "Nvme$subsystem", 00:20:46.990 "trtype": "$TEST_TRANSPORT", 00:20:46.990 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:46.990 "adrfam": "ipv4", 00:20:46.990 "trsvcid": "$NVMF_PORT", 00:20:46.990 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:46.990 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:46.990 "hdgst": ${hdgst:-false}, 00:20:46.990 "ddgst": ${ddgst:-false} 00:20:46.990 }, 00:20:46.990 "method": "bdev_nvme_attach_controller" 00:20:46.990 } 00:20:46.990 EOF 00:20:46.990 )") 00:20:46.990 07:16:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:46.990 07:16:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:46.990 07:16:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:46.990 { 00:20:46.990 "params": { 00:20:46.990 "name": "Nvme$subsystem", 00:20:46.991 "trtype": "$TEST_TRANSPORT", 00:20:46.991 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:46.991 "adrfam": "ipv4", 00:20:46.991 "trsvcid": "$NVMF_PORT", 00:20:46.991 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:46.991 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:46.991 "hdgst": 
${hdgst:-false}, 00:20:46.991 "ddgst": ${ddgst:-false} 00:20:46.991 }, 00:20:46.991 "method": "bdev_nvme_attach_controller" 00:20:46.991 } 00:20:46.991 EOF 00:20:46.991 )") 00:20:46.991 07:16:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:46.991 07:16:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:46.991 07:16:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:46.991 { 00:20:46.991 "params": { 00:20:46.991 "name": "Nvme$subsystem", 00:20:46.991 "trtype": "$TEST_TRANSPORT", 00:20:46.991 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:46.991 "adrfam": "ipv4", 00:20:46.991 "trsvcid": "$NVMF_PORT", 00:20:46.991 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:46.991 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:46.991 "hdgst": ${hdgst:-false}, 00:20:46.991 "ddgst": ${ddgst:-false} 00:20:46.991 }, 00:20:46.991 "method": "bdev_nvme_attach_controller" 00:20:46.991 } 00:20:46.991 EOF 00:20:46.991 )") 00:20:46.991 07:16:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:46.991 07:16:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:46.991 07:16:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:46.991 { 00:20:46.991 "params": { 00:20:46.991 "name": "Nvme$subsystem", 00:20:46.991 "trtype": "$TEST_TRANSPORT", 00:20:46.991 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:46.991 "adrfam": "ipv4", 00:20:46.991 "trsvcid": "$NVMF_PORT", 00:20:46.991 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:46.991 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:46.991 "hdgst": ${hdgst:-false}, 00:20:46.991 "ddgst": ${ddgst:-false} 00:20:46.991 }, 00:20:46.991 "method": "bdev_nvme_attach_controller" 
00:20:46.991 } 00:20:46.991 EOF 00:20:46.991 )") 00:20:46.991 07:16:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:46.991 07:16:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:46.991 07:16:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:46.991 { 00:20:46.991 "params": { 00:20:46.991 "name": "Nvme$subsystem", 00:20:46.991 "trtype": "$TEST_TRANSPORT", 00:20:46.991 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:46.991 "adrfam": "ipv4", 00:20:46.991 "trsvcid": "$NVMF_PORT", 00:20:46.991 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:46.991 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:46.991 "hdgst": ${hdgst:-false}, 00:20:46.991 "ddgst": ${ddgst:-false} 00:20:46.991 }, 00:20:46.991 "method": "bdev_nvme_attach_controller" 00:20:46.991 } 00:20:46.991 EOF 00:20:46.991 )") 00:20:46.991 [2024-11-20 07:16:51.360917] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 
00:20:46.991 [2024-11-20 07:16:51.360971] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:20:46.991 07:16:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:46.991 07:16:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:46.991 07:16:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:46.991 { 00:20:46.991 "params": { 00:20:46.991 "name": "Nvme$subsystem", 00:20:46.991 "trtype": "$TEST_TRANSPORT", 00:20:46.991 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:46.991 "adrfam": "ipv4", 00:20:46.991 "trsvcid": "$NVMF_PORT", 00:20:46.991 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:46.991 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:46.991 "hdgst": ${hdgst:-false}, 00:20:46.991 "ddgst": ${ddgst:-false} 00:20:46.991 }, 00:20:46.991 "method": "bdev_nvme_attach_controller" 00:20:46.991 } 00:20:46.991 EOF 00:20:46.991 )") 00:20:46.991 07:16:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:46.991 07:16:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:46.991 07:16:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:46.991 { 00:20:46.991 "params": { 00:20:46.991 "name": "Nvme$subsystem", 00:20:46.991 "trtype": "$TEST_TRANSPORT", 00:20:46.991 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:46.991 "adrfam": "ipv4", 00:20:46.991 "trsvcid": "$NVMF_PORT", 00:20:46.991 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:46.991 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:46.991 "hdgst": ${hdgst:-false}, 
00:20:46.991 "ddgst": ${ddgst:-false} 00:20:46.991 }, 00:20:46.991 "method": "bdev_nvme_attach_controller" 00:20:46.991 } 00:20:46.991 EOF 00:20:46.991 )") 00:20:46.991 07:16:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:46.991 07:16:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:46.991 07:16:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:46.991 { 00:20:46.991 "params": { 00:20:46.991 "name": "Nvme$subsystem", 00:20:46.991 "trtype": "$TEST_TRANSPORT", 00:20:46.991 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:46.991 "adrfam": "ipv4", 00:20:46.991 "trsvcid": "$NVMF_PORT", 00:20:46.991 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:46.991 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:46.991 "hdgst": ${hdgst:-false}, 00:20:46.991 "ddgst": ${ddgst:-false} 00:20:46.991 }, 00:20:46.991 "method": "bdev_nvme_attach_controller" 00:20:46.991 } 00:20:46.991 EOF 00:20:46.991 )") 00:20:46.991 07:16:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:46.991 07:16:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
00:20:46.991 07:16:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:20:46.991 07:16:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:20:46.991 "params": { 00:20:46.991 "name": "Nvme1", 00:20:46.991 "trtype": "tcp", 00:20:46.991 "traddr": "10.0.0.2", 00:20:46.991 "adrfam": "ipv4", 00:20:46.991 "trsvcid": "4420", 00:20:46.991 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:46.991 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:46.991 "hdgst": false, 00:20:46.991 "ddgst": false 00:20:46.991 }, 00:20:46.991 "method": "bdev_nvme_attach_controller" 00:20:46.991 },{ 00:20:46.991 "params": { 00:20:46.991 "name": "Nvme2", 00:20:46.991 "trtype": "tcp", 00:20:46.991 "traddr": "10.0.0.2", 00:20:46.991 "adrfam": "ipv4", 00:20:46.991 "trsvcid": "4420", 00:20:46.991 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:46.991 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:46.991 "hdgst": false, 00:20:46.991 "ddgst": false 00:20:46.991 }, 00:20:46.991 "method": "bdev_nvme_attach_controller" 00:20:46.991 },{ 00:20:46.991 "params": { 00:20:46.991 "name": "Nvme3", 00:20:46.991 "trtype": "tcp", 00:20:46.991 "traddr": "10.0.0.2", 00:20:46.991 "adrfam": "ipv4", 00:20:46.991 "trsvcid": "4420", 00:20:46.991 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:46.991 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:46.991 "hdgst": false, 00:20:46.991 "ddgst": false 00:20:46.991 }, 00:20:46.991 "method": "bdev_nvme_attach_controller" 00:20:46.991 },{ 00:20:46.991 "params": { 00:20:46.991 "name": "Nvme4", 00:20:46.991 "trtype": "tcp", 00:20:46.991 "traddr": "10.0.0.2", 00:20:46.991 "adrfam": "ipv4", 00:20:46.991 "trsvcid": "4420", 00:20:46.991 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:46.991 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:46.991 "hdgst": false, 00:20:46.991 "ddgst": false 00:20:46.991 }, 00:20:46.991 "method": "bdev_nvme_attach_controller" 00:20:46.991 },{ 00:20:46.992 "params": { 
00:20:46.992 "name": "Nvme5", 00:20:46.992 "trtype": "tcp", 00:20:46.992 "traddr": "10.0.0.2", 00:20:46.992 "adrfam": "ipv4", 00:20:46.992 "trsvcid": "4420", 00:20:46.992 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:46.992 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:46.992 "hdgst": false, 00:20:46.992 "ddgst": false 00:20:46.992 }, 00:20:46.992 "method": "bdev_nvme_attach_controller" 00:20:46.992 },{ 00:20:46.992 "params": { 00:20:46.992 "name": "Nvme6", 00:20:46.992 "trtype": "tcp", 00:20:46.992 "traddr": "10.0.0.2", 00:20:46.992 "adrfam": "ipv4", 00:20:46.992 "trsvcid": "4420", 00:20:46.992 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:46.992 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:46.992 "hdgst": false, 00:20:46.992 "ddgst": false 00:20:46.992 }, 00:20:46.992 "method": "bdev_nvme_attach_controller" 00:20:46.992 },{ 00:20:46.992 "params": { 00:20:46.992 "name": "Nvme7", 00:20:46.992 "trtype": "tcp", 00:20:46.992 "traddr": "10.0.0.2", 00:20:46.992 "adrfam": "ipv4", 00:20:46.992 "trsvcid": "4420", 00:20:46.992 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:46.992 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:46.992 "hdgst": false, 00:20:46.992 "ddgst": false 00:20:46.992 }, 00:20:46.992 "method": "bdev_nvme_attach_controller" 00:20:46.992 },{ 00:20:46.992 "params": { 00:20:46.992 "name": "Nvme8", 00:20:46.992 "trtype": "tcp", 00:20:46.992 "traddr": "10.0.0.2", 00:20:46.992 "adrfam": "ipv4", 00:20:46.992 "trsvcid": "4420", 00:20:46.992 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:46.992 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:20:46.992 "hdgst": false, 00:20:46.992 "ddgst": false 00:20:46.992 }, 00:20:46.992 "method": "bdev_nvme_attach_controller" 00:20:46.992 },{ 00:20:46.992 "params": { 00:20:46.992 "name": "Nvme9", 00:20:46.992 "trtype": "tcp", 00:20:46.992 "traddr": "10.0.0.2", 00:20:46.992 "adrfam": "ipv4", 00:20:46.992 "trsvcid": "4420", 00:20:46.992 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:46.992 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:20:46.992 "hdgst": false, 00:20:46.992 "ddgst": false 00:20:46.992 }, 00:20:46.992 "method": "bdev_nvme_attach_controller" 00:20:46.992 },{ 00:20:46.992 "params": { 00:20:46.992 "name": "Nvme10", 00:20:46.992 "trtype": "tcp", 00:20:46.992 "traddr": "10.0.0.2", 00:20:46.992 "adrfam": "ipv4", 00:20:46.992 "trsvcid": "4420", 00:20:46.992 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:46.992 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:46.992 "hdgst": false, 00:20:46.992 "ddgst": false 00:20:46.992 }, 00:20:46.992 "method": "bdev_nvme_attach_controller" 00:20:46.992 }' 00:20:46.992 [2024-11-20 07:16:51.436860] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:46.992 [2024-11-20 07:16:51.478467] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:48.893 07:16:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:48.893 07:16:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@866 -- # return 0 00:20:48.893 07:16:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:20:48.893 07:16:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.893 07:16:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:48.893 07:16:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.893 07:16:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 1247254 00:20:48.893 07:16:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:20:48.893 07:16:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:20:49.827 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 1247254 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:20:49.827 07:16:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 1247015 00:20:49.827 07:16:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:20:49.827 07:16:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:49.827 07:16:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:20:49.827 07:16:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:20:49.827 07:16:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:49.827 07:16:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:49.827 { 00:20:49.827 "params": { 00:20:49.827 "name": "Nvme$subsystem", 00:20:49.827 "trtype": "$TEST_TRANSPORT", 00:20:49.827 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:49.827 "adrfam": "ipv4", 00:20:49.827 "trsvcid": "$NVMF_PORT", 00:20:49.827 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:49.827 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:49.827 "hdgst": ${hdgst:-false}, 00:20:49.827 "ddgst": ${ddgst:-false} 00:20:49.827 }, 00:20:49.827 "method": "bdev_nvme_attach_controller" 00:20:49.827 } 00:20:49.827 EOF 00:20:49.827 )") 00:20:49.827 07:16:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:49.827 07:16:54 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:49.827 07:16:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:49.827 { 00:20:49.827 "params": { 00:20:49.827 "name": "Nvme$subsystem", 00:20:49.827 "trtype": "$TEST_TRANSPORT", 00:20:49.827 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:49.827 "adrfam": "ipv4", 00:20:49.827 "trsvcid": "$NVMF_PORT", 00:20:49.827 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:49.827 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:49.827 "hdgst": ${hdgst:-false}, 00:20:49.827 "ddgst": ${ddgst:-false} 00:20:49.827 }, 00:20:49.827 "method": "bdev_nvme_attach_controller" 00:20:49.827 } 00:20:49.827 EOF 00:20:49.827 )") 00:20:49.828 07:16:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:49.828 07:16:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:49.828 07:16:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:49.828 { 00:20:49.828 "params": { 00:20:49.828 "name": "Nvme$subsystem", 00:20:49.828 "trtype": "$TEST_TRANSPORT", 00:20:49.828 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:49.828 "adrfam": "ipv4", 00:20:49.828 "trsvcid": "$NVMF_PORT", 00:20:49.828 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:49.828 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:49.828 "hdgst": ${hdgst:-false}, 00:20:49.828 "ddgst": ${ddgst:-false} 00:20:49.828 }, 00:20:49.828 "method": "bdev_nvme_attach_controller" 00:20:49.828 } 00:20:49.828 EOF 00:20:49.828 )") 00:20:49.828 07:16:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:49.828 07:16:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:49.828 
07:16:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:49.828 { 00:20:49.828 "params": { 00:20:49.828 "name": "Nvme$subsystem", 00:20:49.828 "trtype": "$TEST_TRANSPORT", 00:20:49.828 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:49.828 "adrfam": "ipv4", 00:20:49.828 "trsvcid": "$NVMF_PORT", 00:20:49.828 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:49.828 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:49.828 "hdgst": ${hdgst:-false}, 00:20:49.828 "ddgst": ${ddgst:-false} 00:20:49.828 }, 00:20:49.828 "method": "bdev_nvme_attach_controller" 00:20:49.828 } 00:20:49.828 EOF 00:20:49.828 )") 00:20:49.828 07:16:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:49.828 07:16:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:49.828 07:16:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:49.828 { 00:20:49.828 "params": { 00:20:49.828 "name": "Nvme$subsystem", 00:20:49.828 "trtype": "$TEST_TRANSPORT", 00:20:49.828 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:49.828 "adrfam": "ipv4", 00:20:49.828 "trsvcid": "$NVMF_PORT", 00:20:49.828 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:49.828 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:49.828 "hdgst": ${hdgst:-false}, 00:20:49.828 "ddgst": ${ddgst:-false} 00:20:49.828 }, 00:20:49.828 "method": "bdev_nvme_attach_controller" 00:20:49.828 } 00:20:49.828 EOF 00:20:49.828 )") 00:20:49.828 07:16:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:49.828 07:16:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:49.828 07:16:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 
00:20:49.828 { 00:20:49.828 "params": { 00:20:49.828 "name": "Nvme$subsystem", 00:20:49.828 "trtype": "$TEST_TRANSPORT", 00:20:49.828 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:49.828 "adrfam": "ipv4", 00:20:49.828 "trsvcid": "$NVMF_PORT", 00:20:49.828 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:49.828 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:49.828 "hdgst": ${hdgst:-false}, 00:20:49.828 "ddgst": ${ddgst:-false} 00:20:49.828 }, 00:20:49.828 "method": "bdev_nvme_attach_controller" 00:20:49.828 } 00:20:49.828 EOF 00:20:49.828 )") 00:20:49.828 07:16:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:49.828 07:16:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:49.828 07:16:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:49.828 { 00:20:49.828 "params": { 00:20:49.828 "name": "Nvme$subsystem", 00:20:49.828 "trtype": "$TEST_TRANSPORT", 00:20:49.828 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:49.828 "adrfam": "ipv4", 00:20:49.828 "trsvcid": "$NVMF_PORT", 00:20:49.828 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:49.828 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:49.828 "hdgst": ${hdgst:-false}, 00:20:49.828 "ddgst": ${ddgst:-false} 00:20:49.828 }, 00:20:49.828 "method": "bdev_nvme_attach_controller" 00:20:49.828 } 00:20:49.828 EOF 00:20:49.828 )") 00:20:49.828 07:16:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:49.828 [2024-11-20 07:16:54.293971] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 
00:20:49.828 [2024-11-20 07:16:54.294021] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1247744 ] 00:20:49.828 07:16:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:49.828 07:16:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:49.828 { 00:20:49.828 "params": { 00:20:49.828 "name": "Nvme$subsystem", 00:20:49.828 "trtype": "$TEST_TRANSPORT", 00:20:49.828 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:49.828 "adrfam": "ipv4", 00:20:49.828 "trsvcid": "$NVMF_PORT", 00:20:49.828 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:49.828 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:49.828 "hdgst": ${hdgst:-false}, 00:20:49.828 "ddgst": ${ddgst:-false} 00:20:49.828 }, 00:20:49.828 "method": "bdev_nvme_attach_controller" 00:20:49.828 } 00:20:49.828 EOF 00:20:49.828 )") 00:20:49.828 07:16:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:49.828 07:16:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:49.828 07:16:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:49.828 { 00:20:49.828 "params": { 00:20:49.828 "name": "Nvme$subsystem", 00:20:49.828 "trtype": "$TEST_TRANSPORT", 00:20:49.828 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:49.828 "adrfam": "ipv4", 00:20:49.828 "trsvcid": "$NVMF_PORT", 00:20:49.828 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:49.828 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:49.828 "hdgst": ${hdgst:-false}, 00:20:49.828 "ddgst": ${ddgst:-false} 00:20:49.828 }, 00:20:49.828 "method": 
"bdev_nvme_attach_controller" 00:20:49.828 } 00:20:49.828 EOF 00:20:49.828 )") 00:20:49.828 07:16:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:49.828 07:16:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:49.828 07:16:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:49.828 { 00:20:49.828 "params": { 00:20:49.828 "name": "Nvme$subsystem", 00:20:49.828 "trtype": "$TEST_TRANSPORT", 00:20:49.828 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:49.828 "adrfam": "ipv4", 00:20:49.828 "trsvcid": "$NVMF_PORT", 00:20:49.828 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:49.828 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:49.828 "hdgst": ${hdgst:-false}, 00:20:49.828 "ddgst": ${ddgst:-false} 00:20:49.828 }, 00:20:49.828 "method": "bdev_nvme_attach_controller" 00:20:49.828 } 00:20:49.828 EOF 00:20:49.828 )") 00:20:49.828 07:16:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:49.828 07:16:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
00:20:49.828 07:16:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:20:49.828 07:16:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:20:49.828 "params": { 00:20:49.828 "name": "Nvme1", 00:20:49.828 "trtype": "tcp", 00:20:49.828 "traddr": "10.0.0.2", 00:20:49.828 "adrfam": "ipv4", 00:20:49.828 "trsvcid": "4420", 00:20:49.828 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:49.828 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:49.828 "hdgst": false, 00:20:49.828 "ddgst": false 00:20:49.828 }, 00:20:49.828 "method": "bdev_nvme_attach_controller" 00:20:49.828 },{ 00:20:49.829 "params": { 00:20:49.829 "name": "Nvme2", 00:20:49.829 "trtype": "tcp", 00:20:49.829 "traddr": "10.0.0.2", 00:20:49.829 "adrfam": "ipv4", 00:20:49.829 "trsvcid": "4420", 00:20:49.829 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:49.829 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:49.829 "hdgst": false, 00:20:49.829 "ddgst": false 00:20:49.829 }, 00:20:49.829 "method": "bdev_nvme_attach_controller" 00:20:49.829 },{ 00:20:49.829 "params": { 00:20:49.829 "name": "Nvme3", 00:20:49.829 "trtype": "tcp", 00:20:49.829 "traddr": "10.0.0.2", 00:20:49.829 "adrfam": "ipv4", 00:20:49.829 "trsvcid": "4420", 00:20:49.829 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:49.829 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:49.829 "hdgst": false, 00:20:49.829 "ddgst": false 00:20:49.829 }, 00:20:49.829 "method": "bdev_nvme_attach_controller" 00:20:49.829 },{ 00:20:49.829 "params": { 00:20:49.829 "name": "Nvme4", 00:20:49.829 "trtype": "tcp", 00:20:49.829 "traddr": "10.0.0.2", 00:20:49.829 "adrfam": "ipv4", 00:20:49.829 "trsvcid": "4420", 00:20:49.829 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:49.829 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:49.829 "hdgst": false, 00:20:49.829 "ddgst": false 00:20:49.829 }, 00:20:49.829 "method": "bdev_nvme_attach_controller" 00:20:49.829 },{ 00:20:49.829 "params": { 
00:20:49.829 "name": "Nvme5", 00:20:49.829 "trtype": "tcp", 00:20:49.829 "traddr": "10.0.0.2", 00:20:49.829 "adrfam": "ipv4", 00:20:49.829 "trsvcid": "4420", 00:20:49.829 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:49.829 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:49.829 "hdgst": false, 00:20:49.829 "ddgst": false 00:20:49.829 }, 00:20:49.829 "method": "bdev_nvme_attach_controller" 00:20:49.829 },{ 00:20:49.829 "params": { 00:20:49.829 "name": "Nvme6", 00:20:49.829 "trtype": "tcp", 00:20:49.829 "traddr": "10.0.0.2", 00:20:49.829 "adrfam": "ipv4", 00:20:49.829 "trsvcid": "4420", 00:20:49.829 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:49.829 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:49.829 "hdgst": false, 00:20:49.829 "ddgst": false 00:20:49.829 }, 00:20:49.829 "method": "bdev_nvme_attach_controller" 00:20:49.829 },{ 00:20:49.829 "params": { 00:20:49.829 "name": "Nvme7", 00:20:49.829 "trtype": "tcp", 00:20:49.829 "traddr": "10.0.0.2", 00:20:49.829 "adrfam": "ipv4", 00:20:49.829 "trsvcid": "4420", 00:20:49.829 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:49.829 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:49.829 "hdgst": false, 00:20:49.829 "ddgst": false 00:20:49.829 }, 00:20:49.829 "method": "bdev_nvme_attach_controller" 00:20:49.829 },{ 00:20:49.829 "params": { 00:20:49.829 "name": "Nvme8", 00:20:49.829 "trtype": "tcp", 00:20:49.829 "traddr": "10.0.0.2", 00:20:49.829 "adrfam": "ipv4", 00:20:49.829 "trsvcid": "4420", 00:20:49.829 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:49.829 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:20:49.829 "hdgst": false, 00:20:49.829 "ddgst": false 00:20:49.829 }, 00:20:49.829 "method": "bdev_nvme_attach_controller" 00:20:49.829 },{ 00:20:49.829 "params": { 00:20:49.829 "name": "Nvme9", 00:20:49.829 "trtype": "tcp", 00:20:49.829 "traddr": "10.0.0.2", 00:20:49.829 "adrfam": "ipv4", 00:20:49.829 "trsvcid": "4420", 00:20:49.829 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:49.829 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:20:49.829 "hdgst": false, 00:20:49.829 "ddgst": false 00:20:49.829 }, 00:20:49.829 "method": "bdev_nvme_attach_controller" 00:20:49.829 },{ 00:20:49.829 "params": { 00:20:49.829 "name": "Nvme10", 00:20:49.829 "trtype": "tcp", 00:20:49.829 "traddr": "10.0.0.2", 00:20:49.829 "adrfam": "ipv4", 00:20:49.829 "trsvcid": "4420", 00:20:49.829 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:49.829 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:49.829 "hdgst": false, 00:20:49.829 "ddgst": false 00:20:49.829 }, 00:20:49.829 "method": "bdev_nvme_attach_controller" 00:20:49.829 }' 00:20:49.829 [2024-11-20 07:16:54.372627] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:50.087 [2024-11-20 07:16:54.414501] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:51.459 Running I/O for 1 seconds... 00:20:52.419 2200.00 IOPS, 137.50 MiB/s 00:20:52.419 Latency(us) 00:20:52.419 [2024-11-20T06:16:56.975Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:52.419 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:52.419 Verification LBA range: start 0x0 length 0x400 00:20:52.419 Nvme1n1 : 1.07 239.64 14.98 0.00 0.00 264432.19 19831.76 248011.02 00:20:52.419 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:52.419 Verification LBA range: start 0x0 length 0x400 00:20:52.419 Nvme2n1 : 1.08 236.41 14.78 0.00 0.00 264094.27 17552.25 227039.50 00:20:52.419 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:52.419 Verification LBA range: start 0x0 length 0x400 00:20:52.419 Nvme3n1 : 1.15 278.21 17.39 0.00 0.00 221220.69 15956.59 217921.45 00:20:52.419 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:52.419 Verification LBA range: start 0x0 length 0x400 00:20:52.419 Nvme4n1 : 1.11 293.52 18.34 0.00 0.00 205440.63 5527.82 224304.08 00:20:52.419 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:20:52.419 Verification LBA range: start 0x0 length 0x400 00:20:52.419 Nvme5n1 : 1.14 280.31 17.52 0.00 0.00 213431.52 16982.37 223392.28 00:20:52.419 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:52.419 Verification LBA range: start 0x0 length 0x400 00:20:52.419 Nvme6n1 : 1.15 282.46 17.65 0.00 0.00 208532.70 28493.91 182361.04 00:20:52.419 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:52.419 Verification LBA range: start 0x0 length 0x400 00:20:52.419 Nvme7n1 : 1.14 283.62 17.73 0.00 0.00 203599.83 7038.00 217921.45 00:20:52.419 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:52.419 Verification LBA range: start 0x0 length 0x400 00:20:52.419 Nvme8n1 : 1.15 280.71 17.54 0.00 0.00 203756.12 1766.62 222480.47 00:20:52.419 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:52.419 Verification LBA range: start 0x0 length 0x400 00:20:52.419 Nvme9n1 : 1.16 276.03 17.25 0.00 0.00 204351.22 14075.99 220656.86 00:20:52.419 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:52.419 Verification LBA range: start 0x0 length 0x400 00:20:52.419 Nvme10n1 : 1.16 275.19 17.20 0.00 0.00 201964.68 11397.57 235245.75 00:20:52.419 [2024-11-20T06:16:56.975Z] =================================================================================================================== 00:20:52.419 [2024-11-20T06:16:56.975Z] Total : 2726.09 170.38 0.00 0.00 217130.79 1766.62 248011.02 00:20:52.678 07:16:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:20:52.678 07:16:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:20:52.678 07:16:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 
00:20:52.678 07:16:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:52.678 07:16:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini 00:20:52.678 07:16:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:52.678 07:16:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:20:52.678 07:16:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:52.678 07:16:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:20:52.678 07:16:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:52.678 07:16:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:52.678 rmmod nvme_tcp 00:20:52.678 rmmod nvme_fabrics 00:20:52.678 rmmod nvme_keyring 00:20:52.678 07:16:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:52.678 07:16:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:20:52.678 07:16:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:20:52.678 07:16:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@517 -- # '[' -n 1247015 ']' 00:20:52.678 07:16:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # killprocess 1247015 00:20:52.678 07:16:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # '[' -z 1247015 ']' 00:20:52.678 07:16:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@956 -- # kill -0 1247015 00:20:52.678 07:16:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@957 -- # uname 00:20:52.678 07:16:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:52.678 07:16:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1247015 00:20:52.937 07:16:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:20:52.937 07:16:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:20:52.937 07:16:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1247015' 00:20:52.937 killing process with pid 1247015 00:20:52.937 07:16:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@971 -- # kill 1247015 00:20:52.937 07:16:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@976 -- # wait 1247015 00:20:53.196 07:16:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:53.196 07:16:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:53.196 07:16:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:53.196 07:16:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:20:53.196 07:16:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-save 00:20:53.196 07:16:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:53.196 07:16:57 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-restore 00:20:53.196 07:16:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:53.196 07:16:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:53.196 07:16:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:53.196 07:16:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:53.196 07:16:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:55.732 07:16:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:55.732 00:20:55.732 real 0m15.202s 00:20:55.732 user 0m33.745s 00:20:55.732 sys 0m5.834s 00:20:55.732 07:16:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:55.732 07:16:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:55.732 ************************************ 00:20:55.732 END TEST nvmf_shutdown_tc1 00:20:55.732 ************************************ 00:20:55.732 07:16:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:20:55.732 07:16:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:20:55.732 07:16:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:55.732 07:16:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:20:55.732 ************************************ 00:20:55.732 
START TEST nvmf_shutdown_tc2 00:20:55.732 ************************************ 00:20:55.732 07:16:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1127 -- # nvmf_shutdown_tc2 00:20:55.732 07:16:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:20:55.732 07:16:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:20:55.732 07:16:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:55.732 07:16:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:55.732 07:16:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:55.732 07:16:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:55.732 07:16:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:55.732 07:16:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:55.732 07:16:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:55.732 07:16:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:55.732 07:16:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:55.732 07:16:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:55.732 07:16:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:20:55.732 07:16:59 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:55.732 07:16:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:55.732 07:16:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:20:55.732 07:16:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:55.732 07:16:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:55.732 07:16:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:55.732 07:16:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:55.733 07:16:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:55.733 07:16:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:20:55.733 07:16:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:55.733 07:16:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:20:55.733 07:16:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:20:55.733 07:16:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:20:55.733 07:16:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:20:55.733 07:16:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:20:55.733 07:16:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:20:55.733 07:16:59 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:55.733 07:16:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:55.733 07:16:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:55.733 07:16:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:55.733 07:16:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:55.733 07:16:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:55.733 07:16:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:55.733 07:16:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:55.733 07:16:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:55.733 07:16:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:55.733 07:16:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:55.733 07:16:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:55.733 07:16:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:55.733 07:16:59 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:55.733 07:16:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:55.733 07:16:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:55.733 07:16:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:55.733 07:16:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:55.733 07:16:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:55.733 07:16:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:55.733 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:55.733 07:16:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:55.733 07:16:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:55.733 07:16:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:55.733 07:16:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:55.733 07:16:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:55.733 07:16:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:55.733 07:16:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:55.733 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:55.733 07:16:59 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:55.733 07:16:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:55.733 07:16:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:55.733 07:16:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:55.733 07:16:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:55.733 07:16:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:55.733 07:16:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:55.733 07:16:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:55.733 07:16:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:55.733 07:16:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:55.733 07:16:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:55.733 07:16:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:55.733 07:16:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:55.733 07:16:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:55.733 07:16:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:55.733 07:16:59 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:55.733 Found net devices under 0000:86:00.0: cvl_0_0 00:20:55.733 07:16:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:55.733 07:16:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:55.733 07:16:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:55.733 07:16:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:55.733 07:16:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:55.733 07:16:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:55.733 07:16:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:55.733 07:16:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:55.733 07:16:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:55.733 Found net devices under 0000:86:00.1: cvl_0_1 00:20:55.733 07:16:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:55.733 07:16:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:55.733 07:16:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # is_hw=yes 00:20:55.733 07:16:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- 
# [[ yes == yes ]] 00:20:55.733 07:16:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:55.733 07:16:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:55.733 07:16:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:55.733 07:16:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:55.733 07:16:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:55.733 07:16:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:55.733 07:16:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:55.733 07:16:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:55.733 07:16:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:55.733 07:16:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:55.733 07:16:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:55.733 07:16:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:55.734 07:16:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:55.734 07:16:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:55.734 07:16:59 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:55.734 07:16:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:55.734 07:16:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:55.734 07:16:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:55.734 07:16:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:55.734 07:16:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:55.734 07:16:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:55.734 07:17:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:55.734 07:17:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:55.734 07:17:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:55.734 07:17:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:55.734 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:55.734 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.459 ms 00:20:55.734 00:20:55.734 --- 10.0.0.2 ping statistics --- 00:20:55.734 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:55.734 rtt min/avg/max/mdev = 0.459/0.459/0.459/0.000 ms 00:20:55.734 07:17:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:55.734 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:55.734 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.089 ms 00:20:55.734 00:20:55.734 --- 10.0.0.1 ping statistics --- 00:20:55.734 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:55.734 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:20:55.734 07:17:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:55.734 07:17:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # return 0 00:20:55.734 07:17:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:55.734 07:17:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:55.734 07:17:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:55.734 07:17:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:55.734 07:17:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:55.734 07:17:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:55.734 07:17:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:55.734 07:17:00 
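The `nvmf_tcp_init` sequence traced above (flush addresses, move the target NIC into a namespace, assign 10.0.0.1/10.0.0.2, bring links up, open TCP port 4420, verify with ping) can be summarized as the following dry-run sketch. It only prints the command plan rather than executing it, since the real steps need root; the interface names `cvl_0_0`/`cvl_0_1` and the addresses are taken from this log, not guaranteed for other runs.

```shell
#!/usr/bin/env bash
# Dry-run sketch of the nvmf_tcp_init steps seen in the trace.
# Prints the commands instead of running them (execution needs root).
plan_nvmf_tcp_init() {
    local target_if="$1" initiator_if="$2"
    local ns="${target_if}_ns_spdk"
    local cmds=(
        "ip -4 addr flush $target_if"
        "ip -4 addr flush $initiator_if"
        "ip netns add $ns"
        "ip link set $target_if netns $ns"
        "ip addr add 10.0.0.1/24 dev $initiator_if"
        "ip netns exec $ns ip addr add 10.0.0.2/24 dev $target_if"
        "ip link set $initiator_if up"
        "ip netns exec $ns ip link set $target_if up"
        "ip netns exec $ns ip link set lo up"
        "iptables -I INPUT 1 -i $initiator_if -p tcp --dport 4420 -j ACCEPT"
        "ping -c 1 10.0.0.2"
        "ip netns exec $ns ping -c 1 10.0.0.1"
    )
    printf '%s\n' "${cmds[@]}"
}

plan_nvmf_tcp_init cvl_0_0 cvl_0_1
```

The round-trip pings at the end are what the `0% packet loss` lines above confirm: the initiator side reaches the target inside the namespace and vice versa before any NVMe-oF traffic starts.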
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:20:55.734 07:17:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:55.734 07:17:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:55.734 07:17:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:55.734 07:17:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # nvmfpid=1248782 00:20:55.734 07:17:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:20:55.734 07:17:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # waitforlisten 1248782 00:20:55.734 07:17:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # '[' -z 1248782 ']' 00:20:55.734 07:17:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:55.734 07:17:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:55.734 07:17:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:55.734 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:20:55.734 07:17:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:55.734 07:17:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:55.734 [2024-11-20 07:17:00.123929] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 00:20:55.734 [2024-11-20 07:17:00.123982] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:55.734 [2024-11-20 07:17:00.206768] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:55.734 [2024-11-20 07:17:00.249589] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:55.734 [2024-11-20 07:17:00.249630] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:55.734 [2024-11-20 07:17:00.249637] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:55.734 [2024-11-20 07:17:00.249644] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:55.734 [2024-11-20 07:17:00.249649] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:55.734 [2024-11-20 07:17:00.251197] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:55.734 [2024-11-20 07:17:00.251214] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:55.734 [2024-11-20 07:17:00.255072] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:20:55.734 [2024-11-20 07:17:00.255072] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:56.669 07:17:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:56.669 07:17:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@866 -- # return 0 00:20:56.669 07:17:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:56.669 07:17:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:56.669 07:17:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:56.669 07:17:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:56.669 07:17:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:56.669 07:17:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:56.669 07:17:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:56.669 [2024-11-20 07:17:01.020085] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:56.669 07:17:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:56.669 07:17:01 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:20:56.669 07:17:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:20:56.669 07:17:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:56.669 07:17:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:56.669 07:17:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:56.669 07:17:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:56.669 07:17:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:56.669 07:17:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:56.669 07:17:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:56.669 07:17:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:56.669 07:17:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:56.669 07:17:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:56.669 07:17:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:56.669 07:17:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:56.669 07:17:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 
00:20:56.669 07:17:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:56.669 07:17:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:56.669 07:17:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:56.669 07:17:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:56.669 07:17:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:56.669 07:17:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:56.670 07:17:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:56.670 07:17:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:56.670 07:17:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:56.670 07:17:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:56.670 07:17:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:20:56.670 07:17:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:56.670 07:17:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:56.670 Malloc1 00:20:56.670 [2024-11-20 07:17:01.123680] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:56.670 Malloc2 00:20:56.670 Malloc3 00:20:56.928 Malloc4 00:20:56.928 Malloc5 00:20:56.928 Malloc6 00:20:56.928 Malloc7 00:20:56.928 Malloc8 00:20:56.928 Malloc9 
00:20:57.188 Malloc10 00:20:57.188 07:17:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.188 07:17:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:20:57.188 07:17:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:57.188 07:17:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:57.188 07:17:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=1249068 00:20:57.188 07:17:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 1249068 /var/tmp/bdevperf.sock 00:20:57.188 07:17:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # '[' -z 1249068 ']' 00:20:57.188 07:17:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:57.188 07:17:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:20:57.188 07:17:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:57.188 07:17:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:57.188 07:17:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:20:57.188 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:57.188 07:17:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # config=() 00:20:57.188 07:17:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:57.188 07:17:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # local subsystem config 00:20:57.188 07:17:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:57.188 07:17:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:57.188 07:17:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:57.188 { 00:20:57.188 "params": { 00:20:57.188 "name": "Nvme$subsystem", 00:20:57.188 "trtype": "$TEST_TRANSPORT", 00:20:57.188 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:57.188 "adrfam": "ipv4", 00:20:57.188 "trsvcid": "$NVMF_PORT", 00:20:57.188 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:57.188 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:57.188 "hdgst": ${hdgst:-false}, 00:20:57.188 "ddgst": ${ddgst:-false} 00:20:57.188 }, 00:20:57.188 "method": "bdev_nvme_attach_controller" 00:20:57.188 } 00:20:57.188 EOF 00:20:57.188 )") 00:20:57.188 07:17:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:57.188 07:17:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:57.188 07:17:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:57.188 { 00:20:57.188 "params": { 00:20:57.188 "name": "Nvme$subsystem", 00:20:57.188 "trtype": "$TEST_TRANSPORT", 00:20:57.188 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:57.188 
"adrfam": "ipv4", 00:20:57.188 "trsvcid": "$NVMF_PORT", 00:20:57.188 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:57.188 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:57.188 "hdgst": ${hdgst:-false}, 00:20:57.188 "ddgst": ${ddgst:-false} 00:20:57.188 }, 00:20:57.188 "method": "bdev_nvme_attach_controller" 00:20:57.188 } 00:20:57.188 EOF 00:20:57.188 )") 00:20:57.188 07:17:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:57.188 07:17:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:57.188 07:17:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:57.188 { 00:20:57.188 "params": { 00:20:57.188 "name": "Nvme$subsystem", 00:20:57.188 "trtype": "$TEST_TRANSPORT", 00:20:57.188 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:57.188 "adrfam": "ipv4", 00:20:57.188 "trsvcid": "$NVMF_PORT", 00:20:57.188 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:57.188 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:57.188 "hdgst": ${hdgst:-false}, 00:20:57.188 "ddgst": ${ddgst:-false} 00:20:57.188 }, 00:20:57.188 "method": "bdev_nvme_attach_controller" 00:20:57.188 } 00:20:57.188 EOF 00:20:57.188 )") 00:20:57.188 07:17:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:57.188 07:17:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:57.188 07:17:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:57.188 { 00:20:57.188 "params": { 00:20:57.188 "name": "Nvme$subsystem", 00:20:57.188 "trtype": "$TEST_TRANSPORT", 00:20:57.188 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:57.188 "adrfam": "ipv4", 00:20:57.188 "trsvcid": "$NVMF_PORT", 00:20:57.188 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 
00:20:57.188 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:57.188 "hdgst": ${hdgst:-false}, 00:20:57.188 "ddgst": ${ddgst:-false} 00:20:57.188 }, 00:20:57.188 "method": "bdev_nvme_attach_controller" 00:20:57.188 } 00:20:57.188 EOF 00:20:57.188 )") 00:20:57.188 07:17:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:57.188 07:17:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:57.188 07:17:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:57.188 { 00:20:57.188 "params": { 00:20:57.188 "name": "Nvme$subsystem", 00:20:57.188 "trtype": "$TEST_TRANSPORT", 00:20:57.188 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:57.188 "adrfam": "ipv4", 00:20:57.188 "trsvcid": "$NVMF_PORT", 00:20:57.188 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:57.188 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:57.188 "hdgst": ${hdgst:-false}, 00:20:57.188 "ddgst": ${ddgst:-false} 00:20:57.188 }, 00:20:57.188 "method": "bdev_nvme_attach_controller" 00:20:57.188 } 00:20:57.188 EOF 00:20:57.188 )") 00:20:57.188 07:17:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:57.188 07:17:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:57.188 07:17:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:57.188 { 00:20:57.188 "params": { 00:20:57.188 "name": "Nvme$subsystem", 00:20:57.188 "trtype": "$TEST_TRANSPORT", 00:20:57.188 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:57.188 "adrfam": "ipv4", 00:20:57.188 "trsvcid": "$NVMF_PORT", 00:20:57.188 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:57.188 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:57.188 "hdgst": ${hdgst:-false}, 00:20:57.188 "ddgst": 
${ddgst:-false} 00:20:57.188 }, 00:20:57.189 "method": "bdev_nvme_attach_controller" 00:20:57.189 } 00:20:57.189 EOF 00:20:57.189 )") 00:20:57.189 07:17:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:57.189 07:17:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:57.189 07:17:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:57.189 { 00:20:57.189 "params": { 00:20:57.189 "name": "Nvme$subsystem", 00:20:57.189 "trtype": "$TEST_TRANSPORT", 00:20:57.189 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:57.189 "adrfam": "ipv4", 00:20:57.189 "trsvcid": "$NVMF_PORT", 00:20:57.189 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:57.189 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:57.189 "hdgst": ${hdgst:-false}, 00:20:57.189 "ddgst": ${ddgst:-false} 00:20:57.189 }, 00:20:57.189 "method": "bdev_nvme_attach_controller" 00:20:57.189 } 00:20:57.189 EOF 00:20:57.189 )") 00:20:57.189 07:17:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:57.189 07:17:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:57.189 07:17:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:57.189 { 00:20:57.189 "params": { 00:20:57.189 "name": "Nvme$subsystem", 00:20:57.189 "trtype": "$TEST_TRANSPORT", 00:20:57.189 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:57.189 "adrfam": "ipv4", 00:20:57.189 "trsvcid": "$NVMF_PORT", 00:20:57.189 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:57.189 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:57.189 "hdgst": ${hdgst:-false}, 00:20:57.189 "ddgst": ${ddgst:-false} 00:20:57.189 }, 00:20:57.189 "method": "bdev_nvme_attach_controller" 00:20:57.189 } 00:20:57.189 EOF 00:20:57.189 
)") 00:20:57.189 [2024-11-20 07:17:01.596936] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 00:20:57.189 [2024-11-20 07:17:01.596992] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1249068 ] 00:20:57.189 07:17:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:57.189 07:17:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:57.189 07:17:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:57.189 { 00:20:57.189 "params": { 00:20:57.189 "name": "Nvme$subsystem", 00:20:57.189 "trtype": "$TEST_TRANSPORT", 00:20:57.189 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:57.189 "adrfam": "ipv4", 00:20:57.189 "trsvcid": "$NVMF_PORT", 00:20:57.189 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:57.189 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:57.189 "hdgst": ${hdgst:-false}, 00:20:57.189 "ddgst": ${ddgst:-false} 00:20:57.189 }, 00:20:57.189 "method": "bdev_nvme_attach_controller" 00:20:57.189 } 00:20:57.189 EOF 00:20:57.189 )") 00:20:57.189 07:17:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:57.189 07:17:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:57.189 07:17:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:57.189 { 00:20:57.189 "params": { 00:20:57.189 "name": "Nvme$subsystem", 00:20:57.189 "trtype": "$TEST_TRANSPORT", 00:20:57.189 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:57.189 "adrfam": "ipv4", 00:20:57.189 "trsvcid": "$NVMF_PORT", 00:20:57.189 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:20:57.189 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:57.189 "hdgst": ${hdgst:-false}, 00:20:57.189 "ddgst": ${ddgst:-false} 00:20:57.189 }, 00:20:57.189 "method": "bdev_nvme_attach_controller" 00:20:57.189 } 00:20:57.189 EOF 00:20:57.189 )") 00:20:57.189 07:17:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:57.189 07:17:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # jq . 00:20:57.189 07:17:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@585 -- # IFS=, 00:20:57.189 07:17:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:20:57.189 "params": { 00:20:57.189 "name": "Nvme1", 00:20:57.189 "trtype": "tcp", 00:20:57.189 "traddr": "10.0.0.2", 00:20:57.189 "adrfam": "ipv4", 00:20:57.189 "trsvcid": "4420", 00:20:57.189 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:57.189 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:57.189 "hdgst": false, 00:20:57.189 "ddgst": false 00:20:57.189 }, 00:20:57.189 "method": "bdev_nvme_attach_controller" 00:20:57.189 },{ 00:20:57.189 "params": { 00:20:57.189 "name": "Nvme2", 00:20:57.189 "trtype": "tcp", 00:20:57.189 "traddr": "10.0.0.2", 00:20:57.189 "adrfam": "ipv4", 00:20:57.189 "trsvcid": "4420", 00:20:57.189 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:57.189 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:57.189 "hdgst": false, 00:20:57.189 "ddgst": false 00:20:57.189 }, 00:20:57.189 "method": "bdev_nvme_attach_controller" 00:20:57.189 },{ 00:20:57.189 "params": { 00:20:57.189 "name": "Nvme3", 00:20:57.189 "trtype": "tcp", 00:20:57.189 "traddr": "10.0.0.2", 00:20:57.189 "adrfam": "ipv4", 00:20:57.189 "trsvcid": "4420", 00:20:57.189 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:57.189 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:57.189 "hdgst": false, 00:20:57.189 "ddgst": false 00:20:57.189 }, 00:20:57.189 
"method": "bdev_nvme_attach_controller" 00:20:57.189 },{ 00:20:57.189 "params": { 00:20:57.189 "name": "Nvme4", 00:20:57.189 "trtype": "tcp", 00:20:57.189 "traddr": "10.0.0.2", 00:20:57.189 "adrfam": "ipv4", 00:20:57.189 "trsvcid": "4420", 00:20:57.189 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:57.189 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:57.189 "hdgst": false, 00:20:57.189 "ddgst": false 00:20:57.189 }, 00:20:57.189 "method": "bdev_nvme_attach_controller" 00:20:57.189 },{ 00:20:57.189 "params": { 00:20:57.189 "name": "Nvme5", 00:20:57.189 "trtype": "tcp", 00:20:57.189 "traddr": "10.0.0.2", 00:20:57.189 "adrfam": "ipv4", 00:20:57.189 "trsvcid": "4420", 00:20:57.189 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:57.189 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:57.189 "hdgst": false, 00:20:57.189 "ddgst": false 00:20:57.189 }, 00:20:57.189 "method": "bdev_nvme_attach_controller" 00:20:57.189 },{ 00:20:57.189 "params": { 00:20:57.189 "name": "Nvme6", 00:20:57.189 "trtype": "tcp", 00:20:57.189 "traddr": "10.0.0.2", 00:20:57.189 "adrfam": "ipv4", 00:20:57.189 "trsvcid": "4420", 00:20:57.189 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:57.189 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:57.189 "hdgst": false, 00:20:57.189 "ddgst": false 00:20:57.189 }, 00:20:57.189 "method": "bdev_nvme_attach_controller" 00:20:57.189 },{ 00:20:57.189 "params": { 00:20:57.189 "name": "Nvme7", 00:20:57.189 "trtype": "tcp", 00:20:57.189 "traddr": "10.0.0.2", 00:20:57.189 "adrfam": "ipv4", 00:20:57.189 "trsvcid": "4420", 00:20:57.189 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:57.189 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:57.189 "hdgst": false, 00:20:57.189 "ddgst": false 00:20:57.189 }, 00:20:57.189 "method": "bdev_nvme_attach_controller" 00:20:57.189 },{ 00:20:57.189 "params": { 00:20:57.189 "name": "Nvme8", 00:20:57.189 "trtype": "tcp", 00:20:57.189 "traddr": "10.0.0.2", 00:20:57.189 "adrfam": "ipv4", 00:20:57.189 "trsvcid": "4420", 00:20:57.189 "subnqn": 
"nqn.2016-06.io.spdk:cnode8", 00:20:57.189 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:20:57.189 "hdgst": false, 00:20:57.189 "ddgst": false 00:20:57.189 }, 00:20:57.189 "method": "bdev_nvme_attach_controller" 00:20:57.189 },{ 00:20:57.189 "params": { 00:20:57.189 "name": "Nvme9", 00:20:57.189 "trtype": "tcp", 00:20:57.189 "traddr": "10.0.0.2", 00:20:57.189 "adrfam": "ipv4", 00:20:57.190 "trsvcid": "4420", 00:20:57.190 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:57.190 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:20:57.190 "hdgst": false, 00:20:57.190 "ddgst": false 00:20:57.190 }, 00:20:57.190 "method": "bdev_nvme_attach_controller" 00:20:57.190 },{ 00:20:57.190 "params": { 00:20:57.190 "name": "Nvme10", 00:20:57.190 "trtype": "tcp", 00:20:57.190 "traddr": "10.0.0.2", 00:20:57.190 "adrfam": "ipv4", 00:20:57.190 "trsvcid": "4420", 00:20:57.190 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:57.190 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:57.190 "hdgst": false, 00:20:57.190 "ddgst": false 00:20:57.190 }, 00:20:57.190 "method": "bdev_nvme_attach_controller" 00:20:57.190 }' 00:20:57.190 [2024-11-20 07:17:01.674003] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:57.190 [2024-11-20 07:17:01.715589] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:59.090 Running I/O for 10 seconds... 
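The repeated `config+=("$(cat <<-EOF ...)")` fragments and the final `IFS=,` / `printf '%s\n'` join traced above assemble one `bdev_nvme_attach_controller` parameter object per subsystem and feed the comma-joined result to bdevperf via `--json /dev/fd/63`. A simplified, self-contained sketch of that assembly (using `printf` in place of the script's heredocs, with the IP/port values hard-coded from this log rather than read from the environment):

```shell
#!/usr/bin/env bash
# Sketch of gen_nvmf_target_json: build one attach-controller JSON
# fragment per subsystem id, then comma-join them like the real script.
gen_nvmf_target_json() {
    local subsystem config=()
    # Values mirror this log; the real script reads them from env vars.
    local ip=10.0.0.2 port=4420 trtype=tcp
    for subsystem in "${@:-1}"; do
        config+=("$(printf '{ "params": { "name": "Nvme%s", "trtype": "%s", "traddr": "%s", "adrfam": "ipv4", "trsvcid": "%s", "subnqn": "nqn.2016-06.io.spdk:cnode%s", "hostnqn": "nqn.2016-06.io.spdk:host%s", "hdgst": false, "ddgst": false }, "method": "bdev_nvme_attach_controller" }' \
            "$subsystem" "$trtype" "$ip" "$port" "$subsystem" "$subsystem")")
    done
    local IFS=,   # join array elements with commas, as in nvmf/common.sh
    printf '%s\n' "${config[*]}"
}

gen_nvmf_target_json 1 2 3
```

With arguments `1 2 3` this emits three objects separated by `},{`, matching the shape of the ten-controller config printed in the trace for `gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10`.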
00:20:59.090 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:59.090 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@866 -- # return 0 00:20:59.090 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:20:59.090 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.090 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:59.090 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.090 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:20:59.090 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:20:59.090 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:20:59.090 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:20:59.090 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:20:59.090 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:20:59.090 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:20:59.090 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:59.090 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:20:59.090 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.090 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:59.090 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.090 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=67 00:20:59.090 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:20:59.090 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:20:59.349 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:20:59.349 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:20:59.349 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:59.349 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:20:59.349 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.349 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:59.608 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.608 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=131 00:20:59.608 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:20:59.608 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:20:59.608 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:20:59.608 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:20:59.608 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 1249068 00:20:59.608 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # '[' -z 1249068 ']' 00:20:59.608 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # kill -0 1249068 00:20:59.608 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@957 -- # uname 00:20:59.608 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:59.608 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1249068 00:20:59.608 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:20:59.608 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:20:59.608 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1249068' 00:20:59.608 killing process with pid 1249068 00:20:59.608 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@971 -- # kill 1249068 00:20:59.608 07:17:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@976 -- # wait 1249068 00:20:59.608 
Received shutdown signal, test time was about 0.804051 seconds 00:20:59.608 00:20:59.608 Latency(us) 00:20:59.608 [2024-11-20T06:17:04.164Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:59.608 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:59.608 Verification LBA range: start 0x0 length 0x400 00:20:59.608 Nvme1n1 : 0.80 318.65 19.92 0.00 0.00 197817.43 14588.88 223392.28 00:20:59.608 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:59.608 Verification LBA range: start 0x0 length 0x400 00:20:59.608 Nvme2n1 : 0.79 248.42 15.53 0.00 0.00 248612.86 1780.87 228863.11 00:20:59.608 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:59.608 Verification LBA range: start 0x0 length 0x400 00:20:59.608 Nvme3n1 : 0.80 320.98 20.06 0.00 0.00 188418.67 17438.27 213362.42 00:20:59.608 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:59.608 Verification LBA range: start 0x0 length 0x400 00:20:59.608 Nvme4n1 : 0.80 319.61 19.98 0.00 0.00 185715.09 15500.69 224304.08 00:20:59.608 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:59.608 Verification LBA range: start 0x0 length 0x400 00:20:59.608 Nvme5n1 : 0.78 246.36 15.40 0.00 0.00 235130.29 15842.62 223392.28 00:20:59.608 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:59.608 Verification LBA range: start 0x0 length 0x400 00:20:59.608 Nvme6n1 : 0.77 254.53 15.91 0.00 0.00 220612.64 2094.30 200597.15 00:20:59.608 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:59.608 Verification LBA range: start 0x0 length 0x400 00:20:59.608 Nvme7n1 : 0.76 251.30 15.71 0.00 0.00 219502.27 33964.74 199685.34 00:20:59.608 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:59.608 Verification LBA range: start 0x0 length 0x400 00:20:59.608 Nvme8n1 : 0.78 255.03 15.94 0.00 0.00 
210275.72 5014.93 205156.17 00:20:59.608 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:59.608 Verification LBA range: start 0x0 length 0x400 00:20:59.608 Nvme9n1 : 0.79 241.54 15.10 0.00 0.00 219197.14 18805.98 253481.85 00:20:59.608 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:59.608 Verification LBA range: start 0x0 length 0x400 00:20:59.608 Nvme10n1 : 0.79 242.62 15.16 0.00 0.00 212793.14 29405.72 235245.75 00:20:59.608 [2024-11-20T06:17:04.164Z] =================================================================================================================== 00:20:59.608 [2024-11-20T06:17:04.164Z] Total : 2699.03 168.69 0.00 0.00 211788.67 1780.87 253481.85 00:20:59.868 07:17:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1 00:21:00.803 07:17:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 1248782 00:21:00.803 07:17:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget 00:21:00.803 07:17:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:21:00.803 07:17:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:21:00.803 07:17:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:00.803 07:17:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:21:00.803 07:17:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:00.803 07:17:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- 
# sync 00:21:00.803 07:17:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:00.803 07:17:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:21:00.803 07:17:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:00.803 07:17:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:00.803 rmmod nvme_tcp 00:21:00.803 rmmod nvme_fabrics 00:21:00.803 rmmod nvme_keyring 00:21:00.803 07:17:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:00.803 07:17:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:21:00.803 07:17:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:21:00.803 07:17:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@517 -- # '[' -n 1248782 ']' 00:21:00.803 07:17:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # killprocess 1248782 00:21:00.803 07:17:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # '[' -z 1248782 ']' 00:21:00.803 07:17:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # kill -0 1248782 00:21:00.803 07:17:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@957 -- # uname 00:21:00.803 07:17:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:00.803 07:17:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1248782 00:21:00.803 07:17:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@958 -- # process_name=reactor_1 00:21:00.804 07:17:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:21:00.804 07:17:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1248782' 00:21:00.804 killing process with pid 1248782 00:21:00.804 07:17:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@971 -- # kill 1248782 00:21:00.804 07:17:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@976 -- # wait 1248782 00:21:01.372 07:17:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:01.372 07:17:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:01.372 07:17:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:01.372 07:17:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:21:01.372 07:17:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-save 00:21:01.372 07:17:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:01.372 07:17:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-restore 00:21:01.372 07:17:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:01.372 07:17:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:01.372 07:17:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:01.372 07:17:05 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:01.372 07:17:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:03.281 07:17:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:03.281 00:21:03.281 real 0m8.012s 00:21:03.281 user 0m24.509s 00:21:03.281 sys 0m1.322s 00:21:03.281 07:17:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:03.281 07:17:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:03.281 ************************************ 00:21:03.281 END TEST nvmf_shutdown_tc2 00:21:03.281 ************************************ 00:21:03.281 07:17:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:21:03.281 07:17:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:21:03.281 07:17:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1109 -- # xtrace_disable 00:21:03.281 07:17:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:03.541 ************************************ 00:21:03.541 START TEST nvmf_shutdown_tc3 00:21:03.541 ************************************ 00:21:03.541 07:17:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1127 -- # nvmf_shutdown_tc3 00:21:03.541 07:17:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:21:03.541 07:17:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:21:03.541 07:17:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:03.541 07:17:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:03.541 07:17:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:03.542 07:17:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:03.542 07:17:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:03.542 07:17:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:03.542 07:17:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:03.542 07:17:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:03.542 07:17:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:03.542 07:17:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:03.542 07:17:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:21:03.542 07:17:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:03.542 07:17:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:03.542 07:17:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:21:03.542 07:17:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:03.542 07:17:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@316 -- # pci_net_devs=() 00:21:03.542 07:17:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:03.542 07:17:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:03.542 07:17:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:03.542 07:17:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:21:03.542 07:17:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:03.542 07:17:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:21:03.542 07:17:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:21:03.542 07:17:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:21:03.542 07:17:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:21:03.542 07:17:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:21:03.542 07:17:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:21:03.542 07:17:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:03.542 07:17:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:03.542 07:17:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:03.542 07:17:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:03.542 
07:17:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:03.542 07:17:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:03.542 07:17:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:03.542 07:17:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:03.542 07:17:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:03.542 07:17:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:03.542 07:17:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:03.542 07:17:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:03.542 07:17:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:03.542 07:17:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:03.542 07:17:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:03.542 07:17:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:03.542 07:17:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:03.542 07:17:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:03.542 07:17:07 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:03.542 07:17:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:03.542 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:03.542 07:17:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:03.542 07:17:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:03.542 07:17:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:03.542 07:17:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:03.542 07:17:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:03.542 07:17:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:03.542 07:17:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:03.542 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:03.542 07:17:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:03.542 07:17:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:03.542 07:17:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:03.542 07:17:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:03.542 07:17:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:03.542 07:17:07 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:03.542 07:17:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:03.542 07:17:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:03.542 07:17:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:03.542 07:17:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:03.542 07:17:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:03.542 07:17:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:03.542 07:17:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:03.542 07:17:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:03.542 07:17:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:03.542 07:17:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:03.542 Found net devices under 0000:86:00.0: cvl_0_0 00:21:03.542 07:17:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:03.542 07:17:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:03.542 07:17:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:03.542 07:17:07 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:03.542 07:17:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:03.542 07:17:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:03.542 07:17:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:03.542 07:17:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:03.542 07:17:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:03.542 Found net devices under 0000:86:00.1: cvl_0_1 00:21:03.543 07:17:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:03.543 07:17:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:03.543 07:17:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # is_hw=yes 00:21:03.543 07:17:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:03.543 07:17:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:03.543 07:17:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:03.543 07:17:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:03.543 07:17:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:03.543 07:17:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:21:03.543 07:17:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:03.543 07:17:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:03.543 07:17:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:03.543 07:17:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:03.543 07:17:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:03.543 07:17:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:03.543 07:17:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:03.543 07:17:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:03.543 07:17:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:03.543 07:17:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:03.543 07:17:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:03.543 07:17:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:03.543 07:17:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:03.543 07:17:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 
10.0.0.2/24 dev cvl_0_0 00:21:03.543 07:17:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:03.543 07:17:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:03.803 07:17:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:03.803 07:17:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:03.803 07:17:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:03.803 07:17:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:03.803 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:03.803 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.380 ms 00:21:03.803 00:21:03.803 --- 10.0.0.2 ping statistics --- 00:21:03.803 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:03.803 rtt min/avg/max/mdev = 0.380/0.380/0.380/0.000 ms 00:21:03.803 07:17:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:03.803 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:03.803 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.166 ms 00:21:03.803 00:21:03.803 --- 10.0.0.1 ping statistics --- 00:21:03.803 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:03.803 rtt min/avg/max/mdev = 0.166/0.166/0.166/0.000 ms 00:21:03.803 07:17:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:03.803 07:17:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # return 0 00:21:03.803 07:17:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:03.803 07:17:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:03.803 07:17:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:03.803 07:17:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:03.803 07:17:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:03.803 07:17:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:03.803 07:17:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:03.803 07:17:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:21:03.803 07:17:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:03.803 07:17:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:03.803 07:17:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:03.803 
07:17:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # nvmfpid=1250328 00:21:03.803 07:17:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # waitforlisten 1250328 00:21:03.803 07:17:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:21:03.803 07:17:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # '[' -z 1250328 ']' 00:21:03.803 07:17:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:03.803 07:17:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:03.803 07:17:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:03.803 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:03.803 07:17:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:03.803 07:17:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:03.803 [2024-11-20 07:17:08.225342] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 
00:21:03.803 [2024-11-20 07:17:08.225386] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:03.803 [2024-11-20 07:17:08.304911] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:03.803 [2024-11-20 07:17:08.348021] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:03.803 [2024-11-20 07:17:08.348059] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:03.803 [2024-11-20 07:17:08.348066] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:03.803 [2024-11-20 07:17:08.348072] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:03.803 [2024-11-20 07:17:08.348078] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:03.803 [2024-11-20 07:17:08.349685] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:03.803 [2024-11-20 07:17:08.349792] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:03.803 [2024-11-20 07:17:08.349896] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:03.803 [2024-11-20 07:17:08.349897] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:21:04.739 07:17:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:04.739 07:17:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@866 -- # return 0 00:21:04.739 07:17:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:04.739 07:17:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:04.739 07:17:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:04.739 07:17:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:04.739 07:17:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:04.739 07:17:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.739 07:17:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:04.739 [2024-11-20 07:17:09.106602] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:04.739 07:17:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.739 07:17:09 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:21:04.739 07:17:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:21:04.739 07:17:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:04.739 07:17:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:04.739 07:17:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:04.739 07:17:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:04.739 07:17:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:04.739 07:17:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:04.739 07:17:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:04.739 07:17:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:04.739 07:17:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:04.739 07:17:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:04.739 07:17:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:04.739 07:17:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:04.739 07:17:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 
00:21:04.739 07:17:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:04.739 07:17:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:04.739 07:17:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:04.739 07:17:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:04.739 07:17:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:04.739 07:17:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:04.739 07:17:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:04.740 07:17:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:04.740 07:17:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:04.740 07:17:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:04.740 07:17:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:21:04.740 07:17:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.740 07:17:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:04.740 Malloc1 00:21:04.740 [2024-11-20 07:17:09.222965] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:04.740 Malloc2 00:21:04.740 Malloc3 00:21:04.997 Malloc4 00:21:04.997 Malloc5 00:21:04.997 Malloc6 00:21:04.997 Malloc7 00:21:04.998 Malloc8 00:21:05.257 Malloc9 
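The create_subsystems loop traced above (shutdown.sh@28-29) appends one block of RPC commands per subsystem (1..10) to `rpcs.txt`. The xtrace only shows `cat`, so the RPC line below is a hypothetical stand-in for whatever the heredoc actually emits; the loop shape itself matches the trace:

```shell
# One appended block per subsystem; the command text is illustrative only.
num_subsystems=({1..10})
rpcs=$(mktemp)
for i in "${num_subsystems[@]}"; do
    echo "nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i" >> "$rpcs"
done
echo "generated $(( $(wc -l < "$rpcs") )) RPC lines"
```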
00:21:05.257 Malloc10 00:21:05.257 07:17:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.257 07:17:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:21:05.257 07:17:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:05.257 07:17:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:05.257 07:17:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=1250607 00:21:05.257 07:17:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 1250607 /var/tmp/bdevperf.sock 00:21:05.257 07:17:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # '[' -z 1250607 ']' 00:21:05.258 07:17:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:05.258 07:17:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:21:05.258 07:17:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:05.258 07:17:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:05.258 07:17:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:21:05.258 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:05.258 07:17:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # config=() 00:21:05.258 07:17:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:05.258 07:17:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # local subsystem config 00:21:05.258 07:17:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:05.258 07:17:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:05.258 07:17:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:05.258 { 00:21:05.258 "params": { 00:21:05.258 "name": "Nvme$subsystem", 00:21:05.258 "trtype": "$TEST_TRANSPORT", 00:21:05.258 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:05.258 "adrfam": "ipv4", 00:21:05.258 "trsvcid": "$NVMF_PORT", 00:21:05.258 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:05.258 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:05.258 "hdgst": ${hdgst:-false}, 00:21:05.258 "ddgst": ${ddgst:-false} 00:21:05.258 }, 00:21:05.258 "method": "bdev_nvme_attach_controller" 00:21:05.258 } 00:21:05.258 EOF 00:21:05.258 )") 00:21:05.258 07:17:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:05.258 07:17:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:05.258 07:17:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:05.258 { 00:21:05.258 "params": { 00:21:05.258 "name": "Nvme$subsystem", 00:21:05.258 "trtype": "$TEST_TRANSPORT", 00:21:05.258 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:05.258 
"adrfam": "ipv4", 00:21:05.258 "trsvcid": "$NVMF_PORT", 00:21:05.258 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:05.258 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:05.258 "hdgst": ${hdgst:-false}, 00:21:05.258 "ddgst": ${ddgst:-false} 00:21:05.258 }, 00:21:05.258 "method": "bdev_nvme_attach_controller" 00:21:05.258 } 00:21:05.258 EOF 00:21:05.258 )") 00:21:05.258 07:17:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:05.258 07:17:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:05.258 07:17:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:05.258 { 00:21:05.258 "params": { 00:21:05.258 "name": "Nvme$subsystem", 00:21:05.258 "trtype": "$TEST_TRANSPORT", 00:21:05.258 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:05.258 "adrfam": "ipv4", 00:21:05.258 "trsvcid": "$NVMF_PORT", 00:21:05.258 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:05.258 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:05.258 "hdgst": ${hdgst:-false}, 00:21:05.258 "ddgst": ${ddgst:-false} 00:21:05.258 }, 00:21:05.258 "method": "bdev_nvme_attach_controller" 00:21:05.258 } 00:21:05.258 EOF 00:21:05.258 )") 00:21:05.258 07:17:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:05.258 07:17:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:05.258 07:17:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:05.258 { 00:21:05.258 "params": { 00:21:05.258 "name": "Nvme$subsystem", 00:21:05.258 "trtype": "$TEST_TRANSPORT", 00:21:05.258 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:05.258 "adrfam": "ipv4", 00:21:05.258 "trsvcid": "$NVMF_PORT", 00:21:05.258 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 
00:21:05.258 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:05.258 "hdgst": ${hdgst:-false}, 00:21:05.258 "ddgst": ${ddgst:-false} 00:21:05.258 }, 00:21:05.258 "method": "bdev_nvme_attach_controller" 00:21:05.258 } 00:21:05.258 EOF 00:21:05.258 )") 00:21:05.258 07:17:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:05.258 07:17:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:05.258 07:17:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:05.258 { 00:21:05.258 "params": { 00:21:05.258 "name": "Nvme$subsystem", 00:21:05.258 "trtype": "$TEST_TRANSPORT", 00:21:05.258 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:05.258 "adrfam": "ipv4", 00:21:05.258 "trsvcid": "$NVMF_PORT", 00:21:05.258 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:05.258 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:05.258 "hdgst": ${hdgst:-false}, 00:21:05.258 "ddgst": ${ddgst:-false} 00:21:05.258 }, 00:21:05.258 "method": "bdev_nvme_attach_controller" 00:21:05.258 } 00:21:05.258 EOF 00:21:05.258 )") 00:21:05.258 07:17:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:05.258 07:17:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:05.258 07:17:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:05.258 { 00:21:05.258 "params": { 00:21:05.258 "name": "Nvme$subsystem", 00:21:05.258 "trtype": "$TEST_TRANSPORT", 00:21:05.258 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:05.258 "adrfam": "ipv4", 00:21:05.258 "trsvcid": "$NVMF_PORT", 00:21:05.258 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:05.258 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:05.258 "hdgst": ${hdgst:-false}, 00:21:05.258 "ddgst": 
${ddgst:-false} 00:21:05.258 }, 00:21:05.258 "method": "bdev_nvme_attach_controller" 00:21:05.258 } 00:21:05.258 EOF 00:21:05.258 )") 00:21:05.258 07:17:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:05.258 07:17:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:05.258 07:17:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:05.258 { 00:21:05.258 "params": { 00:21:05.258 "name": "Nvme$subsystem", 00:21:05.258 "trtype": "$TEST_TRANSPORT", 00:21:05.258 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:05.258 "adrfam": "ipv4", 00:21:05.258 "trsvcid": "$NVMF_PORT", 00:21:05.258 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:05.258 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:05.258 "hdgst": ${hdgst:-false}, 00:21:05.258 "ddgst": ${ddgst:-false} 00:21:05.258 }, 00:21:05.258 "method": "bdev_nvme_attach_controller" 00:21:05.258 } 00:21:05.258 EOF 00:21:05.258 )") 00:21:05.258 07:17:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:05.258 [2024-11-20 07:17:09.701004] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 
00:21:05.258 [2024-11-20 07:17:09.701054] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1250607 ] 00:21:05.258 07:17:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:05.258 07:17:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:05.258 { 00:21:05.258 "params": { 00:21:05.258 "name": "Nvme$subsystem", 00:21:05.258 "trtype": "$TEST_TRANSPORT", 00:21:05.258 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:05.258 "adrfam": "ipv4", 00:21:05.258 "trsvcid": "$NVMF_PORT", 00:21:05.258 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:05.259 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:05.259 "hdgst": ${hdgst:-false}, 00:21:05.259 "ddgst": ${ddgst:-false} 00:21:05.259 }, 00:21:05.259 "method": "bdev_nvme_attach_controller" 00:21:05.259 } 00:21:05.259 EOF 00:21:05.259 )") 00:21:05.259 07:17:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:05.259 07:17:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:05.259 07:17:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:05.259 { 00:21:05.259 "params": { 00:21:05.259 "name": "Nvme$subsystem", 00:21:05.259 "trtype": "$TEST_TRANSPORT", 00:21:05.259 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:05.259 "adrfam": "ipv4", 00:21:05.259 "trsvcid": "$NVMF_PORT", 00:21:05.259 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:05.259 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:05.259 "hdgst": ${hdgst:-false}, 00:21:05.259 "ddgst": ${ddgst:-false} 00:21:05.259 }, 00:21:05.259 "method": 
"bdev_nvme_attach_controller" 00:21:05.259 } 00:21:05.259 EOF 00:21:05.259 )") 00:21:05.259 07:17:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:05.259 07:17:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:05.259 07:17:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:05.259 { 00:21:05.259 "params": { 00:21:05.259 "name": "Nvme$subsystem", 00:21:05.259 "trtype": "$TEST_TRANSPORT", 00:21:05.259 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:05.259 "adrfam": "ipv4", 00:21:05.259 "trsvcid": "$NVMF_PORT", 00:21:05.259 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:05.259 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:05.259 "hdgst": ${hdgst:-false}, 00:21:05.259 "ddgst": ${ddgst:-false} 00:21:05.259 }, 00:21:05.259 "method": "bdev_nvme_attach_controller" 00:21:05.259 } 00:21:05.259 EOF 00:21:05.259 )") 00:21:05.259 07:17:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:05.259 07:17:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # jq . 
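The `gen_nvmf_target_json` trace above (nvmf/common.sh@560-586) builds one heredoc JSON fragment per subsystem in a `config` array, then joins the fragments with `IFS=,` before handing the result to bdevperf via `--json`. A standalone sketch of that join, with hard-coded addresses standing in for the `$NVMF_*` variables the real helper expands:

```shell
# Push one JSON fragment per subsystem onto an array, then comma-join them.
config=()
for subsystem in 1 2 3; do
    config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "traddr": "10.0.0.2",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem"
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
done
# "${config[*]}" expands with the first character of IFS between elements,
# which is how the helper gets the ",{" separators visible in the output above.
joined=$(IFS=,; printf '%s' "${config[*]}")
echo "fragments: ${#config[@]}"
```

The subshell around the `printf` keeps the `IFS=,` assignment from leaking into the rest of the script.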
00:21:05.259 07:17:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@585 -- # IFS=, 00:21:05.259 07:17:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:21:05.259 "params": { 00:21:05.259 "name": "Nvme1", 00:21:05.259 "trtype": "tcp", 00:21:05.259 "traddr": "10.0.0.2", 00:21:05.259 "adrfam": "ipv4", 00:21:05.259 "trsvcid": "4420", 00:21:05.259 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:05.259 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:05.259 "hdgst": false, 00:21:05.259 "ddgst": false 00:21:05.259 }, 00:21:05.259 "method": "bdev_nvme_attach_controller" 00:21:05.259 },{ 00:21:05.259 "params": { 00:21:05.259 "name": "Nvme2", 00:21:05.259 "trtype": "tcp", 00:21:05.259 "traddr": "10.0.0.2", 00:21:05.259 "adrfam": "ipv4", 00:21:05.259 "trsvcid": "4420", 00:21:05.259 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:05.259 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:05.259 "hdgst": false, 00:21:05.259 "ddgst": false 00:21:05.259 }, 00:21:05.259 "method": "bdev_nvme_attach_controller" 00:21:05.259 },{ 00:21:05.259 "params": { 00:21:05.259 "name": "Nvme3", 00:21:05.259 "trtype": "tcp", 00:21:05.259 "traddr": "10.0.0.2", 00:21:05.259 "adrfam": "ipv4", 00:21:05.259 "trsvcid": "4420", 00:21:05.259 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:05.259 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:05.259 "hdgst": false, 00:21:05.259 "ddgst": false 00:21:05.259 }, 00:21:05.259 "method": "bdev_nvme_attach_controller" 00:21:05.259 },{ 00:21:05.259 "params": { 00:21:05.259 "name": "Nvme4", 00:21:05.259 "trtype": "tcp", 00:21:05.259 "traddr": "10.0.0.2", 00:21:05.259 "adrfam": "ipv4", 00:21:05.259 "trsvcid": "4420", 00:21:05.259 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:05.259 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:05.259 "hdgst": false, 00:21:05.259 "ddgst": false 00:21:05.259 }, 00:21:05.259 "method": "bdev_nvme_attach_controller" 00:21:05.259 },{ 00:21:05.259 "params": { 
00:21:05.259 "name": "Nvme5", 00:21:05.259 "trtype": "tcp", 00:21:05.259 "traddr": "10.0.0.2", 00:21:05.259 "adrfam": "ipv4", 00:21:05.259 "trsvcid": "4420", 00:21:05.259 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:05.259 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:05.259 "hdgst": false, 00:21:05.259 "ddgst": false 00:21:05.259 }, 00:21:05.259 "method": "bdev_nvme_attach_controller" 00:21:05.259 },{ 00:21:05.259 "params": { 00:21:05.259 "name": "Nvme6", 00:21:05.259 "trtype": "tcp", 00:21:05.259 "traddr": "10.0.0.2", 00:21:05.259 "adrfam": "ipv4", 00:21:05.259 "trsvcid": "4420", 00:21:05.259 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:05.259 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:05.259 "hdgst": false, 00:21:05.259 "ddgst": false 00:21:05.259 }, 00:21:05.259 "method": "bdev_nvme_attach_controller" 00:21:05.259 },{ 00:21:05.259 "params": { 00:21:05.259 "name": "Nvme7", 00:21:05.259 "trtype": "tcp", 00:21:05.259 "traddr": "10.0.0.2", 00:21:05.259 "adrfam": "ipv4", 00:21:05.259 "trsvcid": "4420", 00:21:05.259 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:05.259 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:05.259 "hdgst": false, 00:21:05.259 "ddgst": false 00:21:05.259 }, 00:21:05.259 "method": "bdev_nvme_attach_controller" 00:21:05.259 },{ 00:21:05.259 "params": { 00:21:05.259 "name": "Nvme8", 00:21:05.259 "trtype": "tcp", 00:21:05.259 "traddr": "10.0.0.2", 00:21:05.259 "adrfam": "ipv4", 00:21:05.259 "trsvcid": "4420", 00:21:05.259 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:05.259 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:21:05.259 "hdgst": false, 00:21:05.259 "ddgst": false 00:21:05.259 }, 00:21:05.259 "method": "bdev_nvme_attach_controller" 00:21:05.259 },{ 00:21:05.259 "params": { 00:21:05.259 "name": "Nvme9", 00:21:05.259 "trtype": "tcp", 00:21:05.259 "traddr": "10.0.0.2", 00:21:05.259 "adrfam": "ipv4", 00:21:05.259 "trsvcid": "4420", 00:21:05.259 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:05.259 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:21:05.259 "hdgst": false, 00:21:05.259 "ddgst": false 00:21:05.259 }, 00:21:05.259 "method": "bdev_nvme_attach_controller" 00:21:05.259 },{ 00:21:05.259 "params": { 00:21:05.259 "name": "Nvme10", 00:21:05.259 "trtype": "tcp", 00:21:05.259 "traddr": "10.0.0.2", 00:21:05.259 "adrfam": "ipv4", 00:21:05.259 "trsvcid": "4420", 00:21:05.259 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:05.259 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:05.259 "hdgst": false, 00:21:05.259 "ddgst": false 00:21:05.259 }, 00:21:05.259 "method": "bdev_nvme_attach_controller" 00:21:05.259 }' 00:21:05.259 [2024-11-20 07:17:09.779645] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:05.533 [2024-11-20 07:17:09.821700] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:07.447 Running I/O for 10 seconds... 00:21:07.447 07:17:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:07.447 07:17:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@866 -- # return 0 00:21:07.447 07:17:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:21:07.447 07:17:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.447 07:17:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:07.447 07:17:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.448 07:17:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:07.448 07:17:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio 
/var/tmp/bdevperf.sock Nvme1n1 00:21:07.448 07:17:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:21:07.448 07:17:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:21:07.448 07:17:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:21:07.448 07:17:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:21:07.448 07:17:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:21:07.448 07:17:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:21:07.448 07:17:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:07.448 07:17:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:21:07.448 07:17:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.448 07:17:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:07.448 07:17:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.448 07:17:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=3 00:21:07.448 07:17:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:21:07.448 07:17:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:21:07.448 07:17:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 
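The `waitforio` loop traced here (shutdown.sh@58-70) polls `bdev_get_iostat` up to 10 times, 0.25 s apart, until `num_read_ops` crosses 100 (3, then 85, then 195 in this run). A runnable sketch with the RPC call mocked by a counter so it works standalone:

```shell
# read_io_count stands in for: rpc_cmd bdev_get_iostat | jq '.bdevs[0].num_read_ops'
mock_ops=0
read_io_count() {
    mock_ops=$((mock_ops + 85))
}

waitforio_sketch() {
    local ret=1 i
    for ((i = 10; i != 0; i--)); do      # same countdown as shutdown.sh@60
        read_io_count
        if [ "$mock_ops" -ge 100 ]; then # enough reads observed: bdev is live
            ret=0
            break
        fi
        sleep 0.25
    done
    return $ret
}

waitforio_sketch && echo "I/O observed"
```

With the mocked counter the threshold is crossed on the second poll, mirroring the 85-then-195 progression in the log.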
00:21:07.448 07:17:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:21:07.448 07:17:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:07.448 07:17:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:21:07.448 07:17:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.448 07:17:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:07.707 07:17:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.707 07:17:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=85 00:21:07.707 07:17:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 85 -ge 100 ']' 00:21:07.707 07:17:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:21:07.982 07:17:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:21:07.982 07:17:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:21:07.982 07:17:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:07.982 07:17:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.982 07:17:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:21:07.982 07:17:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
common/autotest_common.sh@10 -- # set +x 00:21:07.982 07:17:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.982 07:17:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=195 00:21:07.982 07:17:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 195 -ge 100 ']' 00:21:07.982 07:17:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:21:07.982 07:17:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:21:07.982 07:17:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:21:07.983 07:17:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 1250328 00:21:07.983 07:17:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # '[' -z 1250328 ']' 00:21:07.983 07:17:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # kill -0 1250328 00:21:07.983 07:17:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@957 -- # uname 00:21:07.983 07:17:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:07.983 07:17:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1250328 00:21:07.983 07:17:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:21:07.983 07:17:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:21:07.983 07:17:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
common/autotest_common.sh@970 -- # echo 'killing process with pid 1250328'
00:21:07.983 killing process with pid 1250328
00:21:07.983 07:17:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@971 -- # kill 1250328
00:21:07.983 07:17:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@976 -- # wait 1250328
00:21:07.983 [2024-11-20 07:17:12.381316] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f6050 is same with the state(6) to be set
00:21:07.983 [... same message for tqpair=0x11f6050 repeated from 07:17:12.381392 through 07:17:12.381777 ...]
00:21:07.984 [2024-11-20 07:17:12.383698] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:21:07.984 [2024-11-20 07:17:12.383729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:07.984 [2024-11-20 07:17:12.383739] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:21:07.984 [2024-11-20 07:17:12.383746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:07.984 [2024-11-20 07:17:12.383753] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:21:07.984 [2024-11-20 07:17:12.383760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:07.984 [2024-11-20 07:17:12.383768] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:21:07.984 [2024-11-20 07:17:12.383775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:07.984 [2024-11-20 07:17:12.383782] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfec1b0 is same with the state(6) to be set
00:21:07.984 [2024-11-20 07:17:12.384001] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13688e0 is same with the state(6) to be set
00:21:07.984 [... same message for tqpair=0x13688e0 repeated from 07:17:12.384039 through 07:17:12.384427 ...]
00:21:07.985 [2024-11-20 07:17:12.385798] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f6520 is same with the state(6) to be set
00:21:07.985 [... same message for tqpair=0x11f6520 repeated from 07:17:12.385819 through 07:17:12.386224 ...]
00:21:07.986 [2024-11-20 07:17:12.387121] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:21:07.986 [2024-11-20 07:17:12.387659] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:21:07.986 [2024-11-20 07:17:12.388999] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:21:07.986 [2024-11-20 07:17:12.390312] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f69f0 is same with the state(6) to be set
00:21:07.986 [... same message for tqpair=0x11f69f0 repeated from 07:17:12.390339 through 07:17:12.390565 ...]
00:21:07.986 [2024-11-20 07:17:12.390571]
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f69f0 is same with the state(6) to be set 00:21:07.986 [2024-11-20 07:17:12.390577] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f69f0 is same with the state(6) to be set 00:21:07.986 [2024-11-20 07:17:12.390583] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f69f0 is same with the state(6) to be set 00:21:07.986 [2024-11-20 07:17:12.390589] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f69f0 is same with the state(6) to be set 00:21:07.986 [2024-11-20 07:17:12.390595] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f69f0 is same with the state(6) to be set 00:21:07.986 [2024-11-20 07:17:12.390601] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f69f0 is same with the state(6) to be set 00:21:07.986 [2024-11-20 07:17:12.390607] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f69f0 is same with the state(6) to be set 00:21:07.986 [2024-11-20 07:17:12.390615] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f69f0 is same with the state(6) to be set 00:21:07.986 [2024-11-20 07:17:12.390621] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f69f0 is same with the state(6) to be set 00:21:07.986 [2024-11-20 07:17:12.390626] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f69f0 is same with the state(6) to be set 00:21:07.986 [2024-11-20 07:17:12.390632] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f69f0 is same with the state(6) to be set 00:21:07.986 [2024-11-20 07:17:12.390638] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f69f0 is same with the state(6) to be set 00:21:07.986 [2024-11-20 07:17:12.390644] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f69f0 is same with the state(6) to be set 00:21:07.986 [2024-11-20 07:17:12.390650] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f69f0 is same with the state(6) to be set 00:21:07.986 [2024-11-20 07:17:12.390656] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f69f0 is same with the state(6) to be set 00:21:07.986 [2024-11-20 07:17:12.390663] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f69f0 is same with the state(6) to be set 00:21:07.986 [2024-11-20 07:17:12.390669] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f69f0 is same with the state(6) to be set 00:21:07.986 [2024-11-20 07:17:12.390675] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f69f0 is same with the state(6) to be set 00:21:07.986 [2024-11-20 07:17:12.390681] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f69f0 is same with the state(6) to be set 00:21:07.986 [2024-11-20 07:17:12.390687] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f69f0 is same with the state(6) to be set 00:21:07.986 [2024-11-20 07:17:12.390692] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f69f0 is same with the state(6) to be set 00:21:07.986 [2024-11-20 07:17:12.390698] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f69f0 is same with the state(6) to be set 00:21:07.986 [2024-11-20 07:17:12.390704] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f69f0 is same with the state(6) to be set 00:21:07.986 [2024-11-20 07:17:12.390711] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f69f0 is same with the state(6) to be set 00:21:07.986 [2024-11-20 07:17:12.390716] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f69f0 is same with the state(6) to be set 00:21:07.986 [2024-11-20 07:17:12.393745] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f6ee0 is same with the state(6) to be set 00:21:07.987 [2024-11-20 07:17:12.393778] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f6ee0 is same with the state(6) to be set 00:21:07.987 [2024-11-20 07:17:12.393786] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f6ee0 is same with the state(6) to be set 00:21:07.987 [2024-11-20 07:17:12.393792] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f6ee0 is same with the state(6) to be set 00:21:07.987 [2024-11-20 07:17:12.393799] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f6ee0 is same with the state(6) to be set 00:21:07.987 [2024-11-20 07:17:12.393805] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f6ee0 is same with the state(6) to be set 00:21:07.987 [2024-11-20 07:17:12.393811] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f6ee0 is same with the state(6) to be set 00:21:07.987 [2024-11-20 07:17:12.393817] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f6ee0 is same with the state(6) to be set 00:21:07.987 [2024-11-20 07:17:12.393823] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f6ee0 is same with the state(6) to be set 00:21:07.987 [2024-11-20 07:17:12.393834] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f6ee0 is same with the state(6) to be set 00:21:07.987 [2024-11-20 07:17:12.393840] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f6ee0 is same with the state(6) to be set 00:21:07.987 [2024-11-20 07:17:12.393847] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f6ee0 is same with the state(6) to be set 00:21:07.987 [2024-11-20 07:17:12.393853] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f6ee0 is same with the state(6) to be set 00:21:07.987 [2024-11-20 07:17:12.393859] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f6ee0 is same with the state(6) to be set 00:21:07.987 [2024-11-20 07:17:12.393865] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f6ee0 is same with the state(6) to be set 00:21:07.987 [2024-11-20 07:17:12.393871] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f6ee0 is same with the state(6) to be set 00:21:07.987 [2024-11-20 07:17:12.393877] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f6ee0 is same with the state(6) to be set 00:21:07.987 [2024-11-20 07:17:12.393883] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f6ee0 is same with the state(6) to be set 00:21:07.987 [2024-11-20 07:17:12.393890] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f6ee0 is same with the state(6) to be set 00:21:07.987 [2024-11-20 07:17:12.393896] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f6ee0 is same with the state(6) to be set 00:21:07.987 [2024-11-20 07:17:12.393903] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f6ee0 is same with the state(6) to be set 00:21:07.987 [2024-11-20 07:17:12.393909] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f6ee0 is same with the state(6) to be set 00:21:07.987 [2024-11-20 07:17:12.393915] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f6ee0 is same with the state(6) to be set 00:21:07.987 [2024-11-20 07:17:12.393922] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f6ee0 is same with the state(6) to be set 00:21:07.987 [2024-11-20 07:17:12.393928] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f6ee0 is same with the state(6) to be set 00:21:07.987 [2024-11-20 07:17:12.393934] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f6ee0 is same with the state(6) to be set 00:21:07.987 [2024-11-20 07:17:12.393940] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f6ee0 is same with the state(6) to be set 00:21:07.987 [2024-11-20 07:17:12.393950] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f6ee0 is same with the state(6) to be set 00:21:07.987 [2024-11-20 07:17:12.393956] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f6ee0 is same with the state(6) to be set 00:21:07.987 [2024-11-20 07:17:12.393962] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f6ee0 is same with the state(6) to be set 00:21:07.987 [2024-11-20 07:17:12.393968] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f6ee0 is same with the state(6) to be set 00:21:07.987 [2024-11-20 07:17:12.393974] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f6ee0 is same with the state(6) to be set 00:21:07.987 [2024-11-20 07:17:12.393980] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f6ee0 is same with the state(6) to be set 00:21:07.987 [2024-11-20 07:17:12.393987] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f6ee0 is same with the state(6) to be set 00:21:07.987 [2024-11-20 07:17:12.393993] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f6ee0 is same with the state(6) to be set 00:21:07.987 [2024-11-20 07:17:12.393999] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f6ee0 is same with the state(6) to be set 00:21:07.987 [2024-11-20 07:17:12.394005] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f6ee0 is same with the state(6) to be set 00:21:07.987 [2024-11-20 07:17:12.394013] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f6ee0 is same with the state(6) to be set 00:21:07.987 [2024-11-20 07:17:12.394019] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f6ee0 is same with the state(6) to be set 00:21:07.987 [2024-11-20 07:17:12.394025] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f6ee0 is same with the state(6) to be set 00:21:07.987 [2024-11-20 07:17:12.394032] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f6ee0 is same with the state(6) to be set 00:21:07.987 [2024-11-20 07:17:12.394038] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f6ee0 is same with the state(6) to be set 00:21:07.987 [2024-11-20 07:17:12.394045] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f6ee0 is same with the state(6) to be set 00:21:07.987 [2024-11-20 07:17:12.394051] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f6ee0 is same with the state(6) to be set 00:21:07.987 [2024-11-20 07:17:12.394057] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f6ee0 is same with the state(6) to be set 00:21:07.987 [2024-11-20 07:17:12.394063] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f6ee0 is same with the state(6) to be set 00:21:07.987 [2024-11-20 07:17:12.394070] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f6ee0 is same with the state(6) to be set 00:21:07.987 [2024-11-20 07:17:12.394076] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f6ee0 is same with the state(6) to be set 00:21:07.987 [2024-11-20 07:17:12.394082] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f6ee0 is same with the state(6) to be set 00:21:07.987 [2024-11-20 07:17:12.394087] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f6ee0 is same with the state(6) to be set 00:21:07.987 [2024-11-20 07:17:12.394093] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f6ee0 is same with the state(6) to be set 00:21:07.987 [2024-11-20 07:17:12.394100] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f6ee0 is same with the state(6) to be set 00:21:07.987 [2024-11-20 07:17:12.394105] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f6ee0 is same with the state(6) to be set 00:21:07.987 [2024-11-20 07:17:12.394111] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f6ee0 is same with the state(6) to be set 00:21:07.987 [2024-11-20 07:17:12.394117] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f6ee0 is same with the state(6) to be set 00:21:07.987 [2024-11-20 07:17:12.394123] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f6ee0 is same with the state(6) to be set 00:21:07.987 [2024-11-20 07:17:12.394129] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f6ee0 is same with the state(6) to be set 00:21:07.987 [2024-11-20 07:17:12.394135] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f6ee0 is same with the state(6) to be set 00:21:07.987 [2024-11-20 07:17:12.394141] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f6ee0 is same with the state(6) to be set 00:21:07.987 [2024-11-20 07:17:12.394147] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f6ee0 is same with the state(6) to be set 00:21:07.987 [2024-11-20 07:17:12.394153] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f6ee0 is same with the state(6) to be set 00:21:07.987 [2024-11-20 07:17:12.394160] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f6ee0 is same with the state(6) to be set 00:21:07.987 [2024-11-20 07:17:12.394166] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f6ee0 is same with the state(6) to be set 00:21:07.987 [2024-11-20 07:17:12.394796] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f73b0 is same with the state(6) to be set 00:21:07.987 [2024-11-20 07:17:12.394825] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f73b0 is same with the state(6) to be set 00:21:07.987 [2024-11-20 07:17:12.394833] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f73b0 is same with the state(6) to be set 00:21:07.987 [2024-11-20 07:17:12.394839] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f73b0 is same with the state(6) to be set 00:21:07.987 [2024-11-20 07:17:12.395450] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f7730 is same with the state(6) to be set 00:21:07.988 [2024-11-20 07:17:12.395469] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f7730 is same with the state(6) to be set 00:21:07.988 [2024-11-20 07:17:12.395477] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f7730 is same with the state(6) to be set 00:21:07.988 [2024-11-20 07:17:12.395483] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f7730 is same with the state(6) to be set 00:21:07.988 [2024-11-20 07:17:12.395489] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f7730 is same with the state(6) to be set 00:21:07.988 [2024-11-20 07:17:12.395496] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f7730 is same with the state(6) to be set 00:21:07.988 [2024-11-20 07:17:12.395502] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f7730 is same with the state(6) to be set 00:21:07.988 [2024-11-20 07:17:12.395508] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f7730 is same with the state(6) to be set 00:21:07.988 [2024-11-20 07:17:12.395515] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f7730 is same with the state(6) to be set 00:21:07.988 [2024-11-20 07:17:12.395521] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f7730 is same with the state(6) to be set 00:21:07.988 [2024-11-20 07:17:12.395527] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f7730 is same with the state(6) to be set 00:21:07.988 [2024-11-20 07:17:12.395534] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f7730 is same with the state(6) to be set 00:21:07.988 [2024-11-20 07:17:12.395540] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f7730 is same with the state(6) to be set 00:21:07.988 [2024-11-20 07:17:12.395546] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f7730 is same with the state(6) to be set 00:21:07.988 [2024-11-20 07:17:12.395552] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f7730 is same with the state(6) to be set 00:21:07.988 [2024-11-20 07:17:12.395559] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f7730 is same with the state(6) to be set 00:21:07.988 [2024-11-20 07:17:12.395565] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f7730 is same with the state(6) to be set 00:21:07.988 [2024-11-20 07:17:12.395572] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f7730 is same with the state(6) to be set 00:21:07.988 [2024-11-20 07:17:12.395578] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f7730 is same with the state(6) to be set 00:21:07.988 [2024-11-20 07:17:12.395584] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f7730 is same with the state(6) to be set 00:21:07.988 [2024-11-20 07:17:12.395590] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f7730 is same with the state(6) to be set 00:21:07.988 [2024-11-20 07:17:12.395596] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f7730 is same with the state(6) to be set 00:21:07.988 [2024-11-20 07:17:12.395603] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f7730 is same with the state(6) to be set 00:21:07.988 [2024-11-20 07:17:12.395609] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f7730 is same with the state(6) to be set 00:21:07.988 [2024-11-20 07:17:12.395618] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f7730 is same with the state(6) to be set 00:21:07.988 [2024-11-20 07:17:12.395625] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f7730 is same with the state(6) to be set 00:21:07.988 [2024-11-20 07:17:12.395631] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f7730 is same with the state(6) to be set 00:21:07.988 [2024-11-20 07:17:12.395638] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f7730 is same with the state(6) to be set 00:21:07.988 [2024-11-20 07:17:12.395644] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f7730 is same with the state(6) to be set 00:21:07.988 [2024-11-20 07:17:12.395650] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f7730 is same with the state(6) to be set 00:21:07.988 [2024-11-20 07:17:12.395656] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f7730 is same with the state(6) to be set 00:21:07.988 [2024-11-20 07:17:12.395663] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f7730 is same with the state(6) to be set 00:21:07.988 [2024-11-20 07:17:12.395669] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f7730 is same with the state(6) to be set 00:21:07.988 [2024-11-20 07:17:12.395675] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f7730 is same with the state(6) to be set 00:21:07.988 [2024-11-20 07:17:12.395682] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f7730 is same with the state(6) to be set 00:21:07.988 [2024-11-20 07:17:12.395688] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f7730 is same with the state(6) to be set 00:21:07.988 [2024-11-20 07:17:12.395694] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f7730 is same with the state(6) to be set 00:21:07.988 [2024-11-20 07:17:12.395701] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f7730 is same with the state(6) to be set 00:21:07.988 [2024-11-20 07:17:12.395706] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f7730 is same with the state(6) to be set 00:21:07.988 [2024-11-20 07:17:12.395712] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f7730 is same with the state(6) to be set 00:21:07.988 [2024-11-20 07:17:12.395719] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f7730 is same with the state(6) to be set 00:21:07.988 [2024-11-20 07:17:12.395725] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f7730 is same with the state(6) to be set 00:21:07.988 [2024-11-20 07:17:12.395731] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f7730 is same with the state(6) to be set 00:21:07.988 [2024-11-20 07:17:12.395737] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f7730 is same with the state(6) to be set 00:21:07.988 [2024-11-20 07:17:12.395743] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f7730 is same with the state(6) to be set 00:21:07.988 [2024-11-20 07:17:12.395749] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f7730 is same with the state(6) to be set 00:21:07.988 [2024-11-20 07:17:12.395756] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f7730 is same with the state(6) to be set 00:21:07.988 [2024-11-20 07:17:12.395762] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f7730 is same with the state(6) to be set 00:21:07.988 [2024-11-20 07:17:12.395768] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f7730 is same with the state(6) to be set 00:21:07.988 [2024-11-20 07:17:12.395774] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f7730 is same with the state(6) to be set 00:21:07.988 [2024-11-20 07:17:12.395780] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f7730 is same with the state(6) to be set 00:21:07.988 [2024-11-20 07:17:12.395788] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f7730 is same with the state(6) to be set 00:21:07.988 [2024-11-20 07:17:12.395794] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f7730 is same with the state(6) to be set 00:21:07.988 [2024-11-20 07:17:12.395800] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f7730 is same with the state(6) to be set 00:21:07.989 [2024-11-20 07:17:12.395807] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f7730 is same with the state(6) to be set 00:21:07.989 [2024-11-20 07:17:12.395813] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f7730 is same with the state(6) to be set 00:21:07.989 [2024-11-20 07:17:12.395819] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f7730 is same with the state(6) to be set 00:21:07.989 [2024-11-20 07:17:12.395825] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f7730 is same with the state(6) to be set 00:21:07.989 [2024-11-20 07:17:12.395831] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f7730 is same with the state(6) to be set 00:21:07.989 [2024-11-20 07:17:12.395837] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f7730 is same with the state(6) to be set 00:21:07.989 [2024-11-20 07:17:12.395842] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f7730 is same with the state(6) to be set 00:21:07.989 [2024-11-20 07:17:12.395848] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f7730 is same with the state(6) to be set 00:21:07.989 [2024-11-20 07:17:12.395854] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f7730 is same with the state(6) to be set 00:21:07.989 [2024-11-20 07:17:12.396852] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f7c00 is same with the state(6) to be set 00:21:07.989 [2024-11-20 07:17:12.396867] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f7c00 is same with the state(6) to be set
00:21:07.990 (last message repeated for tqpair=0x11f7c00 through [2024-11-20 07:17:12.397247])
00:21:07.990 [2024-11-20 07:17:12.398052] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f80f0 is same with the state(6) to be set
00:21:07.990 (last message repeated for tqpair=0x11f80f0 through [2024-11-20 07:17:12.398430])
00:21:07.990 [2024-11-20 07:17:12.407917] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:21:07.990 [2024-11-20 07:17:12.407943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:07.991 [2024-11-20 07:17:12.407959] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:21:07.991 [2024-11-20 07:17:12.407967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:07.991 [2024-11-20 07:17:12.407974] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:21:07.991 [2024-11-20 07:17:12.407981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:07.991 [2024-11-20 07:17:12.407989] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:21:07.991 [2024-11-20 07:17:12.407996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.991 [2024-11-20 07:17:12.408003] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1446110 is same with the state(6) to be set 00:21:07.991 [2024-11-20 07:17:12.408037] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:07.991 [2024-11-20 07:17:12.408046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.991 [2024-11-20 07:17:12.408054] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:07.991 [2024-11-20 07:17:12.408060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.991 [2024-11-20 07:17:12.408068] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:07.991 [2024-11-20 07:17:12.408075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.991 [2024-11-20 07:17:12.408082] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:07.991 [2024-11-20 07:17:12.408089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.991 [2024-11-20 07:17:12.408096] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143d2f0 is same with the state(6) to be set 00:21:07.991 [2024-11-20 07:17:12.408122] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: 
ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:07.991 [2024-11-20 07:17:12.408131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.991 [2024-11-20 07:17:12.408138] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:07.991 [2024-11-20 07:17:12.408145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.991 [2024-11-20 07:17:12.408152] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:07.991 [2024-11-20 07:17:12.408163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.991 [2024-11-20 07:17:12.408170] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:07.991 [2024-11-20 07:17:12.408177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.991 [2024-11-20 07:17:12.408183] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1417930 is same with the state(6) to be set 00:21:07.991 [2024-11-20 07:17:12.408205] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:07.991 [2024-11-20 07:17:12.408213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.991 [2024-11-20 07:17:12.408220] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:07.991 
[2024-11-20 07:17:12.408227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.991 [2024-11-20 07:17:12.408234] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:07.991 [2024-11-20 07:17:12.408241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.991 [2024-11-20 07:17:12.408248] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:07.991 [2024-11-20 07:17:12.408254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.991 [2024-11-20 07:17:12.408261] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfebd30 is same with the state(6) to be set 00:21:07.991 [2024-11-20 07:17:12.408284] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:07.991 [2024-11-20 07:17:12.408293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.991 [2024-11-20 07:17:12.408300] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:07.991 [2024-11-20 07:17:12.408307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.991 [2024-11-20 07:17:12.408314] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:07.991 [2024-11-20 07:17:12.408321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.991 [2024-11-20 07:17:12.408328] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:07.991 [2024-11-20 07:17:12.408335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.991 [2024-11-20 07:17:12.408341] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfe8ce0 is same with the state(6) to be set 00:21:07.991 [2024-11-20 07:17:12.408364] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:07.991 [2024-11-20 07:17:12.408372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.991 [2024-11-20 07:17:12.408380] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:07.991 [2024-11-20 07:17:12.408387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.991 [2024-11-20 07:17:12.408397] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:07.991 [2024-11-20 07:17:12.408404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.991 [2024-11-20 07:17:12.408411] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:07.991 [2024-11-20 07:17:12.408418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.991 [2024-11-20 
07:17:12.408425] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144a750 is same with the state(6) to be set 00:21:07.991 [2024-11-20 07:17:12.408450] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:07.991 [2024-11-20 07:17:12.408459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.991 [2024-11-20 07:17:12.408467] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:07.991 [2024-11-20 07:17:12.408474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.991 [2024-11-20 07:17:12.408481] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:07.991 [2024-11-20 07:17:12.408488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.991 [2024-11-20 07:17:12.408495] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:07.991 [2024-11-20 07:17:12.408502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.991 [2024-11-20 07:17:12.408509] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140ce30 is same with the state(6) to be set 00:21:07.991 [2024-11-20 07:17:12.408528] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfec1b0 (9): Bad file descriptor 00:21:07.991 [2024-11-20 07:17:12.408554] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 
nsid:0 cdw10:00000000 cdw11:00000000 00:21:07.991 [2024-11-20 07:17:12.408562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.991 [2024-11-20 07:17:12.408570] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:07.991 [2024-11-20 07:17:12.408577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.992 [2024-11-20 07:17:12.408584] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:07.992 [2024-11-20 07:17:12.408591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.992 [2024-11-20 07:17:12.408598] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:07.992 [2024-11-20 07:17:12.408605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.992 [2024-11-20 07:17:12.408611] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfe01d0 is same with the state(6) to be set 00:21:07.992 [2024-11-20 07:17:12.408634] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:07.992 [2024-11-20 07:17:12.408648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.992 [2024-11-20 07:17:12.408656] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:07.992 [2024-11-20 07:17:12.408663] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.992 [2024-11-20 07:17:12.408675] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:07.992 [2024-11-20 07:17:12.408682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.992 [2024-11-20 07:17:12.408689] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:07.992 [2024-11-20 07:17:12.408696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.992 [2024-11-20 07:17:12.408703] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140d010 is same with the state(6) to be set 00:21:07.992 [2024-11-20 07:17:12.409127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.992 [2024-11-20 07:17:12.409148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.992 [2024-11-20 07:17:12.409163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.992 [2024-11-20 07:17:12.409170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.992 [2024-11-20 07:17:12.409179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.992 [2024-11-20 07:17:12.409187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.992 [2024-11-20 07:17:12.409195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.992 [2024-11-20 07:17:12.409202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.992 [2024-11-20 07:17:12.409210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.992 [2024-11-20 07:17:12.409217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.992 [2024-11-20 07:17:12.409225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.992 [2024-11-20 07:17:12.409232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.992 [2024-11-20 07:17:12.409240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.992 [2024-11-20 07:17:12.409247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.992 [2024-11-20 07:17:12.409255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.992 [2024-11-20 07:17:12.409262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.992 [2024-11-20 07:17:12.409270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:07.992 [2024-11-20 07:17:12.409276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.992 [2024-11-20 07:17:12.409288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.992 [2024-11-20 07:17:12.409295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.992 [2024-11-20 07:17:12.409303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.992 [2024-11-20 07:17:12.409310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.992 [2024-11-20 07:17:12.409318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.992 [2024-11-20 07:17:12.409324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.992 [2024-11-20 07:17:12.409333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.992 [2024-11-20 07:17:12.409340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.992 [2024-11-20 07:17:12.409348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.992 [2024-11-20 07:17:12.409356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.992 [2024-11-20 07:17:12.409364] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.992 [2024-11-20 07:17:12.409371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.992 [2024-11-20 07:17:12.409379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.992 [2024-11-20 07:17:12.409386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.992 [2024-11-20 07:17:12.409394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.992 [2024-11-20 07:17:12.409401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.992 [2024-11-20 07:17:12.409409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.992 [2024-11-20 07:17:12.409416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.992 [2024-11-20 07:17:12.409424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.992 [2024-11-20 07:17:12.409431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.992 [2024-11-20 07:17:12.409439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.992 [2024-11-20 07:17:12.409446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.992 [2024-11-20 07:17:12.409454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.992 [2024-11-20 07:17:12.409461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.992 [2024-11-20 07:17:12.409469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.992 [2024-11-20 07:17:12.409477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.992 [2024-11-20 07:17:12.409485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.992 [2024-11-20 07:17:12.409492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.992 [2024-11-20 07:17:12.409500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.992 [2024-11-20 07:17:12.409507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.992 [2024-11-20 07:17:12.409515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.992 [2024-11-20 07:17:12.409521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.993 [2024-11-20 07:17:12.409530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.993 [2024-11-20 07:17:12.409536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.993 [2024-11-20 07:17:12.409544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.993 [2024-11-20 07:17:12.409551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.993 [2024-11-20 07:17:12.409559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.993 [2024-11-20 07:17:12.409566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.993 [2024-11-20 07:17:12.409574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.993 [2024-11-20 07:17:12.409580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.993 [2024-11-20 07:17:12.409590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.993 [2024-11-20 07:17:12.409597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.993 [2024-11-20 07:17:12.409605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.993 [2024-11-20 07:17:12.409612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.993 [2024-11-20 
07:17:12.409620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.993 [2024-11-20 07:17:12.409627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.993 [2024-11-20 07:17:12.409635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.993 [2024-11-20 07:17:12.409642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.993 [2024-11-20 07:17:12.409650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.993 [2024-11-20 07:17:12.409657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.993 [2024-11-20 07:17:12.409667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.993 [2024-11-20 07:17:12.409674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.993 [2024-11-20 07:17:12.409682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.993 [2024-11-20 07:17:12.409689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.993 [2024-11-20 07:17:12.409697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.993 [2024-11-20 07:17:12.409704] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.993 [2024-11-20 07:17:12.409712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.993 [2024-11-20 07:17:12.409719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.993 [2024-11-20 07:17:12.409727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.993 [2024-11-20 07:17:12.409734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.993 [2024-11-20 07:17:12.409742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.993 [2024-11-20 07:17:12.409748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.993 [2024-11-20 07:17:12.409756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.993 [2024-11-20 07:17:12.409763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.993 [2024-11-20 07:17:12.409771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.993 [2024-11-20 07:17:12.409779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.993 [2024-11-20 07:17:12.409787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 
nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.993 [2024-11-20 07:17:12.409794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.993 [2024-11-20 07:17:12.409802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.993 [2024-11-20 07:17:12.409809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.993 [2024-11-20 07:17:12.409817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.993 [2024-11-20 07:17:12.409823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.993 [2024-11-20 07:17:12.409831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.993 [2024-11-20 07:17:12.409838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.993 [2024-11-20 07:17:12.409846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.993 [2024-11-20 07:17:12.409855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.993 [2024-11-20 07:17:12.409863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.993 [2024-11-20 07:17:12.409870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:07.993 [2024-11-20 07:17:12.409878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.993 [2024-11-20 07:17:12.409884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.993 [2024-11-20 07:17:12.409892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.993 [2024-11-20 07:17:12.409899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.993 [2024-11-20 07:17:12.409907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.993 [2024-11-20 07:17:12.409914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.993 [2024-11-20 07:17:12.409922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.993 [2024-11-20 07:17:12.409928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.993 [2024-11-20 07:17:12.409936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.994 [2024-11-20 07:17:12.409943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.994 [2024-11-20 07:17:12.409957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.994 [2024-11-20 07:17:12.409964] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.994 [2024-11-20 07:17:12.409972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.994 [2024-11-20 07:17:12.409979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.994 [2024-11-20 07:17:12.409987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.994 [2024-11-20 07:17:12.409994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.994 [2024-11-20 07:17:12.410002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.994 [2024-11-20 07:17:12.410008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.994 [2024-11-20 07:17:12.410017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.994 [2024-11-20 07:17:12.410023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.994 [2024-11-20 07:17:12.410032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.994 [2024-11-20 07:17:12.410039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.994 [2024-11-20 07:17:12.410049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.994 [2024-11-20 07:17:12.410055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.994 [2024-11-20 07:17:12.410064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.994 [2024-11-20 07:17:12.410070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.994 [2024-11-20 07:17:12.410078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.994 [2024-11-20 07:17:12.410085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.994 [2024-11-20 07:17:12.410093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.994 [2024-11-20 07:17:12.410100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.994 [2024-11-20 07:17:12.410108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.994 [2024-11-20 07:17:12.410115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.994 [2024-11-20 07:17:12.410140] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:21:07.994 [2024-11-20 07:17:12.410390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.994 [2024-11-20 07:17:12.410406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.994 [2024-11-20 07:17:12.410418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.994 [2024-11-20 07:17:12.410425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.994 [2024-11-20 07:17:12.410434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.994 [2024-11-20 07:17:12.410441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.994 [2024-11-20 07:17:12.410450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.994 [2024-11-20 07:17:12.410457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.994 [2024-11-20 07:17:12.410465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.994 [2024-11-20 07:17:12.410472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.994 [2024-11-20 07:17:12.410480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.994 [2024-11-20 07:17:12.410487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.994 
[2024-11-20 07:17:12.410496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.994 [2024-11-20 07:17:12.410503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.994 [2024-11-20 07:17:12.410514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.994 [2024-11-20 07:17:12.410521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.994 [2024-11-20 07:17:12.410531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.994 [2024-11-20 07:17:12.410539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.994 [2024-11-20 07:17:12.410547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.994 [2024-11-20 07:17:12.410554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.994 [2024-11-20 07:17:12.410563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.994 [2024-11-20 07:17:12.410570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.994 [2024-11-20 07:17:12.410578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.994 [2024-11-20 07:17:12.410585] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.994 [2024-11-20 07:17:12.410593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.994 [2024-11-20 07:17:12.410601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.994 [2024-11-20 07:17:12.410609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.994 [2024-11-20 07:17:12.410616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.994 [2024-11-20 07:17:12.410624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.994 [2024-11-20 07:17:12.410637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.994 [2024-11-20 07:17:12.410645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.994 [2024-11-20 07:17:12.410652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.994 [2024-11-20 07:17:12.410660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.994 [2024-11-20 07:17:12.410667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.994 [2024-11-20 07:17:12.410675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.994 [2024-11-20 07:17:12.410682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.994 [2024-11-20 07:17:12.410691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.994 [2024-11-20 07:17:12.410698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.994 [2024-11-20 07:17:12.410706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.994 [2024-11-20 07:17:12.410714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.994 [2024-11-20 07:17:12.410723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.995 [2024-11-20 07:17:12.410730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.995 [2024-11-20 07:17:12.410738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.995 [2024-11-20 07:17:12.410745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.995 [2024-11-20 07:17:12.410753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.995 [2024-11-20 07:17:12.410760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:21:07.995 [2024-11-20 07:17:12.410768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.995 [2024-11-20 07:17:12.410775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.995 [2024-11-20 07:17:12.410783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.995 [2024-11-20 07:17:12.410790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.995 [2024-11-20 07:17:12.410798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.995 [2024-11-20 07:17:12.410805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.995 [2024-11-20 07:17:12.410814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.995 [2024-11-20 07:17:12.410821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.995 [2024-11-20 07:17:12.410829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.995 [2024-11-20 07:17:12.410836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.995 [2024-11-20 07:17:12.410845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.995 [2024-11-20 
07:17:12.410852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.995 [2024-11-20 07:17:12.410860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.995 [2024-11-20 07:17:12.410867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.995 [2024-11-20 07:17:12.410875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.995 [2024-11-20 07:17:12.410882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.995 [2024-11-20 07:17:12.410891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.995 [2024-11-20 07:17:12.410897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.995 [2024-11-20 07:17:12.410907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.995 [2024-11-20 07:17:12.410914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.995 [2024-11-20 07:17:12.410922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.995 [2024-11-20 07:17:12.410929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.995 [2024-11-20 07:17:12.410937] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.995 [2024-11-20 07:17:12.410944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.995 [2024-11-20 07:17:12.410959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.995 [2024-11-20 07:17:12.410966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.995 [2024-11-20 07:17:12.410974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.995 [2024-11-20 07:17:12.410980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.995 [2024-11-20 07:17:12.410989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.995 [2024-11-20 07:17:12.410996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.995 [2024-11-20 07:17:12.411004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.995 [2024-11-20 07:17:12.411011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.995 [2024-11-20 07:17:12.411019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.995 [2024-11-20 07:17:12.411026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.995 [2024-11-20 07:17:12.411034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.995 [2024-11-20 07:17:12.411041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.995 [2024-11-20 07:17:12.411049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.995 [2024-11-20 07:17:12.411056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.995 [2024-11-20 07:17:12.411064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.995 [2024-11-20 07:17:12.411071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.995 [2024-11-20 07:17:12.411080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.995 [2024-11-20 07:17:12.411086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.995 [2024-11-20 07:17:12.411095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.995 [2024-11-20 07:17:12.411104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.995 [2024-11-20 07:17:12.411112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.995 [2024-11-20 07:17:12.411119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.995 [2024-11-20 07:17:12.411127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.995 [2024-11-20 07:17:12.411134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.995 [2024-11-20 07:17:12.411142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.995 [2024-11-20 07:17:12.411149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.995 [2024-11-20 07:17:12.411158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.995 [2024-11-20 07:17:12.411165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.995 [2024-11-20 07:17:12.411173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.995 [2024-11-20 07:17:12.411180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.995 [2024-11-20 07:17:12.411189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.995 [2024-11-20 07:17:12.411196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.995 
[2024-11-20 07:17:12.411204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.995 [2024-11-20 07:17:12.411211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.995 [2024-11-20 07:17:12.411220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.996 [2024-11-20 07:17:12.411227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.996 [2024-11-20 07:17:12.411235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.996 [2024-11-20 07:17:12.411242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.996 [2024-11-20 07:17:12.411250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.996 [2024-11-20 07:17:12.411257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.996 [2024-11-20 07:17:12.411265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.996 [2024-11-20 07:17:12.411272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.996 [2024-11-20 07:17:12.411280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.996 [2024-11-20 07:17:12.411288] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.996 [2024-11-20 07:17:12.411297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.996 [2024-11-20 07:17:12.411303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.996 [2024-11-20 07:17:12.411312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.996 [2024-11-20 07:17:12.411318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.996 [2024-11-20 07:17:12.411327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.996 [2024-11-20 07:17:12.411333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.996 [2024-11-20 07:17:12.411341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.996 [2024-11-20 07:17:12.411348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.996 [2024-11-20 07:17:12.411356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.996 [2024-11-20 07:17:12.411363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.996 [2024-11-20 07:17:12.411371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:07.996 [2024-11-20 07:17:12.411378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:07.996 [2024-11-20 07:17:12.411386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:07.996 [2024-11-20 07:17:12.411392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:07.996 [2024-11-20 07:17:12.411400] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x122e700 is same with the state(6) to be set
00:21:07.996 [2024-11-20 07:17:12.413507] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller
00:21:07.996 [2024-11-20 07:17:12.413541] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1417930 (9): Bad file descriptor
00:21:07.996 [2024-11-20 07:17:12.413822] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller
00:21:07.996 [2024-11-20 07:17:12.413842] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1446110 (9): Bad file descriptor
00:21:07.996 [2024-11-20 07:17:12.414870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:07.996 [2024-11-20 07:17:12.414893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1417930 with addr=10.0.0.2, port=4420
00:21:07.996 [2024-11-20 07:17:12.414902] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1417930 is same with the state(6) to be set
00:21:07.996 [2024-11-20 07:17:12.414967] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:21:07.996 [2024-11-20 07:17:12.415012] 
nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:07.996 [2024-11-20 07:17:12.415050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.996 [2024-11-20 07:17:12.415060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.996 [2024-11-20 07:17:12.415077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.996 [2024-11-20 07:17:12.415084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.996 [2024-11-20 07:17:12.415094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.996 [2024-11-20 07:17:12.415101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.996 [2024-11-20 07:17:12.415109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.996 [2024-11-20 07:17:12.415116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.996 [2024-11-20 07:17:12.415124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.996 [2024-11-20 07:17:12.415131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.996 [2024-11-20 07:17:12.415140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:21:07.996 [2024-11-20 07:17:12.415146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.996 [2024-11-20 07:17:12.415155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.996 [2024-11-20 07:17:12.415161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.996 [2024-11-20 07:17:12.415170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.996 [2024-11-20 07:17:12.415176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.996 [2024-11-20 07:17:12.415185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.996 [2024-11-20 07:17:12.415192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.996 [2024-11-20 07:17:12.415200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.996 [2024-11-20 07:17:12.415207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.996 [2024-11-20 07:17:12.415216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.996 [2024-11-20 07:17:12.415222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.996 [2024-11-20 07:17:12.415231] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.996 [2024-11-20 07:17:12.415237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.996 [2024-11-20 07:17:12.415245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.996 [2024-11-20 07:17:12.415252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.996 [2024-11-20 07:17:12.415260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.996 [2024-11-20 07:17:12.415269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.996 [2024-11-20 07:17:12.415277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.996 [2024-11-20 07:17:12.415284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.996 [2024-11-20 07:17:12.415292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.996 [2024-11-20 07:17:12.415299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.996 [2024-11-20 07:17:12.415308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.997 [2024-11-20 07:17:12.415314] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.997 [2024-11-20 07:17:12.415322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.997 [2024-11-20 07:17:12.415329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.997 [2024-11-20 07:17:12.415337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.997 [2024-11-20 07:17:12.415344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.997 [2024-11-20 07:17:12.415352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.997 [2024-11-20 07:17:12.415358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.997 [2024-11-20 07:17:12.415366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.997 [2024-11-20 07:17:12.415373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.997 [2024-11-20 07:17:12.415381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.997 [2024-11-20 07:17:12.415388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.997 [2024-11-20 07:17:12.415396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.997 [2024-11-20 07:17:12.415403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.997 [2024-11-20 07:17:12.415411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.997 [2024-11-20 07:17:12.415417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.997 [2024-11-20 07:17:12.415425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.997 [2024-11-20 07:17:12.415432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.997 [2024-11-20 07:17:12.415440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.997 [2024-11-20 07:17:12.415447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.997 [2024-11-20 07:17:12.415455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.997 [2024-11-20 07:17:12.415466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.997 [2024-11-20 07:17:12.415474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.997 [2024-11-20 07:17:12.415481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.997 [2024-11-20 
07:17:12.415489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.997 [2024-11-20 07:17:12.415496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.997 [2024-11-20 07:17:12.415504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.997 [2024-11-20 07:17:12.415511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.997 [2024-11-20 07:17:12.415519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.997 [2024-11-20 07:17:12.415526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.997 [2024-11-20 07:17:12.415534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.997 [2024-11-20 07:17:12.415540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.997 [2024-11-20 07:17:12.415549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.997 [2024-11-20 07:17:12.415555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.997 [2024-11-20 07:17:12.415563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.997 [2024-11-20 07:17:12.415570] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.997 [2024-11-20 07:17:12.415578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.997 [2024-11-20 07:17:12.415585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.997 [2024-11-20 07:17:12.415594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.997 [2024-11-20 07:17:12.415600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.997 [2024-11-20 07:17:12.415608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.997 [2024-11-20 07:17:12.415615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.997 [2024-11-20 07:17:12.415623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.997 [2024-11-20 07:17:12.415629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.997 [2024-11-20 07:17:12.415638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.997 [2024-11-20 07:17:12.415644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.997 [2024-11-20 07:17:12.415654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 
nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.997 [2024-11-20 07:17:12.415660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.997 [2024-11-20 07:17:12.415668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.997 [2024-11-20 07:17:12.415675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.997 [2024-11-20 07:17:12.415683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.997 [2024-11-20 07:17:12.415690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.997 [2024-11-20 07:17:12.415698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.997 [2024-11-20 07:17:12.415704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.997 [2024-11-20 07:17:12.415713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.997 [2024-11-20 07:17:12.415719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.997 [2024-11-20 07:17:12.415727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.997 [2024-11-20 07:17:12.415734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:07.997 [2024-11-20 07:17:12.415742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.997 [2024-11-20 07:17:12.415748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.997 [2024-11-20 07:17:12.415757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.997 [2024-11-20 07:17:12.415763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.997 [2024-11-20 07:17:12.415772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.997 [2024-11-20 07:17:12.415778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.997 [2024-11-20 07:17:12.415786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.998 [2024-11-20 07:17:12.415793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.998 [2024-11-20 07:17:12.415802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.998 [2024-11-20 07:17:12.415808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.998 [2024-11-20 07:17:12.415816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.998 [2024-11-20 07:17:12.415823] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.998 [2024-11-20 07:17:12.415831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.998 [2024-11-20 07:17:12.415839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.998 [2024-11-20 07:17:12.415847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.998 [2024-11-20 07:17:12.415853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.998 [2024-11-20 07:17:12.415861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.998 [2024-11-20 07:17:12.415868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.998 [2024-11-20 07:17:12.415876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.998 [2024-11-20 07:17:12.415882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.998 [2024-11-20 07:17:12.415891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.998 [2024-11-20 07:17:12.415898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.998 [2024-11-20 07:17:12.415906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.998 [2024-11-20 07:17:12.415913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.998 [2024-11-20 07:17:12.415921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.998 [2024-11-20 07:17:12.415930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.998 [2024-11-20 07:17:12.415938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.998 [2024-11-20 07:17:12.415945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.998 [2024-11-20 07:17:12.415960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.998 [2024-11-20 07:17:12.415966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.998 [2024-11-20 07:17:12.415975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.998 [2024-11-20 07:17:12.415981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.998 [2024-11-20 07:17:12.415990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.998 [2024-11-20 07:17:12.415996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0
00:21:07.998 [2024-11-20 07:17:12.416005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:07.998 [2024-11-20 07:17:12.416011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:07.998 [2024-11-20 07:17:12.416020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:07.998 [2024-11-20 07:17:12.416027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:07.998 [2024-11-20 07:17:12.416036] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f1eb0 is same with the state(6) to be set
00:21:07.998 [2024-11-20 07:17:12.416131] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:21:07.998 [2024-11-20 07:17:12.416175] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:21:07.998 [2024-11-20 07:17:12.416361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:07.998 [2024-11-20 07:17:12.416375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1446110 with addr=10.0.0.2, port=4420
00:21:07.998 [2024-11-20 07:17:12.416382] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1446110 is same with the state(6) to be set
00:21:07.998 [2024-11-20 07:17:12.416393] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1417930 (9): Bad file descriptor
00:21:07.998 [2024-11-20 07:17:12.417390] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller
00:21:07.998 [2024-11-20 07:17:12.417407] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x140d010 (9): Bad file descriptor
00:21:07.998 [2024-11-20 07:17:12.417417] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1446110 (9): Bad file descriptor
00:21:07.998 [2024-11-20 07:17:12.417427] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state
00:21:07.998 [2024-11-20 07:17:12.417433] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed
00:21:07.998 [2024-11-20 07:17:12.417442] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state.
00:21:07.998 [2024-11-20 07:17:12.417450] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed.
00:21:07.998 [2024-11-20 07:17:12.417503] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state
00:21:07.998 [2024-11-20 07:17:12.417511] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed
00:21:07.998 [2024-11-20 07:17:12.417517] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state.
00:21:07.998 [2024-11-20 07:17:12.417523] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed.
00:21:07.998 [2024-11-20 07:17:12.417922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:07.998 [2024-11-20 07:17:12.417935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140d010 with addr=10.0.0.2, port=4420
00:21:07.998 [2024-11-20 07:17:12.417942] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140d010 is same with the state(6) to be set
00:21:07.998 [2024-11-20 07:17:12.417996] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x140d010 (9): Bad file descriptor
00:21:07.998 [2024-11-20 07:17:12.418011] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143d2f0 (9): Bad file descriptor
00:21:07.998 [2024-11-20 07:17:12.418030] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfebd30 (9): Bad file descriptor
00:21:07.998 [2024-11-20 07:17:12.418045] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfe8ce0 (9): Bad file descriptor
00:21:07.998 [2024-11-20 07:17:12.418060] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144a750 (9): Bad file descriptor
00:21:07.998 [2024-11-20 07:17:12.418074] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x140ce30 (9): Bad file descriptor
00:21:07.998 [2024-11-20 07:17:12.418094] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfe01d0 (9): Bad file descriptor
00:21:07.998 [2024-11-20 07:17:12.418161] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state
00:21:07.998 [2024-11-20 07:17:12.418174] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed
00:21:07.998 [2024-11-20 07:17:12.418181] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: 
[nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:21:07.998 [2024-11-20 07:17:12.418188] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 00:21:07.998 [2024-11-20 07:17:12.418231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.998 [2024-11-20 07:17:12.418239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.998 [2024-11-20 07:17:12.418251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.998 [2024-11-20 07:17:12.418258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.998 [2024-11-20 07:17:12.418267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.998 [2024-11-20 07:17:12.418273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.999 [2024-11-20 07:17:12.418282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.999 [2024-11-20 07:17:12.418288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.999 [2024-11-20 07:17:12.418297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.999 [2024-11-20 07:17:12.418304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.999 [2024-11-20 
07:17:12.418312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.999 [2024-11-20 07:17:12.418318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.999 [2024-11-20 07:17:12.418326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.999 [2024-11-20 07:17:12.418333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.999 [2024-11-20 07:17:12.418341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.999 [2024-11-20 07:17:12.418348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.999 [2024-11-20 07:17:12.418356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.999 [2024-11-20 07:17:12.418363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.999 [2024-11-20 07:17:12.418371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.999 [2024-11-20 07:17:12.418378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.999 [2024-11-20 07:17:12.418386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.999 [2024-11-20 07:17:12.418392] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.999 [2024-11-20 07:17:12.418403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.999 [2024-11-20 07:17:12.418410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.999 [2024-11-20 07:17:12.418421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.999 [2024-11-20 07:17:12.418428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.999 [2024-11-20 07:17:12.418436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.999 [2024-11-20 07:17:12.418443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.999 [2024-11-20 07:17:12.418451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.999 [2024-11-20 07:17:12.418457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.999 [2024-11-20 07:17:12.418466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.999 [2024-11-20 07:17:12.418472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.999 [2024-11-20 07:17:12.418480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 
nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.999 [2024-11-20 07:17:12.418487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.999 [2024-11-20 07:17:12.418495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.999 [2024-11-20 07:17:12.418502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.999 [2024-11-20 07:17:12.418510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.999 [2024-11-20 07:17:12.418517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.999 [2024-11-20 07:17:12.418525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.999 [2024-11-20 07:17:12.418532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.999 [2024-11-20 07:17:12.418540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.999 [2024-11-20 07:17:12.418546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.999 [2024-11-20 07:17:12.418555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.999 [2024-11-20 07:17:12.418561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:07.999 [2024-11-20 07:17:12.418569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.999 [2024-11-20 07:17:12.418576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.999 [2024-11-20 07:17:12.418584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.999 [2024-11-20 07:17:12.418592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.999 [2024-11-20 07:17:12.418601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.999 [2024-11-20 07:17:12.418607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.999 [2024-11-20 07:17:12.418615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.999 [2024-11-20 07:17:12.418622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.999 [2024-11-20 07:17:12.418630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.999 [2024-11-20 07:17:12.418642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.999 [2024-11-20 07:17:12.418651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.999 [2024-11-20 07:17:12.418657] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.999 [2024-11-20 07:17:12.418665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.999 [2024-11-20 07:17:12.418672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.999 [2024-11-20 07:17:12.418680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.999 [2024-11-20 07:17:12.418687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.999 [2024-11-20 07:17:12.418695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:07.999 [2024-11-20 07:17:12.418701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.000 [2024-11-20 07:17:12.418710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.000 [2024-11-20 07:17:12.418716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.000 [2024-11-20 07:17:12.418724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.000 [2024-11-20 07:17:12.418731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.000 [2024-11-20 07:17:12.418739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.000 [2024-11-20 07:17:12.418745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.000 [2024-11-20 07:17:12.418754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.000 [2024-11-20 07:17:12.418760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.000 [2024-11-20 07:17:12.418768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.000 [2024-11-20 07:17:12.418775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.000 [2024-11-20 07:17:12.418784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.000 [2024-11-20 07:17:12.418791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.000 [2024-11-20 07:17:12.418799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.000 [2024-11-20 07:17:12.418806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.000 [2024-11-20 07:17:12.418813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.000 [2024-11-20 07:17:12.418820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:21:08.000 [2024-11-20 07:17:12.418828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.000 [2024-11-20 07:17:12.418835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.000 [2024-11-20 07:17:12.418843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.000 [2024-11-20 07:17:12.418849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.000 [2024-11-20 07:17:12.418857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.000 [2024-11-20 07:17:12.418864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.000 [2024-11-20 07:17:12.418872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.000 [2024-11-20 07:17:12.418879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.000 [2024-11-20 07:17:12.418887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.000 [2024-11-20 07:17:12.418894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.000 [2024-11-20 07:17:12.418901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.000 [2024-11-20 
07:17:12.418908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.000 [2024-11-20 07:17:12.418916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.000 [2024-11-20 07:17:12.418923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.000 [2024-11-20 07:17:12.418931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.000 [2024-11-20 07:17:12.418938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.000 [2024-11-20 07:17:12.418946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.000 [2024-11-20 07:17:12.418964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.000 [2024-11-20 07:17:12.418972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.000 [2024-11-20 07:17:12.418980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.000 [2024-11-20 07:17:12.418988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.000 [2024-11-20 07:17:12.418995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.000 [2024-11-20 07:17:12.419003] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.000 [2024-11-20 07:17:12.419009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.000 [2024-11-20 07:17:12.419018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.000 [2024-11-20 07:17:12.419024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.000 [2024-11-20 07:17:12.419032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.000 [2024-11-20 07:17:12.419038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.000 [2024-11-20 07:17:12.419046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.000 [2024-11-20 07:17:12.419053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.000 [2024-11-20 07:17:12.419061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.000 [2024-11-20 07:17:12.419067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.000 [2024-11-20 07:17:12.419076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.000 [2024-11-20 07:17:12.419082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.000 [2024-11-20 07:17:12.419091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.000 [2024-11-20 07:17:12.419097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.000 [2024-11-20 07:17:12.419105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.000 [2024-11-20 07:17:12.419112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.000 [2024-11-20 07:17:12.419120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.000 [2024-11-20 07:17:12.419126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.000 [2024-11-20 07:17:12.419134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.000 [2024-11-20 07:17:12.419141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.000 [2024-11-20 07:17:12.419149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.000 [2024-11-20 07:17:12.419155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.000 [2024-11-20 07:17:12.419164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.000 
[2024-11-20 07:17:12.419171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.000 [2024-11-20 07:17:12.419180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.000 [2024-11-20 07:17:12.419186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.001 [2024-11-20 07:17:12.419194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.001 [2024-11-20 07:17:12.419201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.001 [2024-11-20 07:17:12.419208] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f03a0 is same with the state(6) to be set 00:21:08.001 [2024-11-20 07:17:12.420215] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:21:08.001 [2024-11-20 07:17:12.420488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:08.001 [2024-11-20 07:17:12.420502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfec1b0 with addr=10.0.0.2, port=4420 00:21:08.001 [2024-11-20 07:17:12.420510] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfec1b0 is same with the state(6) to be set 00:21:08.001 [2024-11-20 07:17:12.420766] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfec1b0 (9): Bad file descriptor 00:21:08.001 [2024-11-20 07:17:12.420804] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:21:08.001 [2024-11-20 07:17:12.420811] 
nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:21:08.001 [2024-11-20 07:17:12.420819] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:21:08.001 [2024-11-20 07:17:12.420825] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:21:08.001 [2024-11-20 07:17:12.423959] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller 00:21:08.001 [2024-11-20 07:17:12.424245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:08.001 [2024-11-20 07:17:12.424259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1417930 with addr=10.0.0.2, port=4420 00:21:08.001 [2024-11-20 07:17:12.424267] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1417930 is same with the state(6) to be set 00:21:08.001 [2024-11-20 07:17:12.424297] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1417930 (9): Bad file descriptor 00:21:08.001 [2024-11-20 07:17:12.424328] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:21:08.001 [2024-11-20 07:17:12.424335] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:21:08.001 [2024-11-20 07:17:12.424341] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:21:08.001 [2024-11-20 07:17:12.424347] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 
00:21:08.001 [2024-11-20 07:17:12.425009] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller 00:21:08.001 [2024-11-20 07:17:12.425207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:08.001 [2024-11-20 07:17:12.425220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1446110 with addr=10.0.0.2, port=4420 00:21:08.001 [2024-11-20 07:17:12.425227] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1446110 is same with the state(6) to be set 00:21:08.001 [2024-11-20 07:17:12.425261] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1446110 (9): Bad file descriptor 00:21:08.001 [2024-11-20 07:17:12.425291] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state 00:21:08.001 [2024-11-20 07:17:12.425298] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed 00:21:08.001 [2024-11-20 07:17:12.425304] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state. 00:21:08.001 [2024-11-20 07:17:12.425310] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed. 
00:21:08.001 [2024-11-20 07:17:12.427597] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller 00:21:08.001 [2024-11-20 07:17:12.427852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:08.001 [2024-11-20 07:17:12.427866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140d010 with addr=10.0.0.2, port=4420 00:21:08.001 [2024-11-20 07:17:12.427873] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140d010 is same with the state(6) to be set 00:21:08.001 [2024-11-20 07:17:12.427903] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x140d010 (9): Bad file descriptor 00:21:08.001 [2024-11-20 07:17:12.427933] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:21:08.001 [2024-11-20 07:17:12.427940] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:21:08.001 [2024-11-20 07:17:12.427953] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:21:08.001 [2024-11-20 07:17:12.427960] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 
00:21:08.001 [2024-11-20 07:17:12.428128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.001 [2024-11-20 07:17:12.428140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.001 [2024-11-20 07:17:12.428151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.001 [2024-11-20 07:17:12.428158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.001 [2024-11-20 07:17:12.428167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.001 [2024-11-20 07:17:12.428174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.001 [2024-11-20 07:17:12.428182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.001 [2024-11-20 07:17:12.428189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.001 [2024-11-20 07:17:12.428197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.001 [2024-11-20 07:17:12.428204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.001 [2024-11-20 07:17:12.428213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.001 [2024-11-20 07:17:12.428220] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.001 [2024-11-20 07:17:12.428228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.001 [2024-11-20 07:17:12.428238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.001 [2024-11-20 07:17:12.428247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.001 [2024-11-20 07:17:12.428254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.001 [2024-11-20 07:17:12.428262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.001 [2024-11-20 07:17:12.428268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.001 [2024-11-20 07:17:12.428277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.001 [2024-11-20 07:17:12.428283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.001 [2024-11-20 07:17:12.428292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.001 [2024-11-20 07:17:12.428298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.001 [2024-11-20 07:17:12.428306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 
nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.001 [2024-11-20 07:17:12.428313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.001 [2024-11-20 07:17:12.428321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.001 [2024-11-20 07:17:12.428328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.001 [2024-11-20 07:17:12.428336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.001 [2024-11-20 07:17:12.428343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.001 [2024-11-20 07:17:12.428351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.001 [2024-11-20 07:17:12.428358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.002 [2024-11-20 07:17:12.428366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.002 [2024-11-20 07:17:12.428373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.002 [2024-11-20 07:17:12.428381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.002 [2024-11-20 07:17:12.428387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:08.002 [2024-11-20 07:17:12.428396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.002 [2024-11-20 07:17:12.428402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.002 [2024-11-20 07:17:12.428410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.002 [2024-11-20 07:17:12.428417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.002 [2024-11-20 07:17:12.428427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.002 [2024-11-20 07:17:12.428434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.002 [2024-11-20 07:17:12.428442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.002 [2024-11-20 07:17:12.428449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.002 [2024-11-20 07:17:12.428457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.002 [2024-11-20 07:17:12.428464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.002 [2024-11-20 07:17:12.428472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.002 [2024-11-20 07:17:12.428478] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.002 [2024-11-20 07:17:12.428487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.002 [2024-11-20 07:17:12.428494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.002 [2024-11-20 07:17:12.428503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.002 [2024-11-20 07:17:12.428509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.002 [2024-11-20 07:17:12.428520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.002 [2024-11-20 07:17:12.428527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.002 [2024-11-20 07:17:12.428535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.002 [2024-11-20 07:17:12.428542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.002 [2024-11-20 07:17:12.428550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.002 [2024-11-20 07:17:12.428557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.002 [2024-11-20 07:17:12.428567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.002 [2024-11-20 07:17:12.428574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.002 [2024-11-20 07:17:12.428583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.002 [2024-11-20 07:17:12.428589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.002 [2024-11-20 07:17:12.428598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.002 [2024-11-20 07:17:12.428604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.002 [2024-11-20 07:17:12.428613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.002 [2024-11-20 07:17:12.428622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.002 [2024-11-20 07:17:12.428630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.002 [2024-11-20 07:17:12.428637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.002 [2024-11-20 07:17:12.428645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.002 [2024-11-20 07:17:12.428652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:21:08.002 [2024-11-20 07:17:12.428660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.002 [2024-11-20 07:17:12.428667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.002 [2024-11-20 07:17:12.428675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.002 [2024-11-20 07:17:12.428681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.002 [2024-11-20 07:17:12.428689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.002 [2024-11-20 07:17:12.428696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.002 [2024-11-20 07:17:12.428704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.002 [2024-11-20 07:17:12.428710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.002 [2024-11-20 07:17:12.428719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.002 [2024-11-20 07:17:12.428725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.002 [2024-11-20 07:17:12.428733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.002 [2024-11-20 
07:17:12.428740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.002 [2024-11-20 07:17:12.428748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.002 [2024-11-20 07:17:12.428754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.002 [2024-11-20 07:17:12.428763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.002 [2024-11-20 07:17:12.428769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.002 [2024-11-20 07:17:12.428777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.002 [2024-11-20 07:17:12.428783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.002 [2024-11-20 07:17:12.428792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.002 [2024-11-20 07:17:12.428799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.002 [2024-11-20 07:17:12.428809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.002 [2024-11-20 07:17:12.428815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.002 [2024-11-20 07:17:12.428823] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.002 [2024-11-20 07:17:12.428831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.002 [2024-11-20 07:17:12.428839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.002 [2024-11-20 07:17:12.428846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.002 [2024-11-20 07:17:12.428854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.003 [2024-11-20 07:17:12.428861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.003 [2024-11-20 07:17:12.428870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.003 [2024-11-20 07:17:12.428876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.003 [2024-11-20 07:17:12.428884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.003 [2024-11-20 07:17:12.428891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.003 [2024-11-20 07:17:12.428900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.003 [2024-11-20 07:17:12.428906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.003 [2024-11-20 07:17:12.428915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.003 [2024-11-20 07:17:12.428922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.003 [2024-11-20 07:17:12.428930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.003 [2024-11-20 07:17:12.428936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.003 [2024-11-20 07:17:12.428944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.003 [2024-11-20 07:17:12.428956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.003 [2024-11-20 07:17:12.428964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.003 [2024-11-20 07:17:12.428971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.003 [2024-11-20 07:17:12.428979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.003 [2024-11-20 07:17:12.428986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.003 [2024-11-20 07:17:12.428994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.003 
[2024-11-20 07:17:12.429006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.003 [2024-11-20 07:17:12.429014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.003 [2024-11-20 07:17:12.429021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.003 [2024-11-20 07:17:12.429029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.003 [2024-11-20 07:17:12.429035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.003 [2024-11-20 07:17:12.429044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.003 [2024-11-20 07:17:12.429052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.003 [2024-11-20 07:17:12.429061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.003 [2024-11-20 07:17:12.429068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.003 [2024-11-20 07:17:12.429076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.003 [2024-11-20 07:17:12.429084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.003 [2024-11-20 07:17:12.429092] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.003 [2024-11-20 07:17:12.429099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.003 [2024-11-20 07:17:12.429107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.003 [2024-11-20 07:17:12.429114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.003 [2024-11-20 07:17:12.429121] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f15e0 is same with the state(6) to be set 00:21:08.003 [2024-11-20 07:17:12.430165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.003 [2024-11-20 07:17:12.430181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.003 [2024-11-20 07:17:12.430192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.003 [2024-11-20 07:17:12.430200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.003 [2024-11-20 07:17:12.430208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.003 [2024-11-20 07:17:12.430215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.003 [2024-11-20 07:17:12.430224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 
nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.003 [2024-11-20 07:17:12.430232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.003 [2024-11-20 07:17:12.430240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.003 [2024-11-20 07:17:12.430249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.003 [2024-11-20 07:17:12.430257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.003 [2024-11-20 07:17:12.430264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.003 [2024-11-20 07:17:12.430272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.003 [2024-11-20 07:17:12.430279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.003 [2024-11-20 07:17:12.430287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.003 [2024-11-20 07:17:12.430295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.003 [2024-11-20 07:17:12.430303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.003 [2024-11-20 07:17:12.430310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:08.003 [2024-11-20 07:17:12.430319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.003 [2024-11-20 07:17:12.430326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.003 [2024-11-20 07:17:12.430334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.003 [2024-11-20 07:17:12.430341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.003 [2024-11-20 07:17:12.430350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.003 [2024-11-20 07:17:12.430356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.003 [2024-11-20 07:17:12.430364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.003 [2024-11-20 07:17:12.430372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.003 [2024-11-20 07:17:12.430380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.003 [2024-11-20 07:17:12.430387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.003 [2024-11-20 07:17:12.430396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.003 [2024-11-20 07:17:12.430403] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.004 [2024-11-20 07:17:12.430411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.004 [2024-11-20 07:17:12.430418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.004 [2024-11-20 07:17:12.430426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.004 [2024-11-20 07:17:12.430433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.004 [2024-11-20 07:17:12.430443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.004 [2024-11-20 07:17:12.430450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.004 [2024-11-20 07:17:12.430459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.004 [2024-11-20 07:17:12.430465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.004 [2024-11-20 07:17:12.430473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.004 [2024-11-20 07:17:12.430480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.004 [2024-11-20 07:17:12.430489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.004 [2024-11-20 07:17:12.430496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.004 [2024-11-20 07:17:12.430504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.004 [2024-11-20 07:17:12.430510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.004 [2024-11-20 07:17:12.430518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.004 [2024-11-20 07:17:12.430525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.004 [2024-11-20 07:17:12.430533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.004 [2024-11-20 07:17:12.430540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.004 [2024-11-20 07:17:12.430548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.004 [2024-11-20 07:17:12.430555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.004 [2024-11-20 07:17:12.430563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.004 [2024-11-20 07:17:12.430570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:21:08.004 [2024-11-20 07:17:12.430578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.004 [2024-11-20 07:17:12.430584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.004 [2024-11-20 07:17:12.430592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.004 [2024-11-20 07:17:12.430599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.004 [2024-11-20 07:17:12.430607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.004 [2024-11-20 07:17:12.430614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.004 [2024-11-20 07:17:12.430622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.004 [2024-11-20 07:17:12.430630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.004 [2024-11-20 07:17:12.430639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.004 [2024-11-20 07:17:12.430645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.004 [2024-11-20 07:17:12.430654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.004 [2024-11-20 
07:17:12.430660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.004 [2024-11-20 07:17:12.430668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.004 [2024-11-20 07:17:12.430676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.004 [2024-11-20 07:17:12.430684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.004 [2024-11-20 07:17:12.430691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.004 [2024-11-20 07:17:12.430698] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c21d0 is same with the state(6) to be set 00:21:08.004 [2024-11-20 07:17:12.431599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.004 [2024-11-20 07:17:12.431614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.004 [2024-11-20 07:17:12.431624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.004 [2024-11-20 07:17:12.431632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.004 [2024-11-20 07:17:12.431640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.004 [2024-11-20 07:17:12.431647] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.004 [2024-11-20 07:17:12.431656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.004 [2024-11-20 07:17:12.431662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.004 [2024-11-20 07:17:12.431671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.004 [2024-11-20 07:17:12.431678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.004 [2024-11-20 07:17:12.431686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.004 [2024-11-20 07:17:12.431693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.004 [2024-11-20 07:17:12.431701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.004 [2024-11-20 07:17:12.431708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.004 [2024-11-20 07:17:12.431717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.004 [2024-11-20 07:17:12.431726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.004 [2024-11-20 07:17:12.431734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:21:08.004 [2024-11-20 07:17:12.431741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.005 [2024-11-20 07:17:12.431749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.005 [2024-11-20 07:17:12.431757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.005 [2024-11-20 07:17:12.431765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.005 [2024-11-20 07:17:12.431772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.005 [2024-11-20 07:17:12.431780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.005 [2024-11-20 07:17:12.431787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.005 [2024-11-20 07:17:12.431795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.005 [2024-11-20 07:17:12.431802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.005 [2024-11-20 07:17:12.431810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.005 [2024-11-20 07:17:12.431817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.005 [2024-11-20 
07:17:12.431825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.005 [2024-11-20 07:17:12.431832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.005 [2024-11-20 07:17:12.431840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.005 [2024-11-20 07:17:12.431847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.005 [2024-11-20 07:17:12.431855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.005 [2024-11-20 07:17:12.431862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.005 [2024-11-20 07:17:12.431870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.005 [2024-11-20 07:17:12.431877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.005 [2024-11-20 07:17:12.431885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.005 [2024-11-20 07:17:12.431892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.005 [2024-11-20 07:17:12.431900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.005 [2024-11-20 07:17:12.431907] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.005 [2024-11-20 07:17:12.431915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.005 [2024-11-20 07:17:12.431923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.005 [2024-11-20 07:17:12.431932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.005 [2024-11-20 07:17:12.431938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.005 [2024-11-20 07:17:12.431946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.005 [2024-11-20 07:17:12.431957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.005 [2024-11-20 07:17:12.431966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.005 [2024-11-20 07:17:12.431972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.005 [2024-11-20 07:17:12.431981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.005 [2024-11-20 07:17:12.431988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.005 [2024-11-20 07:17:12.431996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 
nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.005 [2024-11-20 07:17:12.432003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.005 [2024-11-20 07:17:12.432012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.005 [2024-11-20 07:17:12.432018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.005 [2024-11-20 07:17:12.432027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.005 [2024-11-20 07:17:12.432033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.005 [2024-11-20 07:17:12.432042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.005 [2024-11-20 07:17:12.432049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.005 [2024-11-20 07:17:12.432057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.005 [2024-11-20 07:17:12.432064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.005 [2024-11-20 07:17:12.432072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.005 [2024-11-20 07:17:12.432079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:08.005 [2024-11-20 07:17:12.432087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.005 [2024-11-20 07:17:12.432094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.005 [2024-11-20 07:17:12.432102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.005 [2024-11-20 07:17:12.432108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.005 [2024-11-20 07:17:12.432118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.005 [2024-11-20 07:17:12.432125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.005 [2024-11-20 07:17:12.432133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.005 [2024-11-20 07:17:12.432139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.005 [2024-11-20 07:17:12.432148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.005 [2024-11-20 07:17:12.432154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.005 [2024-11-20 07:17:12.432162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.005 [2024-11-20 07:17:12.432169] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.005 [2024-11-20 07:17:12.432177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.005 [2024-11-20 07:17:12.432184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.005 [2024-11-20 07:17:12.432192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.005 [2024-11-20 07:17:12.432199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.005 [2024-11-20 07:17:12.432207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.005 [2024-11-20 07:17:12.432214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.005 [2024-11-20 07:17:12.432224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.005 [2024-11-20 07:17:12.432231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.005 [2024-11-20 07:17:12.432239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.006 [2024-11-20 07:17:12.432246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.006 [2024-11-20 07:17:12.432254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.006 [2024-11-20 07:17:12.432261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.006 [2024-11-20 07:17:12.432270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.006 [2024-11-20 07:17:12.432276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.006 [2024-11-20 07:17:12.432285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.006 [2024-11-20 07:17:12.432292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.006 [2024-11-20 07:17:12.432300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.006 [2024-11-20 07:17:12.432308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.006 [2024-11-20 07:17:12.432316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.006 [2024-11-20 07:17:12.432323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.006 [2024-11-20 07:17:12.432331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.006 [2024-11-20 07:17:12.432338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:21:08.006 [2024-11-20 07:17:12.432347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.006 [2024-11-20 07:17:12.432354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.006 [2024-11-20 07:17:12.432363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.006 [2024-11-20 07:17:12.432370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.006 [2024-11-20 07:17:12.432379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.006 [2024-11-20 07:17:12.432386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.006 [2024-11-20 07:17:12.432395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.006 [2024-11-20 07:17:12.432401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.006 [2024-11-20 07:17:12.432410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.006 [2024-11-20 07:17:12.432417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.006 [2024-11-20 07:17:12.432425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.006 [2024-11-20 
07:17:12.432432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.006 [2024-11-20 07:17:12.432440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.006 [2024-11-20 07:17:12.432447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.006 [2024-11-20 07:17:12.432455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.006 [2024-11-20 07:17:12.432462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.006 [2024-11-20 07:17:12.432470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.006 [2024-11-20 07:17:12.432477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.006 [2024-11-20 07:17:12.432485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.006 [2024-11-20 07:17:12.432492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.006 [2024-11-20 07:17:12.432502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.006 [2024-11-20 07:17:12.432509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.006 [2024-11-20 07:17:12.432517] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.006 [2024-11-20 07:17:12.432524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.006 [2024-11-20 07:17:12.432533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.006 [2024-11-20 07:17:12.432539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.006 [2024-11-20 07:17:12.432547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.006 [2024-11-20 07:17:12.432554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.006 [2024-11-20 07:17:12.432562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.006 [2024-11-20 07:17:12.432569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.006 [2024-11-20 07:17:12.432577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.006 [2024-11-20 07:17:12.432584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.006 [2024-11-20 07:17:12.432591] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13ef4f0 is same with the state(6) to be set 00:21:08.006 [2024-11-20 07:17:12.433587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.006 [2024-11-20 07:17:12.433600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.006 [2024-11-20 07:17:12.433611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.006 [2024-11-20 07:17:12.433618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.006 [2024-11-20 07:17:12.433627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.006 [2024-11-20 07:17:12.433634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.006 [2024-11-20 07:17:12.433642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.006 [2024-11-20 07:17:12.433648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.006 [2024-11-20 07:17:12.433657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.006 [2024-11-20 07:17:12.433663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.006 [2024-11-20 07:17:12.433672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.006 [2024-11-20 07:17:12.433679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.006 [2024-11-20 
07:17:12.433691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.006 [2024-11-20 07:17:12.433697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.006 [2024-11-20 07:17:12.433705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.006 [2024-11-20 07:17:12.433712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.006 [2024-11-20 07:17:12.433721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.006 [2024-11-20 07:17:12.433728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.007 [2024-11-20 07:17:12.433736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.007 [2024-11-20 07:17:12.433743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.007 [2024-11-20 07:17:12.433751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.007 [2024-11-20 07:17:12.433758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.007 [2024-11-20 07:17:12.433766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.007 [2024-11-20 07:17:12.433773] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.007 [2024-11-20 07:17:12.433782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.007 [2024-11-20 07:17:12.433789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.007 [2024-11-20 07:17:12.433798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.007 [2024-11-20 07:17:12.433805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.007 [2024-11-20 07:17:12.433814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.007 [2024-11-20 07:17:12.433820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.007 [2024-11-20 07:17:12.433829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.007 [2024-11-20 07:17:12.433835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.007 [2024-11-20 07:17:12.433844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.007 [2024-11-20 07:17:12.433851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.007 [2024-11-20 07:17:12.433860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 
nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.007 [2024-11-20 07:17:12.433866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.007 [2024-11-20 07:17:12.433875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.007 [2024-11-20 07:17:12.433883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.007 [2024-11-20 07:17:12.433891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.007 [2024-11-20 07:17:12.433898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.007 [2024-11-20 07:17:12.433907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.007 [2024-11-20 07:17:12.433914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.007 [2024-11-20 07:17:12.433922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.007 [2024-11-20 07:17:12.433929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.007 [2024-11-20 07:17:12.433937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.007 [2024-11-20 07:17:12.433944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:08.007 [2024-11-20 07:17:12.433956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:08.007 [2024-11-20 07:17:12.433962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 40 more identical READ / ABORTED - SQ DELETION pairs for cid:24-63, lba:27648-32640 ...]
00:21:08.008 [2024-11-20 07:17:12.434572] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f3450 is same with the state(6) to be set
00:21:08.008 [2024-11-20 07:17:12.435586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:08.008 [2024-11-20 07:17:12.435598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ / ABORTED - SQ DELETION pairs for READ cid:5-60 (lba:25216-32256), WRITE cid:0-3 (lba:32768-33152), and READ cid:61-63 (lba:32384-32640) ...]
00:21:08.010 [2024-11-20 07:17:12.436582] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x233afc0 is same with the state(6) to be set
00:21:08.010 [2024-11-20 07:17:12.437586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:08.010 [2024-11-20 07:17:12.437599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ / ABORTED - SQ DELETION pairs continue for cid:1-12 (lba:16512-17920) ...]
00:21:08.011 [2024-11-20 07:17:12.437789] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.011 [2024-11-20 07:17:12.437797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.011 [2024-11-20 07:17:12.437804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.011 [2024-11-20 07:17:12.437812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.011 [2024-11-20 07:17:12.437819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.011 [2024-11-20 07:17:12.437830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.011 [2024-11-20 07:17:12.437837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.011 [2024-11-20 07:17:12.437845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.011 [2024-11-20 07:17:12.437852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.011 [2024-11-20 07:17:12.437861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.011 [2024-11-20 07:17:12.437868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.011 [2024-11-20 07:17:12.437876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 
nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.011 [2024-11-20 07:17:12.437883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.011 [2024-11-20 07:17:12.437891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.011 [2024-11-20 07:17:12.437898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.011 [2024-11-20 07:17:12.437906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.011 [2024-11-20 07:17:12.437913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.011 [2024-11-20 07:17:12.437921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.011 [2024-11-20 07:17:12.437928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.011 [2024-11-20 07:17:12.437936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.011 [2024-11-20 07:17:12.437943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.011 [2024-11-20 07:17:12.437954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.011 [2024-11-20 07:17:12.437961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:08.011 [2024-11-20 07:17:12.437969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.011 [2024-11-20 07:17:12.437976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.011 [2024-11-20 07:17:12.437984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.011 [2024-11-20 07:17:12.437991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.011 [2024-11-20 07:17:12.437999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.011 [2024-11-20 07:17:12.438006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.011 [2024-11-20 07:17:12.438015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.011 [2024-11-20 07:17:12.438024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.011 [2024-11-20 07:17:12.438032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.011 [2024-11-20 07:17:12.438038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.011 [2024-11-20 07:17:12.438047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.011 [2024-11-20 07:17:12.438053] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.011 [2024-11-20 07:17:12.438062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.011 [2024-11-20 07:17:12.438069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.011 [2024-11-20 07:17:12.438077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.011 [2024-11-20 07:17:12.438083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.011 [2024-11-20 07:17:12.438091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.011 [2024-11-20 07:17:12.438098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.011 [2024-11-20 07:17:12.438106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.011 [2024-11-20 07:17:12.438112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.011 [2024-11-20 07:17:12.438120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.011 [2024-11-20 07:17:12.438127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.011 [2024-11-20 07:17:12.438135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.011 [2024-11-20 07:17:12.438141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.011 [2024-11-20 07:17:12.438150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.011 [2024-11-20 07:17:12.438156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.011 [2024-11-20 07:17:12.438164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.011 [2024-11-20 07:17:12.438171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.012 [2024-11-20 07:17:12.438180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.012 [2024-11-20 07:17:12.438186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.012 [2024-11-20 07:17:12.438194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.012 [2024-11-20 07:17:12.438201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.012 [2024-11-20 07:17:12.438210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.012 [2024-11-20 07:17:12.438217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:21:08.012 [2024-11-20 07:17:12.438225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.012 [2024-11-20 07:17:12.438231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.012 [2024-11-20 07:17:12.438239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.012 [2024-11-20 07:17:12.438246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.012 [2024-11-20 07:17:12.438254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.012 [2024-11-20 07:17:12.438261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.012 [2024-11-20 07:17:12.438269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.012 [2024-11-20 07:17:12.438276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.012 [2024-11-20 07:17:12.438284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.012 [2024-11-20 07:17:12.438290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.012 [2024-11-20 07:17:12.438299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.012 [2024-11-20 
07:17:12.438305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.012 [2024-11-20 07:17:12.438313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.012 [2024-11-20 07:17:12.438320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.012 [2024-11-20 07:17:12.438328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.012 [2024-11-20 07:17:12.438334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.012 [2024-11-20 07:17:12.438342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.012 [2024-11-20 07:17:12.438349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.012 [2024-11-20 07:17:12.438357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.012 [2024-11-20 07:17:12.438363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.012 [2024-11-20 07:17:12.438372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.012 [2024-11-20 07:17:12.438378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.012 [2024-11-20 07:17:12.438386] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.012 [2024-11-20 07:17:12.438394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.012 [2024-11-20 07:17:12.438403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.012 [2024-11-20 07:17:12.438409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.012 [2024-11-20 07:17:12.438417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.012 [2024-11-20 07:17:12.438424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.012 [2024-11-20 07:17:12.438432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.012 [2024-11-20 07:17:12.438438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.012 [2024-11-20 07:17:12.438446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.012 [2024-11-20 07:17:12.438453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.012 [2024-11-20 07:17:12.438461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.012 [2024-11-20 07:17:12.438467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.012 [2024-11-20 07:17:12.438476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.012 [2024-11-20 07:17:12.438482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.012 [2024-11-20 07:17:12.438490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.012 [2024-11-20 07:17:12.438497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.012 [2024-11-20 07:17:12.438505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.012 [2024-11-20 07:17:12.438511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.012 [2024-11-20 07:17:12.438519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.012 [2024-11-20 07:17:12.438526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.012 [2024-11-20 07:17:12.438534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.012 [2024-11-20 07:17:12.438541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.012 [2024-11-20 07:17:12.438549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.012 
[2024-11-20 07:17:12.438555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.012 [2024-11-20 07:17:12.438563] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x122fbf0 is same with the state(6) to be set 00:21:08.012 [2024-11-20 07:17:12.439545] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:21:08.012 [2024-11-20 07:17:12.439560] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:21:08.012 [2024-11-20 07:17:12.439569] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:21:08.012 [2024-11-20 07:17:12.439578] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller 00:21:08.012 [2024-11-20 07:17:12.439652] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress. 00:21:08.013 [2024-11-20 07:17:12.439667] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] Unable to perform failover, already in progress. 
00:21:08.013 [2024-11-20 07:17:12.439732] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller
00:21:08.013 task offset: 29440 on job bdev=Nvme5n1 fails
00:21:08.013
00:21:08.013 Latency(us)
00:21:08.013 [2024-11-20T06:17:12.569Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:08.013 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:08.013 Job: Nvme1n1 ended in about 0.91 seconds with error
00:21:08.013 Verification LBA range: start 0x0 length 0x400
00:21:08.013 Nvme1n1 : 0.91 211.46 13.22 70.49 0.00 224718.58 16298.52 217921.45
00:21:08.013 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:08.013 Job: Nvme2n1 ended in about 0.92 seconds with error
00:21:08.013 Verification LBA range: start 0x0 length 0x400
00:21:08.013 Nvme2n1 : 0.92 214.63 13.41 69.73 0.00 218930.63 17666.23 188743.68
00:21:08.013 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:08.013 Job: Nvme3n1 ended in about 0.92 seconds with error
00:21:08.013 Verification LBA range: start 0x0 length 0x400
00:21:08.013 Nvme3n1 : 0.92 241.49 15.09 36.98 0.00 217654.54 20971.52 205156.17
00:21:08.013 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:08.013 Job: Nvme4n1 ended in about 0.92 seconds with error
00:21:08.013 Verification LBA range: start 0x0 length 0x400
00:21:08.013 Nvme4n1 : 0.92 208.40 13.03 69.47 0.00 216147.26 15956.59 222480.47
00:21:08.013 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:08.013 Job: Nvme5n1 ended in about 0.90 seconds with error
00:21:08.013 Verification LBA range: start 0x0 length 0x400
00:21:08.013 Nvme5n1 : 0.90 213.26 13.33 71.09 0.00 206878.11 2735.42 224304.08
00:21:08.013 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:08.013 Job: Nvme6n1 ended in about 0.91 seconds with error
00:21:08.013 Verification LBA range: start 0x0 length 0x400
00:21:08.013 Nvme6n1 : 0.91 212.11 13.26 70.70 0.00 204159.78 17552.25 224304.08
00:21:08.013 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:08.013 Job: Nvme7n1 ended in about 0.92 seconds with error
00:21:08.013 Verification LBA range: start 0x0 length 0x400
00:21:08.013 Nvme7n1 : 0.92 207.95 13.00 69.32 0.00 204650.85 15500.69 203332.56
00:21:08.013 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:08.013 Job: Nvme8n1 ended in about 0.93 seconds with error
00:21:08.013 Verification LBA range: start 0x0 length 0x400
00:21:08.013 Nvme8n1 : 0.93 211.83 13.24 69.17 0.00 198095.38 15044.79 220656.86
00:21:08.013 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:08.013 Job: Nvme9n1 ended in about 0.90 seconds with error
00:21:08.013 Verification LBA range: start 0x0 length 0x400
00:21:08.013 Nvme9n1 : 0.90 213.02 13.31 71.01 0.00 191270.23 5043.42 233422.14
00:21:08.013 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:08.013 Job: Nvme10n1 ended in about 0.93 seconds with error
00:21:08.013 Verification LBA range: start 0x0 length 0x400
00:21:08.013 Nvme10n1 : 0.93 138.04 8.63 69.02 0.00 258554.06 18122.13 244363.80
00:21:08.013 [2024-11-20T06:17:12.569Z] ===================================================================================================================
00:21:08.013 [2024-11-20T06:17:12.569Z] Total : 2072.19 129.51 666.97 0.00 212954.41 2735.42 244363.80
00:21:08.013 [2024-11-20 07:17:12.471168] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:21:08.013 [2024-11-20 07:17:12.471219] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller
00:21:08.013 [2024-11-20 07:17:12.471487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:08.013 [2024-11-20 07:17:12.471505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock:
*ERROR*: sock connection error of tqpair=0xfe01d0 with addr=10.0.0.2, port=4420 00:21:08.013 [2024-11-20 07:17:12.471516] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfe01d0 is same with the state(6) to be set 00:21:08.013 [2024-11-20 07:17:12.471738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:08.013 [2024-11-20 07:17:12.471749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfebd30 with addr=10.0.0.2, port=4420 00:21:08.013 [2024-11-20 07:17:12.471756] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfebd30 is same with the state(6) to be set 00:21:08.013 [2024-11-20 07:17:12.471889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:08.013 [2024-11-20 07:17:12.471899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe8ce0 with addr=10.0.0.2, port=4420 00:21:08.013 [2024-11-20 07:17:12.471906] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfe8ce0 is same with the state(6) to be set 00:21:08.013 [2024-11-20 07:17:12.472094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:08.013 [2024-11-20 07:17:12.472106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140ce30 with addr=10.0.0.2, port=4420 00:21:08.013 [2024-11-20 07:17:12.472113] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140ce30 is same with the state(6) to be set 00:21:08.013 [2024-11-20 07:17:12.473500] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:21:08.013 [2024-11-20 07:17:12.473515] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller 00:21:08.013 [2024-11-20 07:17:12.473525] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: 
[nqn.2016-06.io.spdk:cnode9, 1] resetting controller 00:21:08.013 [2024-11-20 07:17:12.473533] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller 00:21:08.013 [2024-11-20 07:17:12.473835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:08.013 [2024-11-20 07:17:12.473850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x144a750 with addr=10.0.0.2, port=4420 00:21:08.013 [2024-11-20 07:17:12.473858] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144a750 is same with the state(6) to be set 00:21:08.013 [2024-11-20 07:17:12.474075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:08.013 [2024-11-20 07:17:12.474086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143d2f0 with addr=10.0.0.2, port=4420 00:21:08.013 [2024-11-20 07:17:12.474094] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x143d2f0 is same with the state(6) to be set 00:21:08.013 [2024-11-20 07:17:12.474106] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfe01d0 (9): Bad file descriptor 00:21:08.013 [2024-11-20 07:17:12.474119] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfebd30 (9): Bad file descriptor 00:21:08.013 [2024-11-20 07:17:12.474128] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfe8ce0 (9): Bad file descriptor 00:21:08.013 [2024-11-20 07:17:12.474137] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x140ce30 (9): Bad file descriptor 00:21:08.013 [2024-11-20 07:17:12.474171] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] Unable to perform failover, already in progress. 
00:21:08.013 [2024-11-20 07:17:12.474188] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] Unable to perform failover, already in progress. 00:21:08.013 [2024-11-20 07:17:12.474197] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] Unable to perform failover, already in progress. 00:21:08.013 [2024-11-20 07:17:12.474207] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] Unable to perform failover, already in progress. 00:21:08.013 [2024-11-20 07:17:12.474423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:08.013 [2024-11-20 07:17:12.474435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfec1b0 with addr=10.0.0.2, port=4420 00:21:08.013 [2024-11-20 07:17:12.474442] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfec1b0 is same with the state(6) to be set 00:21:08.013 [2024-11-20 07:17:12.474584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:08.013 [2024-11-20 07:17:12.474595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1417930 with addr=10.0.0.2, port=4420 00:21:08.013 [2024-11-20 07:17:12.474601] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1417930 is same with the state(6) to be set 00:21:08.013 [2024-11-20 07:17:12.474700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:08.013 [2024-11-20 07:17:12.474710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1446110 with addr=10.0.0.2, port=4420 00:21:08.013 [2024-11-20 07:17:12.474716] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1446110 is same with the state(6) to be set 00:21:08.013 [2024-11-20 07:17:12.474840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, 
errno = 111 00:21:08.014 [2024-11-20 07:17:12.474852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140d010 with addr=10.0.0.2, port=4420 00:21:08.014 [2024-11-20 07:17:12.474859] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140d010 is same with the state(6) to be set 00:21:08.014 [2024-11-20 07:17:12.474867] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144a750 (9): Bad file descriptor 00:21:08.014 [2024-11-20 07:17:12.474876] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x143d2f0 (9): Bad file descriptor 00:21:08.014 [2024-11-20 07:17:12.474884] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:21:08.014 [2024-11-20 07:17:12.474890] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:21:08.014 [2024-11-20 07:17:12.474898] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:21:08.014 [2024-11-20 07:17:12.474906] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 00:21:08.014 [2024-11-20 07:17:12.474914] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:21:08.014 [2024-11-20 07:17:12.474920] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:21:08.014 [2024-11-20 07:17:12.474926] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:21:08.014 [2024-11-20 07:17:12.474932] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 
00:21:08.014 [2024-11-20 07:17:12.474939] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:21:08.014 [2024-11-20 07:17:12.474944] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:21:08.014 [2024-11-20 07:17:12.474955] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:21:08.014 [2024-11-20 07:17:12.474961] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 00:21:08.014 [2024-11-20 07:17:12.474970] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state 00:21:08.014 [2024-11-20 07:17:12.474976] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed 00:21:08.014 [2024-11-20 07:17:12.474982] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 00:21:08.014 [2024-11-20 07:17:12.474988] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed. 
00:21:08.014 [2024-11-20 07:17:12.475060] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfec1b0 (9): Bad file descriptor 00:21:08.014 [2024-11-20 07:17:12.475071] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1417930 (9): Bad file descriptor 00:21:08.014 [2024-11-20 07:17:12.475079] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1446110 (9): Bad file descriptor 00:21:08.014 [2024-11-20 07:17:12.475088] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x140d010 (9): Bad file descriptor 00:21:08.014 [2024-11-20 07:17:12.475095] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state 00:21:08.014 [2024-11-20 07:17:12.475101] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed 00:21:08.014 [2024-11-20 07:17:12.475107] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 00:21:08.014 [2024-11-20 07:17:12.475113] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed. 00:21:08.014 [2024-11-20 07:17:12.475120] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:21:08.014 [2024-11-20 07:17:12.475126] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:21:08.014 [2024-11-20 07:17:12.475133] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:21:08.014 [2024-11-20 07:17:12.475138] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 
00:21:08.014 [2024-11-20 07:17:12.475160] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:21:08.014 [2024-11-20 07:17:12.475167] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:21:08.014 [2024-11-20 07:17:12.475173] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:21:08.014 [2024-11-20 07:17:12.475179] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:21:08.014 [2024-11-20 07:17:12.475185] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:21:08.014 [2024-11-20 07:17:12.475192] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:21:08.014 [2024-11-20 07:17:12.475199] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:21:08.014 [2024-11-20 07:17:12.475205] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 00:21:08.014 [2024-11-20 07:17:12.475211] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state 00:21:08.014 [2024-11-20 07:17:12.475217] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed 00:21:08.014 [2024-11-20 07:17:12.475224] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state. 00:21:08.014 [2024-11-20 07:17:12.475230] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed. 
00:21:08.014 [2024-11-20 07:17:12.475239] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:21:08.014 [2024-11-20 07:17:12.475245] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:21:08.014 [2024-11-20 07:17:12.475252] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:21:08.014 [2024-11-20 07:17:12.475258] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 00:21:08.274 07:17:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:21:09.654 07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 1250607 00:21:09.654 07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@650 -- # local es=0 00:21:09.654 07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # valid_exec_arg wait 1250607 00:21:09.654 07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@638 -- # local arg=wait 00:21:09.654 07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:09.654 07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # type -t wait 00:21:09.654 07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:09.654 07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@653 -- # wait 1250607 00:21:09.654 07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@653 -- # es=255 00:21:09.654 07:17:13 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:09.654 07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@662 -- # es=127 00:21:09.655 07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # case "$es" in 00:21:09.655 07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@670 -- # es=1 00:21:09.655 07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:09.655 07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 00:21:09.655 07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:21:09.655 07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:21:09.655 07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:09.655 07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:21:09.655 07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:09.655 07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:21:09.655 07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:09.655 07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:21:09.655 07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # 
for i in {1..20} 00:21:09.655 07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:09.655 rmmod nvme_tcp 00:21:09.655 rmmod nvme_fabrics 00:21:09.655 rmmod nvme_keyring 00:21:09.655 07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:09.655 07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e 00:21:09.655 07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:21:09.655 07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@517 -- # '[' -n 1250328 ']' 00:21:09.655 07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # killprocess 1250328 00:21:09.655 07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # '[' -z 1250328 ']' 00:21:09.655 07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # kill -0 1250328 00:21:09.655 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (1250328) - No such process 00:21:09.655 07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@979 -- # echo 'Process with pid 1250328 is not found' 00:21:09.655 Process with pid 1250328 is not found 00:21:09.655 07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:09.655 07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:09.655 07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:09.655 07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr 00:21:09.655 
07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-save 00:21:09.655 07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:09.655 07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-restore 00:21:09.655 07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:09.655 07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:09.655 07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:09.655 07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:09.655 07:17:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:11.563 07:17:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:11.563 00:21:11.563 real 0m8.100s 00:21:11.563 user 0m20.665s 00:21:11.563 sys 0m1.387s 00:21:11.563 07:17:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:11.563 07:17:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:11.563 ************************************ 00:21:11.563 END TEST nvmf_shutdown_tc3 00:21:11.563 ************************************ 00:21:11.563 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]] 00:21:11.563 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]] 00:21:11.563 07:17:16 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:21:11.563 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:21:11.563 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1109 -- # xtrace_disable 00:21:11.563 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:11.563 ************************************ 00:21:11.563 START TEST nvmf_shutdown_tc4 00:21:11.563 ************************************ 00:21:11.563 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1127 -- # nvmf_shutdown_tc4 00:21:11.563 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:21:11.563 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:21:11.563 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:11.563 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:11.563 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:11.563 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:11.563 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:11.563 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:11.563 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:11.563 07:17:16 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:11.563 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:11.563 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:11.563 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable 00:21:11.563 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:11.563 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:11.563 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:21:11.563 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:11.563 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:11.563 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:11.563 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:11.563 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:11.563 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:21:11.563 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:11.563 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:21:11.563 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 
-- # local -ga e810 00:21:11.563 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:21:11.563 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:21:11.563 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:21:11.563 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:21:11.563 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:11.563 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:11.563 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:11.563 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:11.563 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:11.563 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:11.563 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:11.563 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:11.563 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:11.563 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:11.563 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:11.563 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:11.563 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:11.563 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:11.563 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:11.563 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:11.563 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:11.563 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:11.564 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:11.564 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:11.564 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:11.564 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:11.564 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:11.564 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:11.564 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:11.564 
07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:11.564 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:11.564 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:11.564 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:11.564 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:11.564 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:11.564 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:11.564 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:11.564 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:11.564 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:11.564 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:11.564 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:11.564 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:11.564 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:11.564 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:11.564 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:11.564 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:11.564 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:11.564 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:11.564 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:11.564 Found net devices under 0000:86:00.0: cvl_0_0 00:21:11.564 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:11.564 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:11.564 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:11.564 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:11.564 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:11.564 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:11.564 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:11.564 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:11.564 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:11.564 Found net devices under 0000:86:00.1: cvl_0_1 
00:21:11.564 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:11.564 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:11.564 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # is_hw=yes 00:21:11.564 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:11.564 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:11.564 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:11.564 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:11.564 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:11.564 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:11.564 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:11.564 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:11.564 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:11.564 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:11.564 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:11.564 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:11.564 07:17:16 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:11.564 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:11.564 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:11.564 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:11.564 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:11.564 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:11.824 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:11.824 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:11.824 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:11.824 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:11.824 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:11.824 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:11.824 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 
-p tcp --dport 4420 -j ACCEPT' 00:21:11.824 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:11.824 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:11.824 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.431 ms 00:21:11.824 00:21:11.824 --- 10.0.0.2 ping statistics --- 00:21:11.824 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:11.824 rtt min/avg/max/mdev = 0.431/0.431/0.431/0.000 ms 00:21:11.824 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:11.824 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:11.824 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.217 ms 00:21:11.824 00:21:11.824 --- 10.0.0.1 ping statistics --- 00:21:11.824 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:11.824 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:21:11.824 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:11.824 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@450 -- # return 0 00:21:11.824 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:11.824 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:11.824 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:11.824 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:11.824 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:11.824 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:11.824 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:11.824 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:21:11.824 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:11.824 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:11.824 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:11.824 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # nvmfpid=1251857 00:21:11.824 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # waitforlisten 1251857 00:21:11.824 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:21:11.824 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@833 -- # '[' -z 1251857 ']' 00:21:11.824 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:11.824 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:11.824 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:21:11.824 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:11.824 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:11.824 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:12.083 [2024-11-20 07:17:16.405592] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 00:21:12.083 [2024-11-20 07:17:16.405648] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:12.083 [2024-11-20 07:17:16.484768] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:12.083 [2024-11-20 07:17:16.527025] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:12.083 [2024-11-20 07:17:16.527065] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:12.083 [2024-11-20 07:17:16.527072] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:12.083 [2024-11-20 07:17:16.527078] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:12.083 [2024-11-20 07:17:16.527083] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:12.083 [2024-11-20 07:17:16.528639] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:12.083 [2024-11-20 07:17:16.528670] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:12.083 [2024-11-20 07:17:16.528777] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:12.083 [2024-11-20 07:17:16.528778] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:21:12.083 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:12.083 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@866 -- # return 0 00:21:12.083 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:12.083 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:12.083 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:12.342 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:12.342 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:12.342 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:12.342 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:12.342 [2024-11-20 07:17:16.666475] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:12.342 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:12.343 07:17:16 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:21:12.343 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:21:12.343 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:12.343 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:12.343 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:12.343 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:12.343 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:12.343 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:12.343 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:12.343 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:12.343 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:12.343 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:12.343 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:12.343 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:12.343 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 
00:21:12.343 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:12.343 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:12.343 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:12.343 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:12.343 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:12.343 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:12.343 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:12.343 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:12.343 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:12.343 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:12.343 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd 00:21:12.343 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:12.343 07:17:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:12.343 Malloc1 00:21:12.343 [2024-11-20 07:17:16.770954] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:12.343 Malloc2 00:21:12.343 Malloc3 00:21:12.343 Malloc4 00:21:12.602 Malloc5 00:21:12.602 Malloc6 00:21:12.602 Malloc7 00:21:12.602 Malloc8 00:21:12.602 Malloc9 
00:21:12.602 Malloc10 00:21:12.861 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:12.861 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:21:12.861 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:12.861 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:12.861 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=1251932 00:21:12.861 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 00:21:12.861 07:17:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:21:12.861 [2024-11-20 07:17:17.276139] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:21:18.142 07:17:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:18.142 07:17:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 1251857 00:21:18.142 07:17:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@952 -- # '[' -z 1251857 ']' 00:21:18.142 07:17:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@956 -- # kill -0 1251857 00:21:18.142 07:17:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@957 -- # uname 00:21:18.142 07:17:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:18.142 07:17:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1251857 00:21:18.143 07:17:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:21:18.143 07:17:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:21:18.143 07:17:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1251857' 00:21:18.143 killing process with pid 1251857 00:21:18.143 07:17:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@971 -- # kill 1251857 00:21:18.143 07:17:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@976 -- # wait 1251857 00:21:18.143 [2024-11-20 07:17:22.268766] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf907f0 is same with the state(6) to be set 00:21:18.143 [2024-11-20 
07:17:22.268817] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf907f0 is same with the state(6) to be set 00:21:18.143 [2024-11-20 07:17:22.268826] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf907f0 is same with the state(6) to be set 00:21:18.143 [2024-11-20 07:17:22.268833] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf907f0 is same with the state(6) to be set 00:21:18.143 [2024-11-20 07:17:22.268840] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf907f0 is same with the state(6) to be set 00:21:18.143 [2024-11-20 07:17:22.268847] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf907f0 is same with the state(6) to be set 00:21:18.143 [2024-11-20 07:17:22.268853] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf907f0 is same with the state(6) to be set 00:21:18.143 [2024-11-20 07:17:22.268859] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf907f0 is same with the state(6) to be set 00:21:18.143 [2024-11-20 07:17:22.268865] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf907f0 is same with the state(6) to be set 00:21:18.143 [2024-11-20 07:17:22.269569] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf90cc0 is same with the state(6) to be set 00:21:18.143 [2024-11-20 07:17:22.269595] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf90cc0 is same with the state(6) to be set 00:21:18.143 [2024-11-20 07:17:22.269603] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf90cc0 is same with the state(6) to be set 00:21:18.143 [2024-11-20 07:17:22.269610] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf90cc0 is same with the state(6) to be set 00:21:18.143 [2024-11-20 07:17:22.269617] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf90cc0 is same with the state(6) to be set 00:21:18.143 [2024-11-20 07:17:22.269623] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf90cc0 is same with the state(6) to be set 00:21:18.143 [2024-11-20 07:17:22.269637] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf90cc0 is same with the state(6) to be set 00:21:18.143 [2024-11-20 07:17:22.269643] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf90cc0 is same with the state(6) to be set 00:21:18.143 [2024-11-20 07:17:22.269650] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf90cc0 is same with the state(6) to be set 00:21:18.143 [2024-11-20 07:17:22.269657] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf90cc0 is same with the state(6) to be set 00:21:18.143 [2024-11-20 07:17:22.270411] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8fe50 is same with the state(6) to be set 00:21:18.143 [2024-11-20 07:17:22.270438] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8fe50 is same with the state(6) to be set 00:21:18.143 [2024-11-20 07:17:22.270446] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8fe50 is same with the state(6) to be set 00:21:18.143 [2024-11-20 07:17:22.270453] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8fe50 is same with the state(6) to be set 00:21:18.143 [2024-11-20 07:17:22.270460] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8fe50 is same with the state(6) to be set 00:21:18.143 [2024-11-20 07:17:22.270466] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8fe50 is same with the state(6) to be set 00:21:18.143 [2024-11-20 07:17:22.275009] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0xd1ca70 is same with the state(6) to be set 00:21:18.143 [2024-11-20 07:17:22.275033] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd1ca70 is same with the state(6) to be set 00:21:18.143 Write completed with error (sct=0, sc=8) 00:21:18.143 starting I/O failed: -6 00:21:18.143 Write completed with error (sct=0, sc=8) 00:21:18.143 Write completed with error (sct=0, sc=8) 00:21:18.143 Write completed with error (sct=0, sc=8) 00:21:18.143 Write completed with error (sct=0, sc=8) 00:21:18.143 starting I/O failed: -6 00:21:18.143 Write completed with error (sct=0, sc=8) 00:21:18.143 Write completed with error (sct=0, sc=8) 00:21:18.143 Write completed with error (sct=0, sc=8) 00:21:18.143 Write completed with error (sct=0, sc=8) 00:21:18.143 starting I/O failed: -6 00:21:18.143 Write completed with error (sct=0, sc=8) 00:21:18.143 Write completed with error (sct=0, sc=8) 00:21:18.143 Write completed with error (sct=0, sc=8) 00:21:18.143 Write completed with error (sct=0, sc=8) 00:21:18.143 starting I/O failed: -6 00:21:18.143 Write completed with error (sct=0, sc=8) 00:21:18.143 Write completed with error (sct=0, sc=8) 00:21:18.143 [2024-11-20 07:17:22.275290] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd1cf60 is same with the state(6) to be set 00:21:18.143 Write completed with error (sct=0, sc=8) 00:21:18.143 [2024-11-20 07:17:22.275312] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd1cf60 is same with the state(6) to be set 00:21:18.143 [2024-11-20 07:17:22.275319] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd1cf60 is same with the state(6) to be set 00:21:18.143 Write completed with error (sct=0, sc=8) 00:21:18.143 [2024-11-20 07:17:22.275326] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd1cf60 is same with the state(6) to be set 00:21:18.143 starting I/O failed: -6 
00:21:18.143 [2024-11-20 07:17:22.275333] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd1cf60 is same with the state(6) to be set 00:21:18.143 [2024-11-20 07:17:22.275340] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd1cf60 is same with the state(6) to be set 00:21:18.143 Write completed with error (sct=0, sc=8) 00:21:18.143 [2024-11-20 07:17:22.275347] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd1cf60 is same with the state(6) to be set 00:21:18.143 [2024-11-20 07:17:22.275353] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd1cf60 is same with the state(6) to be set 00:21:18.143 Write completed with error (sct=0, sc=8) 00:21:18.143 [2024-11-20 07:17:22.275359] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd1cf60 is same with the state(6) to be set 00:21:18.143 Write completed with error (sct=0, sc=8) 00:21:18.143 Write completed with error (sct=0, sc=8) 00:21:18.143 starting I/O failed: -6 00:21:18.143 Write completed with error (sct=0, sc=8) 00:21:18.143 Write completed with error (sct=0, sc=8) 00:21:18.143 Write completed with error (sct=0, sc=8) 00:21:18.143 Write completed with error (sct=0, sc=8) 00:21:18.143 starting I/O failed: -6 00:21:18.143 Write completed with error (sct=0, sc=8) 00:21:18.143 Write completed with error (sct=0, sc=8) 00:21:18.143 Write completed with error (sct=0, sc=8) 00:21:18.143 Write completed with error (sct=0, sc=8) 00:21:18.144 starting I/O failed: -6 00:21:18.144 Write completed with error (sct=0, sc=8) 00:21:18.144 Write completed with error (sct=0, sc=8) 00:21:18.144 Write completed with error (sct=0, sc=8) 00:21:18.144 Write completed with error (sct=0, sc=8) 00:21:18.144 starting I/O failed: -6 00:21:18.144 Write completed with error (sct=0, sc=8) 00:21:18.144 Write completed with error (sct=0, sc=8) 00:21:18.144 Write completed with error (sct=0, sc=8) 
00:21:18.144 Write completed with error (sct=0, sc=8) 00:21:18.144 starting I/O failed: -6 00:21:18.144 Write completed with error (sct=0, sc=8) 00:21:18.144 Write completed with error (sct=0, sc=8) 00:21:18.144 Write completed with error (sct=0, sc=8) 00:21:18.144 [2024-11-20 07:17:22.275721] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:18.144 Write completed with error (sct=0, sc=8) 00:21:18.144 starting I/O failed: -6 00:21:18.144 Write completed with error (sct=0, sc=8) 00:21:18.144 Write completed with error (sct=0, sc=8) 00:21:18.144 starting I/O failed: -6 00:21:18.144 [2024-11-20 07:17:22.275901] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd1d450 is same with the state(6) to be set 00:21:18.144 Write completed with error (sct=0, sc=8) 00:21:18.144 [2024-11-20 07:17:22.275926] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd1d450 is same with the state(6) to be set 00:21:18.144 [2024-11-20 07:17:22.275934] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd1d450 is same with the state(6) to be set 00:21:18.144 Write completed with error (sct=0, sc=8) 00:21:18.144 [2024-11-20 07:17:22.275940] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd1d450 is same with the state(6) to be set 00:21:18.144 starting I/O failed: -6 00:21:18.144 Write completed with error (sct=0, sc=8) 00:21:18.144 Write completed with error (sct=0, sc=8) 00:21:18.144 starting I/O failed: -6 00:21:18.144 Write completed with error (sct=0, sc=8) 00:21:18.144 Write completed with error (sct=0, sc=8) 00:21:18.144 starting I/O failed: -6 00:21:18.144 Write completed with error (sct=0, sc=8) 00:21:18.144 Write completed with error (sct=0, sc=8) 00:21:18.144 starting I/O failed: -6 00:21:18.144 Write completed with error (sct=0, sc=8) 00:21:18.144 Write 
completed with error (sct=0, sc=8) 00:21:18.144 starting I/O failed: -6 00:21:18.144 Write completed with error (sct=0, sc=8) 00:21:18.144 Write completed with error (sct=0, sc=8) 00:21:18.144 starting I/O failed: -6 00:21:18.144 Write completed with error (sct=0, sc=8) 00:21:18.144 Write completed with error (sct=0, sc=8) 00:21:18.144 starting I/O failed: -6 00:21:18.144 Write completed with error (sct=0, sc=8) 00:21:18.144 Write completed with error (sct=0, sc=8) 00:21:18.144 starting I/O failed: -6 00:21:18.144 [2024-11-20 07:17:22.276186] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe91f70 is same with the state(6) to be set 00:21:18.144 [2024-11-20 07:17:22.276207] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe91f70 is same with tWrite completed with error (sct=0, sc=8) 00:21:18.144 he state(6) to be set 00:21:18.144 [2024-11-20 07:17:22.276216] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe91f70 is same with the state(6) to be set 00:21:18.144 Write completed with error (sct=0, sc=8) 00:21:18.144 [2024-11-20 07:17:22.276223] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe91f70 is same with the state(6) to be set 00:21:18.144 starting I/O failed: -6 00:21:18.144 [2024-11-20 07:17:22.276230] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe91f70 is same with the state(6) to be set 00:21:18.144 [2024-11-20 07:17:22.276237] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe91f70 is same with the state(6) to be set 00:21:18.144 Write completed with error (sct=0, sc=8) 00:21:18.144 Write completed with error (sct=0, sc=8) 00:21:18.144 starting I/O failed: -6 00:21:18.144 Write completed with error (sct=0, sc=8) 00:21:18.144 Write completed with error (sct=0, sc=8) 00:21:18.144 starting I/O failed: -6 00:21:18.144 Write completed with error (sct=0, sc=8) 00:21:18.144 Write 
completed with error (sct=0, sc=8) 00:21:18.144 starting I/O failed: -6 00:21:18.144 Write completed with error (sct=0, sc=8) 00:21:18.144 Write completed with error (sct=0, sc=8) 00:21:18.144 starting I/O failed: -6 00:21:18.144 Write completed with error (sct=0, sc=8) 00:21:18.144 Write completed with error (sct=0, sc=8) 00:21:18.144 starting I/O failed: -6 00:21:18.144 Write completed with error (sct=0, sc=8) 00:21:18.144 Write completed with error (sct=0, sc=8) 00:21:18.144 starting I/O failed: -6 00:21:18.144 Write completed with error (sct=0, sc=8) 00:21:18.144 Write completed with error (sct=0, sc=8) 00:21:18.144 starting I/O failed: -6 00:21:18.144 Write completed with error (sct=0, sc=8) 00:21:18.144 Write completed with error (sct=0, sc=8) 00:21:18.144 starting I/O failed: -6 00:21:18.144 Write completed with error (sct=0, sc=8) 00:21:18.144 Write completed with error (sct=0, sc=8) 00:21:18.144 starting I/O failed: -6 00:21:18.144 Write completed with error (sct=0, sc=8) 00:21:18.144 Write completed with error (sct=0, sc=8) 00:21:18.144 starting I/O failed: -6 00:21:18.144 Write completed with error (sct=0, sc=8) 00:21:18.144 Write completed with error (sct=0, sc=8) 00:21:18.144 starting I/O failed: -6 00:21:18.144 Write completed with error (sct=0, sc=8) 00:21:18.144 [2024-11-20 07:17:22.276646] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:21:18.144 starting I/O failed: -6 00:21:18.144 Write completed with error (sct=0, sc=8) 00:21:18.144 starting I/O failed: -6 00:21:18.144 Write completed with error (sct=0, sc=8) 00:21:18.144 starting I/O failed: -6 00:21:18.144 Write completed with error (sct=0, sc=8) 00:21:18.144 Write completed with error (sct=0, sc=8) 00:21:18.144 starting I/O failed: -6 00:21:18.144 Write completed with error (sct=0, sc=8) 00:21:18.144 starting I/O failed: -6 00:21:18.144 Write completed with error (sct=0, sc=8) 
00:21:18.144 starting I/O failed: -6 00:21:18.144 Write completed with error (sct=0, sc=8) 00:21:18.144 Write completed with error (sct=0, sc=8) 00:21:18.144 starting I/O failed: -6 00:21:18.144 Write completed with error (sct=0, sc=8) 00:21:18.144 starting I/O failed: -6 00:21:18.144 Write completed with error (sct=0, sc=8) 00:21:18.144 starting I/O failed: -6 00:21:18.144 Write completed with error (sct=0, sc=8) 00:21:18.144 Write completed with error (sct=0, sc=8) 00:21:18.144 starting I/O failed: -6 00:21:18.144 Write completed with error (sct=0, sc=8) 00:21:18.144 starting I/O failed: -6 00:21:18.144 Write completed with error (sct=0, sc=8) 00:21:18.144 starting I/O failed: -6 00:21:18.144 Write completed with error (sct=0, sc=8) 00:21:18.144 Write completed with error (sct=0, sc=8) 00:21:18.144 starting I/O failed: -6 00:21:18.144 Write completed with error (sct=0, sc=8) 00:21:18.144 starting I/O failed: -6 00:21:18.144 Write completed with error (sct=0, sc=8) 00:21:18.144 starting I/O failed: -6 00:21:18.144 Write completed with error (sct=0, sc=8) 00:21:18.144 Write completed with error (sct=0, sc=8) 00:21:18.144 starting I/O failed: -6 00:21:18.144 Write completed with error (sct=0, sc=8) 00:21:18.144 starting I/O failed: -6 00:21:18.144 Write completed with error (sct=0, sc=8) 00:21:18.144 starting I/O failed: -6 00:21:18.144 Write completed with error (sct=0, sc=8) 00:21:18.144 Write completed with error (sct=0, sc=8) 00:21:18.144 starting I/O failed: -6 00:21:18.144 Write completed with error (sct=0, sc=8) 00:21:18.144 starting I/O failed: -6 00:21:18.144 Write completed with error (sct=0, sc=8) 00:21:18.144 starting I/O failed: -6 00:21:18.144 Write completed with error (sct=0, sc=8) 00:21:18.144 Write completed with error (sct=0, sc=8) 00:21:18.145 starting I/O failed: -6 00:21:18.145 Write completed with error (sct=0, sc=8) 00:21:18.145 starting I/O failed: -6 00:21:18.145 Write completed with error (sct=0, sc=8) 00:21:18.145 starting I/O failed: -6 
00:21:18.145 Write completed with error (sct=0, sc=8)
00:21:18.145 starting I/O failed: -6
00:21:18.145 [2024-11-20 07:17:22.277646] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:21:18.145 [2024-11-20 07:17:22.278450] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf929a0 is same with the state(6) to be set
00:21:18.146 [2024-11-20 07:17:22.279262] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:21:18.146 NVMe io qpair process completion error
00:21:18.146 [2024-11-20 07:17:22.280795] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd20c70 is same with the state(6) to be set
00:21:18.146 [2024-11-20 07:17:22.281016] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:21:18.146 [2024-11-20 07:17:22.281510] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd1fde0 is same with the state(6) to be set
00:21:18.147 [2024-11-20 07:17:22.283040] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:21:18.148 [2024-11-20 07:17:22.285006] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:21:18.148 NVMe io qpair process completion error
00:21:18.148 [2024-11-20 07:17:22.285438] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd21160 is same with the state(6) to be set
00:21:18.148 [2024-11-20 07:17:22.286085] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:21:18.149 [2024-11-20 07:17:22.287002] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:21:18.150 [2024-11-20 07:17:22.288007] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 1
error (sct=0, sc=8) 00:21:18.150 starting I/O failed: -6 00:21:18.150 Write completed with error (sct=0, sc=8) 00:21:18.150 starting I/O failed: -6 00:21:18.150 Write completed with error (sct=0, sc=8) 00:21:18.150 starting I/O failed: -6 00:21:18.150 Write completed with error (sct=0, sc=8) 00:21:18.150 starting I/O failed: -6 00:21:18.150 Write completed with error (sct=0, sc=8) 00:21:18.150 starting I/O failed: -6 00:21:18.150 Write completed with error (sct=0, sc=8) 00:21:18.150 starting I/O failed: -6 00:21:18.150 Write completed with error (sct=0, sc=8) 00:21:18.150 starting I/O failed: -6 00:21:18.150 Write completed with error (sct=0, sc=8) 00:21:18.150 starting I/O failed: -6 00:21:18.150 Write completed with error (sct=0, sc=8) 00:21:18.150 starting I/O failed: -6 00:21:18.150 Write completed with error (sct=0, sc=8) 00:21:18.150 starting I/O failed: -6 00:21:18.150 [2024-11-20 07:17:22.289578] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:18.150 NVMe io qpair process completion error 00:21:18.150 Write completed with error (sct=0, sc=8) 00:21:18.150 Write completed with error (sct=0, sc=8) 00:21:18.150 Write completed with error (sct=0, sc=8) 00:21:18.150 starting I/O failed: -6 00:21:18.150 Write completed with error (sct=0, sc=8) 00:21:18.150 Write completed with error (sct=0, sc=8) 00:21:18.151 Write completed with error (sct=0, sc=8) 00:21:18.151 Write completed with error (sct=0, sc=8) 00:21:18.151 starting I/O failed: -6 00:21:18.151 Write completed with error (sct=0, sc=8) 00:21:18.151 Write completed with error (sct=0, sc=8) 00:21:18.151 Write completed with error (sct=0, sc=8) 00:21:18.151 Write completed with error (sct=0, sc=8) 00:21:18.151 starting I/O failed: -6 00:21:18.151 Write completed with error (sct=0, sc=8) 00:21:18.151 Write completed with error (sct=0, sc=8) 00:21:18.151 Write completed with error (sct=0, sc=8) 
00:21:18.151 Write completed with error (sct=0, sc=8) 00:21:18.151 starting I/O failed: -6 00:21:18.151 Write completed with error (sct=0, sc=8) 00:21:18.151 Write completed with error (sct=0, sc=8) 00:21:18.151 Write completed with error (sct=0, sc=8) 00:21:18.151 Write completed with error (sct=0, sc=8) 00:21:18.151 starting I/O failed: -6 00:21:18.151 Write completed with error (sct=0, sc=8) 00:21:18.151 Write completed with error (sct=0, sc=8) 00:21:18.151 Write completed with error (sct=0, sc=8) 00:21:18.151 Write completed with error (sct=0, sc=8) 00:21:18.151 starting I/O failed: -6 00:21:18.151 Write completed with error (sct=0, sc=8) 00:21:18.151 Write completed with error (sct=0, sc=8) 00:21:18.151 Write completed with error (sct=0, sc=8) 00:21:18.151 Write completed with error (sct=0, sc=8) 00:21:18.151 starting I/O failed: -6 00:21:18.151 Write completed with error (sct=0, sc=8) 00:21:18.151 Write completed with error (sct=0, sc=8) 00:21:18.151 Write completed with error (sct=0, sc=8) 00:21:18.151 Write completed with error (sct=0, sc=8) 00:21:18.151 starting I/O failed: -6 00:21:18.151 Write completed with error (sct=0, sc=8) 00:21:18.151 Write completed with error (sct=0, sc=8) 00:21:18.151 Write completed with error (sct=0, sc=8) 00:21:18.151 Write completed with error (sct=0, sc=8) 00:21:18.151 starting I/O failed: -6 00:21:18.151 Write completed with error (sct=0, sc=8) 00:21:18.151 Write completed with error (sct=0, sc=8) 00:21:18.151 Write completed with error (sct=0, sc=8) 00:21:18.151 Write completed with error (sct=0, sc=8) 00:21:18.151 starting I/O failed: -6 00:21:18.151 [2024-11-20 07:17:22.290549] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:18.151 Write completed with error (sct=0, sc=8) 00:21:18.151 starting I/O failed: -6 00:21:18.151 Write completed with error (sct=0, sc=8) 00:21:18.151 Write completed with error 
(sct=0, sc=8) 00:21:18.151 starting I/O failed: -6 00:21:18.151 Write completed with error (sct=0, sc=8) 00:21:18.151 Write completed with error (sct=0, sc=8) 00:21:18.151 starting I/O failed: -6 00:21:18.151 Write completed with error (sct=0, sc=8) 00:21:18.151 Write completed with error (sct=0, sc=8) 00:21:18.151 starting I/O failed: -6 00:21:18.151 Write completed with error (sct=0, sc=8) 00:21:18.151 Write completed with error (sct=0, sc=8) 00:21:18.151 starting I/O failed: -6 00:21:18.151 Write completed with error (sct=0, sc=8) 00:21:18.151 Write completed with error (sct=0, sc=8) 00:21:18.151 starting I/O failed: -6 00:21:18.151 Write completed with error (sct=0, sc=8) 00:21:18.151 Write completed with error (sct=0, sc=8) 00:21:18.151 starting I/O failed: -6 00:21:18.151 Write completed with error (sct=0, sc=8) 00:21:18.151 Write completed with error (sct=0, sc=8) 00:21:18.151 starting I/O failed: -6 00:21:18.151 Write completed with error (sct=0, sc=8) 00:21:18.151 Write completed with error (sct=0, sc=8) 00:21:18.151 starting I/O failed: -6 00:21:18.151 Write completed with error (sct=0, sc=8) 00:21:18.151 Write completed with error (sct=0, sc=8) 00:21:18.151 starting I/O failed: -6 00:21:18.151 Write completed with error (sct=0, sc=8) 00:21:18.151 Write completed with error (sct=0, sc=8) 00:21:18.151 starting I/O failed: -6 00:21:18.151 Write completed with error (sct=0, sc=8) 00:21:18.151 Write completed with error (sct=0, sc=8) 00:21:18.151 starting I/O failed: -6 00:21:18.151 Write completed with error (sct=0, sc=8) 00:21:18.151 Write completed with error (sct=0, sc=8) 00:21:18.151 starting I/O failed: -6 00:21:18.151 Write completed with error (sct=0, sc=8) 00:21:18.151 Write completed with error (sct=0, sc=8) 00:21:18.151 starting I/O failed: -6 00:21:18.151 Write completed with error (sct=0, sc=8) 00:21:18.151 Write completed with error (sct=0, sc=8) 00:21:18.151 starting I/O failed: -6 00:21:18.151 Write completed with error (sct=0, sc=8) 
00:21:18.151 Write completed with error (sct=0, sc=8) 00:21:18.151 starting I/O failed: -6 00:21:18.151 Write completed with error (sct=0, sc=8) 00:21:18.151 Write completed with error (sct=0, sc=8) 00:21:18.151 starting I/O failed: -6 00:21:18.151 Write completed with error (sct=0, sc=8) 00:21:18.151 Write completed with error (sct=0, sc=8) 00:21:18.151 starting I/O failed: -6 00:21:18.151 Write completed with error (sct=0, sc=8) 00:21:18.151 Write completed with error (sct=0, sc=8) 00:21:18.151 starting I/O failed: -6 00:21:18.151 Write completed with error (sct=0, sc=8) 00:21:18.151 Write completed with error (sct=0, sc=8) 00:21:18.151 starting I/O failed: -6 00:21:18.151 Write completed with error (sct=0, sc=8) 00:21:18.151 Write completed with error (sct=0, sc=8) 00:21:18.151 starting I/O failed: -6 00:21:18.151 Write completed with error (sct=0, sc=8) 00:21:18.151 Write completed with error (sct=0, sc=8) 00:21:18.151 starting I/O failed: -6 00:21:18.151 [2024-11-20 07:17:22.291421] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:21:18.151 Write completed with error (sct=0, sc=8) 00:21:18.151 Write completed with error (sct=0, sc=8) 00:21:18.151 starting I/O failed: -6 00:21:18.151 Write completed with error (sct=0, sc=8) 00:21:18.151 starting I/O failed: -6 00:21:18.151 Write completed with error (sct=0, sc=8) 00:21:18.151 starting I/O failed: -6 00:21:18.151 Write completed with error (sct=0, sc=8) 00:21:18.151 Write completed with error (sct=0, sc=8) 00:21:18.151 starting I/O failed: -6 00:21:18.151 Write completed with error (sct=0, sc=8) 00:21:18.151 starting I/O failed: -6 00:21:18.151 Write completed with error (sct=0, sc=8) 00:21:18.151 starting I/O failed: -6 00:21:18.151 Write completed with error (sct=0, sc=8) 00:21:18.151 Write completed with error (sct=0, sc=8) 00:21:18.151 starting I/O failed: -6 00:21:18.151 Write completed with 
error (sct=0, sc=8) 00:21:18.151 starting I/O failed: -6 00:21:18.151 Write completed with error (sct=0, sc=8) 00:21:18.151 starting I/O failed: -6 00:21:18.151 Write completed with error (sct=0, sc=8) 00:21:18.151 Write completed with error (sct=0, sc=8) 00:21:18.151 starting I/O failed: -6 00:21:18.151 Write completed with error (sct=0, sc=8) 00:21:18.151 starting I/O failed: -6 00:21:18.151 Write completed with error (sct=0, sc=8) 00:21:18.151 starting I/O failed: -6 00:21:18.151 Write completed with error (sct=0, sc=8) 00:21:18.151 Write completed with error (sct=0, sc=8) 00:21:18.151 starting I/O failed: -6 00:21:18.151 Write completed with error (sct=0, sc=8) 00:21:18.152 starting I/O failed: -6 00:21:18.152 Write completed with error (sct=0, sc=8) 00:21:18.152 starting I/O failed: -6 00:21:18.152 Write completed with error (sct=0, sc=8) 00:21:18.152 Write completed with error (sct=0, sc=8) 00:21:18.152 starting I/O failed: -6 00:21:18.152 Write completed with error (sct=0, sc=8) 00:21:18.152 starting I/O failed: -6 00:21:18.152 Write completed with error (sct=0, sc=8) 00:21:18.152 starting I/O failed: -6 00:21:18.152 Write completed with error (sct=0, sc=8) 00:21:18.152 Write completed with error (sct=0, sc=8) 00:21:18.152 starting I/O failed: -6 00:21:18.152 Write completed with error (sct=0, sc=8) 00:21:18.152 starting I/O failed: -6 00:21:18.152 Write completed with error (sct=0, sc=8) 00:21:18.152 starting I/O failed: -6 00:21:18.152 Write completed with error (sct=0, sc=8) 00:21:18.152 Write completed with error (sct=0, sc=8) 00:21:18.152 starting I/O failed: -6 00:21:18.152 Write completed with error (sct=0, sc=8) 00:21:18.152 starting I/O failed: -6 00:21:18.152 Write completed with error (sct=0, sc=8) 00:21:18.152 starting I/O failed: -6 00:21:18.152 Write completed with error (sct=0, sc=8) 00:21:18.152 Write completed with error (sct=0, sc=8) 00:21:18.152 starting I/O failed: -6 00:21:18.152 Write completed with error (sct=0, sc=8) 00:21:18.152 
starting I/O failed: -6 00:21:18.152 Write completed with error (sct=0, sc=8) 00:21:18.152 starting I/O failed: -6 00:21:18.152 Write completed with error (sct=0, sc=8) 00:21:18.152 Write completed with error (sct=0, sc=8) 00:21:18.152 starting I/O failed: -6 00:21:18.152 Write completed with error (sct=0, sc=8) 00:21:18.152 starting I/O failed: -6 00:21:18.152 Write completed with error (sct=0, sc=8) 00:21:18.152 starting I/O failed: -6 00:21:18.152 Write completed with error (sct=0, sc=8) 00:21:18.152 Write completed with error (sct=0, sc=8) 00:21:18.152 starting I/O failed: -6 00:21:18.152 Write completed with error (sct=0, sc=8) 00:21:18.152 starting I/O failed: -6 00:21:18.152 Write completed with error (sct=0, sc=8) 00:21:18.152 starting I/O failed: -6 00:21:18.152 Write completed with error (sct=0, sc=8) 00:21:18.152 Write completed with error (sct=0, sc=8) 00:21:18.152 starting I/O failed: -6 00:21:18.152 Write completed with error (sct=0, sc=8) 00:21:18.152 starting I/O failed: -6 00:21:18.152 Write completed with error (sct=0, sc=8) 00:21:18.152 starting I/O failed: -6 00:21:18.152 Write completed with error (sct=0, sc=8) 00:21:18.152 Write completed with error (sct=0, sc=8) 00:21:18.152 starting I/O failed: -6 00:21:18.152 [2024-11-20 07:17:22.292973] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:18.152 Write completed with error (sct=0, sc=8) 00:21:18.152 starting I/O failed: -6 00:21:18.152 Write completed with error (sct=0, sc=8) 00:21:18.152 starting I/O failed: -6 00:21:18.152 Write completed with error (sct=0, sc=8) 00:21:18.152 starting I/O failed: -6 00:21:18.152 Write completed with error (sct=0, sc=8) 00:21:18.152 starting I/O failed: -6 00:21:18.152 Write completed with error (sct=0, sc=8) 00:21:18.152 starting I/O failed: -6 00:21:18.152 Write completed with error (sct=0, sc=8) 00:21:18.152 starting I/O failed: -6 
00:21:18.152 Write completed with error (sct=0, sc=8) 00:21:18.152 starting I/O failed: -6 00:21:18.152 Write completed with error (sct=0, sc=8) 00:21:18.152 starting I/O failed: -6 00:21:18.152 Write completed with error (sct=0, sc=8) 00:21:18.152 starting I/O failed: -6 00:21:18.152 Write completed with error (sct=0, sc=8) 00:21:18.152 starting I/O failed: -6 00:21:18.152 Write completed with error (sct=0, sc=8) 00:21:18.152 starting I/O failed: -6 00:21:18.152 Write completed with error (sct=0, sc=8) 00:21:18.152 starting I/O failed: -6 00:21:18.152 Write completed with error (sct=0, sc=8) 00:21:18.152 starting I/O failed: -6 00:21:18.152 Write completed with error (sct=0, sc=8) 00:21:18.152 starting I/O failed: -6 00:21:18.152 Write completed with error (sct=0, sc=8) 00:21:18.152 starting I/O failed: -6 00:21:18.152 Write completed with error (sct=0, sc=8) 00:21:18.152 starting I/O failed: -6 00:21:18.152 Write completed with error (sct=0, sc=8) 00:21:18.152 starting I/O failed: -6 00:21:18.152 Write completed with error (sct=0, sc=8) 00:21:18.152 starting I/O failed: -6 00:21:18.152 Write completed with error (sct=0, sc=8) 00:21:18.152 starting I/O failed: -6 00:21:18.152 Write completed with error (sct=0, sc=8) 00:21:18.152 starting I/O failed: -6 00:21:18.152 Write completed with error (sct=0, sc=8) 00:21:18.152 starting I/O failed: -6 00:21:18.152 Write completed with error (sct=0, sc=8) 00:21:18.152 starting I/O failed: -6 00:21:18.152 Write completed with error (sct=0, sc=8) 00:21:18.152 starting I/O failed: -6 00:21:18.152 Write completed with error (sct=0, sc=8) 00:21:18.152 starting I/O failed: -6 00:21:18.152 Write completed with error (sct=0, sc=8) 00:21:18.152 starting I/O failed: -6 00:21:18.152 Write completed with error (sct=0, sc=8) 00:21:18.152 starting I/O failed: -6 00:21:18.152 Write completed with error (sct=0, sc=8) 00:21:18.152 starting I/O failed: -6 00:21:18.152 Write completed with error (sct=0, sc=8) 00:21:18.152 starting I/O failed: 
-6 00:21:18.152 Write completed with error (sct=0, sc=8) 00:21:18.152 starting I/O failed: -6 00:21:18.152 Write completed with error (sct=0, sc=8) 00:21:18.152 starting I/O failed: -6 00:21:18.152 Write completed with error (sct=0, sc=8) 00:21:18.152 starting I/O failed: -6 00:21:18.152 Write completed with error (sct=0, sc=8) 00:21:18.152 starting I/O failed: -6 00:21:18.152 Write completed with error (sct=0, sc=8) 00:21:18.152 starting I/O failed: -6 00:21:18.152 Write completed with error (sct=0, sc=8) 00:21:18.152 starting I/O failed: -6 00:21:18.152 Write completed with error (sct=0, sc=8) 00:21:18.152 starting I/O failed: -6 00:21:18.152 Write completed with error (sct=0, sc=8) 00:21:18.152 starting I/O failed: -6 00:21:18.152 Write completed with error (sct=0, sc=8) 00:21:18.152 starting I/O failed: -6 00:21:18.152 Write completed with error (sct=0, sc=8) 00:21:18.152 starting I/O failed: -6 00:21:18.152 Write completed with error (sct=0, sc=8) 00:21:18.152 starting I/O failed: -6 00:21:18.152 Write completed with error (sct=0, sc=8) 00:21:18.152 starting I/O failed: -6 00:21:18.152 Write completed with error (sct=0, sc=8) 00:21:18.152 starting I/O failed: -6 00:21:18.152 Write completed with error (sct=0, sc=8) 00:21:18.152 starting I/O failed: -6 00:21:18.152 Write completed with error (sct=0, sc=8) 00:21:18.152 starting I/O failed: -6 00:21:18.152 Write completed with error (sct=0, sc=8) 00:21:18.152 starting I/O failed: -6 00:21:18.152 Write completed with error (sct=0, sc=8) 00:21:18.152 starting I/O failed: -6 00:21:18.152 Write completed with error (sct=0, sc=8) 00:21:18.152 starting I/O failed: -6 00:21:18.152 Write completed with error (sct=0, sc=8) 00:21:18.152 starting I/O failed: -6 00:21:18.152 Write completed with error (sct=0, sc=8) 00:21:18.152 starting I/O failed: -6 00:21:18.152 Write completed with error (sct=0, sc=8) 00:21:18.152 starting I/O failed: -6 00:21:18.152 Write completed with error (sct=0, sc=8) 00:21:18.152 starting I/O 
failed: -6 00:21:18.152 Write completed with error (sct=0, sc=8) 00:21:18.152 starting I/O failed: -6 00:21:18.152 Write completed with error (sct=0, sc=8) 00:21:18.152 starting I/O failed: -6 00:21:18.152 Write completed with error (sct=0, sc=8) 00:21:18.152 starting I/O failed: -6 00:21:18.152 Write completed with error (sct=0, sc=8) 00:21:18.152 starting I/O failed: -6 00:21:18.152 Write completed with error (sct=0, sc=8) 00:21:18.152 starting I/O failed: -6 00:21:18.152 Write completed with error (sct=0, sc=8) 00:21:18.152 starting I/O failed: -6 00:21:18.152 Write completed with error (sct=0, sc=8) 00:21:18.152 starting I/O failed: -6 00:21:18.152 Write completed with error (sct=0, sc=8) 00:21:18.153 starting I/O failed: -6 00:21:18.153 Write completed with error (sct=0, sc=8) 00:21:18.153 starting I/O failed: -6 00:21:18.153 [2024-11-20 07:17:22.294806] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:18.153 NVMe io qpair process completion error 00:21:18.153 Write completed with error (sct=0, sc=8) 00:21:18.153 starting I/O failed: -6 00:21:18.153 Write completed with error (sct=0, sc=8) 00:21:18.153 Write completed with error (sct=0, sc=8) 00:21:18.153 Write completed with error (sct=0, sc=8) 00:21:18.153 Write completed with error (sct=0, sc=8) 00:21:18.153 starting I/O failed: -6 00:21:18.153 Write completed with error (sct=0, sc=8) 00:21:18.153 Write completed with error (sct=0, sc=8) 00:21:18.153 Write completed with error (sct=0, sc=8) 00:21:18.153 Write completed with error (sct=0, sc=8) 00:21:18.153 starting I/O failed: -6 00:21:18.153 Write completed with error (sct=0, sc=8) 00:21:18.153 Write completed with error (sct=0, sc=8) 00:21:18.153 Write completed with error (sct=0, sc=8) 00:21:18.153 Write completed with error (sct=0, sc=8) 00:21:18.153 starting I/O failed: -6 00:21:18.153 Write completed with error (sct=0, sc=8) 
00:21:18.153 Write completed with error (sct=0, sc=8) 00:21:18.153 Write completed with error (sct=0, sc=8) 00:21:18.153 Write completed with error (sct=0, sc=8) 00:21:18.153 starting I/O failed: -6 00:21:18.153 Write completed with error (sct=0, sc=8) 00:21:18.153 Write completed with error (sct=0, sc=8) 00:21:18.153 Write completed with error (sct=0, sc=8) 00:21:18.153 Write completed with error (sct=0, sc=8) 00:21:18.153 starting I/O failed: -6 00:21:18.153 Write completed with error (sct=0, sc=8) 00:21:18.153 Write completed with error (sct=0, sc=8) 00:21:18.153 Write completed with error (sct=0, sc=8) 00:21:18.153 Write completed with error (sct=0, sc=8) 00:21:18.153 starting I/O failed: -6 00:21:18.153 Write completed with error (sct=0, sc=8) 00:21:18.153 Write completed with error (sct=0, sc=8) 00:21:18.153 Write completed with error (sct=0, sc=8) 00:21:18.153 Write completed with error (sct=0, sc=8) 00:21:18.153 starting I/O failed: -6 00:21:18.153 Write completed with error (sct=0, sc=8) 00:21:18.153 Write completed with error (sct=0, sc=8) 00:21:18.153 Write completed with error (sct=0, sc=8) 00:21:18.153 Write completed with error (sct=0, sc=8) 00:21:18.153 starting I/O failed: -6 00:21:18.153 Write completed with error (sct=0, sc=8) 00:21:18.153 Write completed with error (sct=0, sc=8) 00:21:18.153 Write completed with error (sct=0, sc=8) 00:21:18.153 Write completed with error (sct=0, sc=8) 00:21:18.153 starting I/O failed: -6 00:21:18.153 Write completed with error (sct=0, sc=8) 00:21:18.153 Write completed with error (sct=0, sc=8) 00:21:18.153 [2024-11-20 07:17:22.295845] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:21:18.153 starting I/O failed: -6 00:21:18.153 Write completed with error (sct=0, sc=8) 00:21:18.153 starting I/O failed: -6 00:21:18.153 Write completed with error (sct=0, sc=8) 00:21:18.153 Write completed with error 
(sct=0, sc=8) 00:21:18.153 Write completed with error (sct=0, sc=8) 00:21:18.153 starting I/O failed: -6 00:21:18.153 Write completed with error (sct=0, sc=8) 00:21:18.153 starting I/O failed: -6 00:21:18.153 Write completed with error (sct=0, sc=8) 00:21:18.153 Write completed with error (sct=0, sc=8) 00:21:18.153 Write completed with error (sct=0, sc=8) 00:21:18.153 starting I/O failed: -6 00:21:18.153 Write completed with error (sct=0, sc=8) 00:21:18.153 starting I/O failed: -6 00:21:18.153 Write completed with error (sct=0, sc=8) 00:21:18.153 Write completed with error (sct=0, sc=8) 00:21:18.153 Write completed with error (sct=0, sc=8) 00:21:18.153 starting I/O failed: -6 00:21:18.153 Write completed with error (sct=0, sc=8) 00:21:18.153 starting I/O failed: -6 00:21:18.153 Write completed with error (sct=0, sc=8) 00:21:18.153 Write completed with error (sct=0, sc=8) 00:21:18.153 Write completed with error (sct=0, sc=8) 00:21:18.153 starting I/O failed: -6 00:21:18.153 Write completed with error (sct=0, sc=8) 00:21:18.153 starting I/O failed: -6 00:21:18.153 Write completed with error (sct=0, sc=8) 00:21:18.153 Write completed with error (sct=0, sc=8) 00:21:18.153 Write completed with error (sct=0, sc=8) 00:21:18.153 starting I/O failed: -6 00:21:18.153 Write completed with error (sct=0, sc=8) 00:21:18.153 starting I/O failed: -6 00:21:18.153 Write completed with error (sct=0, sc=8) 00:21:18.153 Write completed with error (sct=0, sc=8) 00:21:18.153 Write completed with error (sct=0, sc=8) 00:21:18.153 starting I/O failed: -6 00:21:18.153 Write completed with error (sct=0, sc=8) 00:21:18.153 starting I/O failed: -6 00:21:18.153 Write completed with error (sct=0, sc=8) 00:21:18.153 Write completed with error (sct=0, sc=8) 00:21:18.153 Write completed with error (sct=0, sc=8) 00:21:18.153 starting I/O failed: -6 00:21:18.153 Write completed with error (sct=0, sc=8) 00:21:18.153 starting I/O failed: -6 00:21:18.153 Write completed with error (sct=0, sc=8) 
00:21:18.153 Write completed with error (sct=0, sc=8) 00:21:18.153 Write completed with error (sct=0, sc=8) 00:21:18.153 starting I/O failed: -6 00:21:18.153 Write completed with error (sct=0, sc=8) 00:21:18.153 starting I/O failed: -6 00:21:18.153 Write completed with error (sct=0, sc=8) 00:21:18.153 Write completed with error (sct=0, sc=8) 00:21:18.153 Write completed with error (sct=0, sc=8) 00:21:18.153 starting I/O failed: -6 00:21:18.153 Write completed with error (sct=0, sc=8) 00:21:18.153 starting I/O failed: -6 00:21:18.153 Write completed with error (sct=0, sc=8) 00:21:18.153 Write completed with error (sct=0, sc=8) 00:21:18.153 Write completed with error (sct=0, sc=8) 00:21:18.153 starting I/O failed: -6 00:21:18.153 Write completed with error (sct=0, sc=8) 00:21:18.153 starting I/O failed: -6 00:21:18.153 Write completed with error (sct=0, sc=8) 00:21:18.153 Write completed with error (sct=0, sc=8) 00:21:18.153 [2024-11-20 07:17:22.296735] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:18.153 Write completed with error (sct=0, sc=8) 00:21:18.153 starting I/O failed: -6 00:21:18.153 Write completed with error (sct=0, sc=8) 00:21:18.153 starting I/O failed: -6 00:21:18.153 Write completed with error (sct=0, sc=8) 00:21:18.153 Write completed with error (sct=0, sc=8) 00:21:18.153 starting I/O failed: -6 00:21:18.153 Write completed with error (sct=0, sc=8) 00:21:18.153 starting I/O failed: -6 00:21:18.153 Write completed with error (sct=0, sc=8) 00:21:18.153 starting I/O failed: -6 00:21:18.153 Write completed with error (sct=0, sc=8) 00:21:18.153 Write completed with error (sct=0, sc=8) 00:21:18.153 starting I/O failed: -6 00:21:18.153 Write completed with error (sct=0, sc=8) 00:21:18.153 starting I/O failed: -6 00:21:18.153 Write completed with error (sct=0, sc=8) 00:21:18.153 starting I/O failed: -6 00:21:18.153 Write completed with 
error (sct=0, sc=8) 00:21:18.153 Write completed with error (sct=0, sc=8) 00:21:18.153 starting I/O failed: -6 00:21:18.153 Write completed with error (sct=0, sc=8) 00:21:18.153 starting I/O failed: -6 00:21:18.154 Write completed with error (sct=0, sc=8) 00:21:18.154 starting I/O failed: -6 00:21:18.154 Write completed with error (sct=0, sc=8) 00:21:18.154 Write completed with error (sct=0, sc=8) 00:21:18.154 starting I/O failed: -6 00:21:18.154 Write completed with error (sct=0, sc=8) 00:21:18.154 starting I/O failed: -6 00:21:18.154 Write completed with error (sct=0, sc=8) 00:21:18.154 starting I/O failed: -6 00:21:18.154 Write completed with error (sct=0, sc=8) 00:21:18.154 Write completed with error (sct=0, sc=8) 00:21:18.154 starting I/O failed: -6 00:21:18.154 Write completed with error (sct=0, sc=8) 00:21:18.154 starting I/O failed: -6 00:21:18.154 Write completed with error (sct=0, sc=8) 00:21:18.154 starting I/O failed: -6 00:21:18.154 Write completed with error (sct=0, sc=8) 00:21:18.154 Write completed with error (sct=0, sc=8) 00:21:18.154 starting I/O failed: -6 00:21:18.154 Write completed with error (sct=0, sc=8) 00:21:18.154 starting I/O failed: -6 00:21:18.154 Write completed with error (sct=0, sc=8) 00:21:18.154 starting I/O failed: -6 00:21:18.154 Write completed with error (sct=0, sc=8) 00:21:18.154 Write completed with error (sct=0, sc=8) 00:21:18.154 starting I/O failed: -6 00:21:18.154 Write completed with error (sct=0, sc=8) 00:21:18.154 starting I/O failed: -6 00:21:18.154 Write completed with error (sct=0, sc=8) 00:21:18.154 starting I/O failed: -6 00:21:18.154 Write completed with error (sct=0, sc=8) 00:21:18.154 Write completed with error (sct=0, sc=8) 00:21:18.154 starting I/O failed: -6 00:21:18.154 Write completed with error (sct=0, sc=8) 00:21:18.154 starting I/O failed: -6 00:21:18.154 Write completed with error (sct=0, sc=8) 00:21:18.154 starting I/O failed: -6 00:21:18.154 Write completed with error (sct=0, sc=8) 00:21:18.154 
Write completed with error (sct=0, sc=8) 00:21:18.154 starting I/O failed: -6 00:21:18.154 Write completed with error (sct=0, sc=8) 00:21:18.154 starting I/O failed: -6 00:21:18.154 Write completed with error (sct=0, sc=8) 00:21:18.154 starting I/O failed: -6 00:21:18.154 Write completed with error (sct=0, sc=8) 00:21:18.154 Write completed with error (sct=0, sc=8) 00:21:18.154 starting I/O failed: -6 00:21:18.154 Write completed with error (sct=0, sc=8) 00:21:18.154 starting I/O failed: -6 00:21:18.154 Write completed with error (sct=0, sc=8) 00:21:18.154 starting I/O failed: -6 00:21:18.154 Write completed with error (sct=0, sc=8) 00:21:18.154 Write completed with error (sct=0, sc=8) 00:21:18.154 starting I/O failed: -6 00:21:18.154 Write completed with error (sct=0, sc=8) 00:21:18.154 starting I/O failed: -6 00:21:18.154 Write completed with error (sct=0, sc=8) 00:21:18.154 starting I/O failed: -6 00:21:18.154 Write completed with error (sct=0, sc=8) 00:21:18.154 Write completed with error (sct=0, sc=8) 00:21:18.154 starting I/O failed: -6 00:21:18.154 Write completed with error (sct=0, sc=8) 00:21:18.154 starting I/O failed: -6 00:21:18.154 Write completed with error (sct=0, sc=8) 00:21:18.154 starting I/O failed: -6 00:21:18.154 [2024-11-20 07:17:22.297787] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:18.154 Write completed with error (sct=0, sc=8) 00:21:18.154 starting I/O failed: -6 00:21:18.154 Write completed with error (sct=0, sc=8) 00:21:18.154 starting I/O failed: -6 00:21:18.154 Write completed with error (sct=0, sc=8) 00:21:18.154 starting I/O failed: -6 00:21:18.154 Write completed with error (sct=0, sc=8) 00:21:18.154 starting I/O failed: -6 00:21:18.154 Write completed with error (sct=0, sc=8) 00:21:18.154 starting I/O failed: -6 00:21:18.154 Write completed with error (sct=0, sc=8) 00:21:18.154 starting I/O failed: -6 
00:21:18.154 Write completed with error (sct=0, sc=8)
00:21:18.154 starting I/O failed: -6
00:21:18.155 [2024-11-20 07:17:22.301670] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:21:18.155 NVMe io qpair process completion error
00:21:18.155 Write completed with error (sct=0, sc=8)
00:21:18.155 starting I/O failed: -6
00:21:18.155 [2024-11-20 07:17:22.303490] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:21:18.155 Write completed with error (sct=0, sc=8)
00:21:18.155 starting I/O failed: -6
00:21:18.156 [2024-11-20 07:17:22.304477] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:21:18.156 Write completed with error (sct=0, sc=8)
00:21:18.156 starting I/O failed: -6
00:21:18.157 [2024-11-20 07:17:22.308128] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:21:18.157 NVMe io qpair process completion error
00:21:18.157 Write completed with error (sct=0, sc=8)
00:21:18.157 starting I/O failed: -6
00:21:18.157 [2024-11-20 07:17:22.309462] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:21:18.157 Write completed with error (sct=0, sc=8)
00:21:18.158 starting I/O failed: -6
00:21:18.158 [2024-11-20 07:17:22.310382] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:21:18.158 Write completed with error (sct=0, sc=8)
00:21:18.158 starting I/O failed: -6
00:21:18.158 [2024-11-20 07:17:22.311365] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:21:18.158 Write completed with error (sct=0, sc=8)
00:21:18.159 starting I/O failed: -6
00:21:18.159 [2024-11-20 07:17:22.313172] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:21:18.159 NVMe io qpair process completion error
00:21:18.159 Write completed with error (sct=0, sc=8)
00:21:18.159 starting I/O failed: -6
00:21:18.159 [2024-11-20 07:17:22.314364] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:21:18.159 Write completed with error
(sct=0, sc=8) 00:21:18.159 starting I/O failed: -6 00:21:18.159 Write completed with error (sct=0, sc=8) 00:21:18.159 Write completed with error (sct=0, sc=8) 00:21:18.159 starting I/O failed: -6 00:21:18.159 Write completed with error (sct=0, sc=8) 00:21:18.159 Write completed with error (sct=0, sc=8) 00:21:18.159 starting I/O failed: -6 00:21:18.159 Write completed with error (sct=0, sc=8) 00:21:18.159 Write completed with error (sct=0, sc=8) 00:21:18.159 starting I/O failed: -6 00:21:18.160 Write completed with error (sct=0, sc=8) 00:21:18.160 Write completed with error (sct=0, sc=8) 00:21:18.160 starting I/O failed: -6 00:21:18.160 Write completed with error (sct=0, sc=8) 00:21:18.160 Write completed with error (sct=0, sc=8) 00:21:18.160 starting I/O failed: -6 00:21:18.160 Write completed with error (sct=0, sc=8) 00:21:18.160 Write completed with error (sct=0, sc=8) 00:21:18.160 starting I/O failed: -6 00:21:18.160 Write completed with error (sct=0, sc=8) 00:21:18.160 Write completed with error (sct=0, sc=8) 00:21:18.160 starting I/O failed: -6 00:21:18.160 Write completed with error (sct=0, sc=8) 00:21:18.160 Write completed with error (sct=0, sc=8) 00:21:18.160 starting I/O failed: -6 00:21:18.160 Write completed with error (sct=0, sc=8) 00:21:18.160 Write completed with error (sct=0, sc=8) 00:21:18.160 starting I/O failed: -6 00:21:18.160 Write completed with error (sct=0, sc=8) 00:21:18.160 Write completed with error (sct=0, sc=8) 00:21:18.160 starting I/O failed: -6 00:21:18.160 Write completed with error (sct=0, sc=8) 00:21:18.160 Write completed with error (sct=0, sc=8) 00:21:18.160 starting I/O failed: -6 00:21:18.160 Write completed with error (sct=0, sc=8) 00:21:18.160 Write completed with error (sct=0, sc=8) 00:21:18.160 starting I/O failed: -6 00:21:18.160 Write completed with error (sct=0, sc=8) 00:21:18.160 Write completed with error (sct=0, sc=8) 00:21:18.160 starting I/O failed: -6 00:21:18.160 Write completed with error (sct=0, sc=8) 
00:21:18.160 Write completed with error (sct=0, sc=8) 00:21:18.160 starting I/O failed: -6 00:21:18.160 Write completed with error (sct=0, sc=8) 00:21:18.160 Write completed with error (sct=0, sc=8) 00:21:18.160 starting I/O failed: -6 00:21:18.160 Write completed with error (sct=0, sc=8) 00:21:18.160 Write completed with error (sct=0, sc=8) 00:21:18.160 starting I/O failed: -6 00:21:18.160 Write completed with error (sct=0, sc=8) 00:21:18.160 Write completed with error (sct=0, sc=8) 00:21:18.160 starting I/O failed: -6 00:21:18.160 Write completed with error (sct=0, sc=8) 00:21:18.160 Write completed with error (sct=0, sc=8) 00:21:18.160 starting I/O failed: -6 00:21:18.160 [2024-11-20 07:17:22.315246] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:21:18.160 starting I/O failed: -6 00:21:18.160 Write completed with error (sct=0, sc=8) 00:21:18.160 starting I/O failed: -6 00:21:18.160 Write completed with error (sct=0, sc=8) 00:21:18.160 Write completed with error (sct=0, sc=8) 00:21:18.160 starting I/O failed: -6 00:21:18.160 Write completed with error (sct=0, sc=8) 00:21:18.160 starting I/O failed: -6 00:21:18.160 Write completed with error (sct=0, sc=8) 00:21:18.160 starting I/O failed: -6 00:21:18.160 Write completed with error (sct=0, sc=8) 00:21:18.160 Write completed with error (sct=0, sc=8) 00:21:18.160 starting I/O failed: -6 00:21:18.160 Write completed with error (sct=0, sc=8) 00:21:18.160 starting I/O failed: -6 00:21:18.160 Write completed with error (sct=0, sc=8) 00:21:18.160 starting I/O failed: -6 00:21:18.160 Write completed with error (sct=0, sc=8) 00:21:18.160 Write completed with error (sct=0, sc=8) 00:21:18.160 starting I/O failed: -6 00:21:18.160 Write completed with error (sct=0, sc=8) 00:21:18.160 starting I/O failed: -6 00:21:18.160 Write completed with error (sct=0, sc=8) 00:21:18.160 starting I/O failed: -6 00:21:18.160 
Write completed with error (sct=0, sc=8) 00:21:18.160 Write completed with error (sct=0, sc=8) 00:21:18.160 starting I/O failed: -6 00:21:18.160 Write completed with error (sct=0, sc=8) 00:21:18.160 starting I/O failed: -6 00:21:18.160 Write completed with error (sct=0, sc=8) 00:21:18.160 starting I/O failed: -6 00:21:18.160 Write completed with error (sct=0, sc=8) 00:21:18.160 Write completed with error (sct=0, sc=8) 00:21:18.160 starting I/O failed: -6 00:21:18.160 Write completed with error (sct=0, sc=8) 00:21:18.160 starting I/O failed: -6 00:21:18.160 Write completed with error (sct=0, sc=8) 00:21:18.160 starting I/O failed: -6 00:21:18.160 Write completed with error (sct=0, sc=8) 00:21:18.160 Write completed with error (sct=0, sc=8) 00:21:18.160 starting I/O failed: -6 00:21:18.160 Write completed with error (sct=0, sc=8) 00:21:18.160 starting I/O failed: -6 00:21:18.160 Write completed with error (sct=0, sc=8) 00:21:18.160 starting I/O failed: -6 00:21:18.160 Write completed with error (sct=0, sc=8) 00:21:18.160 Write completed with error (sct=0, sc=8) 00:21:18.160 starting I/O failed: -6 00:21:18.160 Write completed with error (sct=0, sc=8) 00:21:18.160 starting I/O failed: -6 00:21:18.160 Write completed with error (sct=0, sc=8) 00:21:18.160 starting I/O failed: -6 00:21:18.160 Write completed with error (sct=0, sc=8) 00:21:18.160 Write completed with error (sct=0, sc=8) 00:21:18.160 starting I/O failed: -6 00:21:18.160 Write completed with error (sct=0, sc=8) 00:21:18.160 starting I/O failed: -6 00:21:18.160 Write completed with error (sct=0, sc=8) 00:21:18.160 starting I/O failed: -6 00:21:18.160 Write completed with error (sct=0, sc=8) 00:21:18.160 Write completed with error (sct=0, sc=8) 00:21:18.160 starting I/O failed: -6 00:21:18.160 Write completed with error (sct=0, sc=8) 00:21:18.160 starting I/O failed: -6 00:21:18.160 Write completed with error (sct=0, sc=8) 00:21:18.160 starting I/O failed: -6 00:21:18.160 Write completed with error (sct=0, 
sc=8) 00:21:18.160 Write completed with error (sct=0, sc=8) 00:21:18.160 starting I/O failed: -6 00:21:18.160 Write completed with error (sct=0, sc=8) 00:21:18.160 starting I/O failed: -6 00:21:18.160 Write completed with error (sct=0, sc=8) 00:21:18.160 starting I/O failed: -6 00:21:18.160 Write completed with error (sct=0, sc=8) 00:21:18.160 Write completed with error (sct=0, sc=8) 00:21:18.160 starting I/O failed: -6 00:21:18.160 Write completed with error (sct=0, sc=8) 00:21:18.160 starting I/O failed: -6 00:21:18.160 Write completed with error (sct=0, sc=8) 00:21:18.160 starting I/O failed: -6 00:21:18.160 Write completed with error (sct=0, sc=8) 00:21:18.160 Write completed with error (sct=0, sc=8) 00:21:18.160 starting I/O failed: -6 00:21:18.160 Write completed with error (sct=0, sc=8) 00:21:18.160 starting I/O failed: -6 00:21:18.160 [2024-11-20 07:17:22.316305] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:18.160 Write completed with error (sct=0, sc=8) 00:21:18.160 starting I/O failed: -6 00:21:18.160 Write completed with error (sct=0, sc=8) 00:21:18.160 starting I/O failed: -6 00:21:18.160 Write completed with error (sct=0, sc=8) 00:21:18.160 starting I/O failed: -6 00:21:18.160 Write completed with error (sct=0, sc=8) 00:21:18.160 starting I/O failed: -6 00:21:18.160 Write completed with error (sct=0, sc=8) 00:21:18.160 starting I/O failed: -6 00:21:18.160 Write completed with error (sct=0, sc=8) 00:21:18.160 starting I/O failed: -6 00:21:18.160 Write completed with error (sct=0, sc=8) 00:21:18.160 starting I/O failed: -6 00:21:18.160 Write completed with error (sct=0, sc=8) 00:21:18.160 starting I/O failed: -6 00:21:18.160 Write completed with error (sct=0, sc=8) 00:21:18.160 starting I/O failed: -6 00:21:18.160 Write completed with error (sct=0, sc=8) 00:21:18.160 starting I/O failed: -6 00:21:18.160 Write completed with error 
(sct=0, sc=8) 00:21:18.160 starting I/O failed: -6 00:21:18.161 Write completed with error (sct=0, sc=8) 00:21:18.161 starting I/O failed: -6 00:21:18.161 Write completed with error (sct=0, sc=8) 00:21:18.161 starting I/O failed: -6 00:21:18.161 Write completed with error (sct=0, sc=8) 00:21:18.161 starting I/O failed: -6 00:21:18.161 Write completed with error (sct=0, sc=8) 00:21:18.161 starting I/O failed: -6 00:21:18.161 Write completed with error (sct=0, sc=8) 00:21:18.161 starting I/O failed: -6 00:21:18.161 Write completed with error (sct=0, sc=8) 00:21:18.161 starting I/O failed: -6 00:21:18.161 Write completed with error (sct=0, sc=8) 00:21:18.161 starting I/O failed: -6 00:21:18.161 Write completed with error (sct=0, sc=8) 00:21:18.161 starting I/O failed: -6 00:21:18.161 Write completed with error (sct=0, sc=8) 00:21:18.161 starting I/O failed: -6 00:21:18.161 Write completed with error (sct=0, sc=8) 00:21:18.161 starting I/O failed: -6 00:21:18.161 Write completed with error (sct=0, sc=8) 00:21:18.161 starting I/O failed: -6 00:21:18.161 Write completed with error (sct=0, sc=8) 00:21:18.161 starting I/O failed: -6 00:21:18.161 Write completed with error (sct=0, sc=8) 00:21:18.161 starting I/O failed: -6 00:21:18.161 Write completed with error (sct=0, sc=8) 00:21:18.161 starting I/O failed: -6 00:21:18.161 Write completed with error (sct=0, sc=8) 00:21:18.161 starting I/O failed: -6 00:21:18.161 Write completed with error (sct=0, sc=8) 00:21:18.161 starting I/O failed: -6 00:21:18.161 Write completed with error (sct=0, sc=8) 00:21:18.161 starting I/O failed: -6 00:21:18.161 Write completed with error (sct=0, sc=8) 00:21:18.161 starting I/O failed: -6 00:21:18.161 Write completed with error (sct=0, sc=8) 00:21:18.161 starting I/O failed: -6 00:21:18.161 Write completed with error (sct=0, sc=8) 00:21:18.161 starting I/O failed: -6 00:21:18.161 Write completed with error (sct=0, sc=8) 00:21:18.161 starting I/O failed: -6 00:21:18.161 Write completed with 
error (sct=0, sc=8) 00:21:18.161 starting I/O failed: -6 00:21:18.161 Write completed with error (sct=0, sc=8) 00:21:18.161 starting I/O failed: -6 00:21:18.161 Write completed with error (sct=0, sc=8) 00:21:18.161 starting I/O failed: -6 00:21:18.161 Write completed with error (sct=0, sc=8) 00:21:18.161 starting I/O failed: -6 00:21:18.161 Write completed with error (sct=0, sc=8) 00:21:18.161 starting I/O failed: -6 00:21:18.161 Write completed with error (sct=0, sc=8) 00:21:18.161 starting I/O failed: -6 00:21:18.161 Write completed with error (sct=0, sc=8) 00:21:18.161 starting I/O failed: -6 00:21:18.161 Write completed with error (sct=0, sc=8) 00:21:18.161 starting I/O failed: -6 00:21:18.161 Write completed with error (sct=0, sc=8) 00:21:18.161 starting I/O failed: -6 00:21:18.161 Write completed with error (sct=0, sc=8) 00:21:18.161 starting I/O failed: -6 00:21:18.161 Write completed with error (sct=0, sc=8) 00:21:18.161 starting I/O failed: -6 00:21:18.161 Write completed with error (sct=0, sc=8) 00:21:18.161 starting I/O failed: -6 00:21:18.161 Write completed with error (sct=0, sc=8) 00:21:18.161 starting I/O failed: -6 00:21:18.161 Write completed with error (sct=0, sc=8) 00:21:18.161 starting I/O failed: -6 00:21:18.161 Write completed with error (sct=0, sc=8) 00:21:18.161 starting I/O failed: -6 00:21:18.161 Write completed with error (sct=0, sc=8) 00:21:18.161 starting I/O failed: -6 00:21:18.161 Write completed with error (sct=0, sc=8) 00:21:18.161 starting I/O failed: -6 00:21:18.161 Write completed with error (sct=0, sc=8) 00:21:18.161 starting I/O failed: -6 00:21:18.161 Write completed with error (sct=0, sc=8) 00:21:18.161 starting I/O failed: -6 00:21:18.161 Write completed with error (sct=0, sc=8) 00:21:18.161 starting I/O failed: -6 00:21:18.161 Write completed with error (sct=0, sc=8) 00:21:18.161 starting I/O failed: -6 00:21:18.161 Write completed with error (sct=0, sc=8) 00:21:18.161 starting I/O failed: -6 00:21:18.161 Write completed 
with error (sct=0, sc=8) 00:21:18.161 starting I/O failed: -6 00:21:18.161 Write completed with error (sct=0, sc=8) 00:21:18.161 starting I/O failed: -6 00:21:18.161 Write completed with error (sct=0, sc=8) 00:21:18.161 starting I/O failed: -6 00:21:18.161 Write completed with error (sct=0, sc=8) 00:21:18.161 starting I/O failed: -6 00:21:18.161 Write completed with error (sct=0, sc=8) 00:21:18.161 starting I/O failed: -6 00:21:18.161 [2024-11-20 07:17:22.318099] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:18.161 NVMe io qpair process completion error 00:21:18.161 Write completed with error (sct=0, sc=8) 00:21:18.161 Write completed with error (sct=0, sc=8) 00:21:18.161 starting I/O failed: -6 00:21:18.161 Write completed with error (sct=0, sc=8) 00:21:18.161 Write completed with error (sct=0, sc=8) 00:21:18.161 Write completed with error (sct=0, sc=8) 00:21:18.161 Write completed with error (sct=0, sc=8) 00:21:18.161 starting I/O failed: -6 00:21:18.161 Write completed with error (sct=0, sc=8) 00:21:18.161 Write completed with error (sct=0, sc=8) 00:21:18.161 Write completed with error (sct=0, sc=8) 00:21:18.161 Write completed with error (sct=0, sc=8) 00:21:18.161 starting I/O failed: -6 00:21:18.161 Write completed with error (sct=0, sc=8) 00:21:18.161 Write completed with error (sct=0, sc=8) 00:21:18.161 Write completed with error (sct=0, sc=8) 00:21:18.161 Write completed with error (sct=0, sc=8) 00:21:18.161 starting I/O failed: -6 00:21:18.161 Write completed with error (sct=0, sc=8) 00:21:18.161 Write completed with error (sct=0, sc=8) 00:21:18.161 Write completed with error (sct=0, sc=8) 00:21:18.161 Write completed with error (sct=0, sc=8) 00:21:18.161 starting I/O failed: -6 00:21:18.161 Write completed with error (sct=0, sc=8) 00:21:18.161 Write completed with error (sct=0, sc=8) 00:21:18.161 Write completed with error (sct=0, sc=8) 
00:21:18.161 Write completed with error (sct=0, sc=8) 00:21:18.161 starting I/O failed: -6 00:21:18.161 Write completed with error (sct=0, sc=8) 00:21:18.161 Write completed with error (sct=0, sc=8) 00:21:18.161 Write completed with error (sct=0, sc=8) 00:21:18.161 Write completed with error (sct=0, sc=8) 00:21:18.161 starting I/O failed: -6 00:21:18.161 Write completed with error (sct=0, sc=8) 00:21:18.161 Write completed with error (sct=0, sc=8) 00:21:18.161 Write completed with error (sct=0, sc=8) 00:21:18.161 Write completed with error (sct=0, sc=8) 00:21:18.161 starting I/O failed: -6 00:21:18.161 Write completed with error (sct=0, sc=8) 00:21:18.161 starting I/O failed: -6 00:21:18.161 Write completed with error (sct=0, sc=8) 00:21:18.161 Write completed with error (sct=0, sc=8) 00:21:18.161 Write completed with error (sct=0, sc=8) 00:21:18.161 starting I/O failed: -6 00:21:18.161 Write completed with error (sct=0, sc=8) 00:21:18.161 starting I/O failed: -6 00:21:18.161 Write completed with error (sct=0, sc=8) 00:21:18.161 Write completed with error (sct=0, sc=8) 00:21:18.161 Write completed with error (sct=0, sc=8) 00:21:18.161 starting I/O failed: -6 00:21:18.161 Write completed with error (sct=0, sc=8) 00:21:18.161 starting I/O failed: -6 00:21:18.161 Write completed with error (sct=0, sc=8) 00:21:18.161 Write completed with error (sct=0, sc=8) 00:21:18.161 Write completed with error (sct=0, sc=8) 00:21:18.161 starting I/O failed: -6 00:21:18.161 Write completed with error (sct=0, sc=8) 00:21:18.161 starting I/O failed: -6 00:21:18.161 Write completed with error (sct=0, sc=8) 00:21:18.161 Write completed with error (sct=0, sc=8) 00:21:18.161 Write completed with error (sct=0, sc=8) 00:21:18.161 starting I/O failed: -6 00:21:18.161 Write completed with error (sct=0, sc=8) 00:21:18.161 starting I/O failed: -6 00:21:18.161 Write completed with error (sct=0, sc=8) 00:21:18.161 Write completed with error (sct=0, sc=8) 00:21:18.161 Write completed with error 
(sct=0, sc=8) 00:21:18.161 starting I/O failed: -6 00:21:18.161 Write completed with error (sct=0, sc=8) 00:21:18.161 starting I/O failed: -6 00:21:18.162 Write completed with error (sct=0, sc=8) 00:21:18.162 Write completed with error (sct=0, sc=8) 00:21:18.162 Write completed with error (sct=0, sc=8) 00:21:18.162 starting I/O failed: -6 00:21:18.162 Write completed with error (sct=0, sc=8) 00:21:18.162 starting I/O failed: -6 00:21:18.162 Write completed with error (sct=0, sc=8) 00:21:18.162 Write completed with error (sct=0, sc=8) 00:21:18.162 Write completed with error (sct=0, sc=8) 00:21:18.162 starting I/O failed: -6 00:21:18.162 Write completed with error (sct=0, sc=8) 00:21:18.162 starting I/O failed: -6 00:21:18.162 Write completed with error (sct=0, sc=8) 00:21:18.162 Write completed with error (sct=0, sc=8) 00:21:18.162 Write completed with error (sct=0, sc=8) 00:21:18.162 starting I/O failed: -6 00:21:18.162 Write completed with error (sct=0, sc=8) 00:21:18.162 starting I/O failed: -6 00:21:18.162 Write completed with error (sct=0, sc=8) 00:21:18.162 Write completed with error (sct=0, sc=8) 00:21:18.162 Write completed with error (sct=0, sc=8) 00:21:18.162 starting I/O failed: -6 00:21:18.162 Write completed with error (sct=0, sc=8) 00:21:18.162 starting I/O failed: -6 00:21:18.162 Write completed with error (sct=0, sc=8) 00:21:18.162 Write completed with error (sct=0, sc=8) 00:21:18.162 Write completed with error (sct=0, sc=8) 00:21:18.162 starting I/O failed: -6 00:21:18.162 Write completed with error (sct=0, sc=8) 00:21:18.162 starting I/O failed: -6 00:21:18.162 Write completed with error (sct=0, sc=8) 00:21:18.162 Write completed with error (sct=0, sc=8) 00:21:18.162 starting I/O failed: -6 00:21:18.162 Write completed with error (sct=0, sc=8) 00:21:18.162 starting I/O failed: -6 00:21:18.162 Write completed with error (sct=0, sc=8) 00:21:18.162 starting I/O failed: -6 00:21:18.162 Write completed with error (sct=0, sc=8) 00:21:18.162 Write 
completed with error (sct=0, sc=8) 00:21:18.162 starting I/O failed: -6 00:21:18.162 Write completed with error (sct=0, sc=8) 00:21:18.162 starting I/O failed: -6 00:21:18.162 Write completed with error (sct=0, sc=8) 00:21:18.162 starting I/O failed: -6 00:21:18.162 Write completed with error (sct=0, sc=8) 00:21:18.162 Write completed with error (sct=0, sc=8) 00:21:18.162 starting I/O failed: -6 00:21:18.162 Write completed with error (sct=0, sc=8) 00:21:18.162 starting I/O failed: -6 00:21:18.162 Write completed with error (sct=0, sc=8) 00:21:18.162 starting I/O failed: -6 00:21:18.162 Write completed with error (sct=0, sc=8) 00:21:18.162 Write completed with error (sct=0, sc=8) 00:21:18.162 starting I/O failed: -6 00:21:18.162 Write completed with error (sct=0, sc=8) 00:21:18.162 starting I/O failed: -6 00:21:18.162 Write completed with error (sct=0, sc=8) 00:21:18.162 starting I/O failed: -6 00:21:18.162 Write completed with error (sct=0, sc=8) 00:21:18.162 Write completed with error (sct=0, sc=8) 00:21:18.162 starting I/O failed: -6 00:21:18.162 Write completed with error (sct=0, sc=8) 00:21:18.162 starting I/O failed: -6 00:21:18.162 Write completed with error (sct=0, sc=8) 00:21:18.162 starting I/O failed: -6 00:21:18.162 Write completed with error (sct=0, sc=8) 00:21:18.162 Write completed with error (sct=0, sc=8) 00:21:18.162 starting I/O failed: -6 00:21:18.162 Write completed with error (sct=0, sc=8) 00:21:18.162 starting I/O failed: -6 00:21:18.162 Write completed with error (sct=0, sc=8) 00:21:18.162 starting I/O failed: -6 00:21:18.162 Write completed with error (sct=0, sc=8) 00:21:18.162 Write completed with error (sct=0, sc=8) 00:21:18.162 starting I/O failed: -6 00:21:18.162 Write completed with error (sct=0, sc=8) 00:21:18.162 starting I/O failed: -6 00:21:18.162 Write completed with error (sct=0, sc=8) 00:21:18.162 starting I/O failed: -6 00:21:18.162 Write completed with error (sct=0, sc=8) 00:21:18.162 Write completed with error (sct=0, sc=8) 
00:21:18.162 starting I/O failed: -6 00:21:18.162 Write completed with error (sct=0, sc=8) 00:21:18.162 starting I/O failed: -6 00:21:18.162 Write completed with error (sct=0, sc=8) 00:21:18.162 starting I/O failed: -6 00:21:18.162 Write completed with error (sct=0, sc=8) 00:21:18.162 Write completed with error (sct=0, sc=8) 00:21:18.162 starting I/O failed: -6 00:21:18.162 Write completed with error (sct=0, sc=8) 00:21:18.162 starting I/O failed: -6 00:21:18.162 Write completed with error (sct=0, sc=8) 00:21:18.162 starting I/O failed: -6 00:21:18.162 Write completed with error (sct=0, sc=8) 00:21:18.162 Write completed with error (sct=0, sc=8) 00:21:18.162 starting I/O failed: -6 00:21:18.162 Write completed with error (sct=0, sc=8) 00:21:18.162 starting I/O failed: -6 00:21:18.162 Write completed with error (sct=0, sc=8) 00:21:18.162 starting I/O failed: -6 00:21:18.162 Write completed with error (sct=0, sc=8) 00:21:18.162 Write completed with error (sct=0, sc=8) 00:21:18.162 starting I/O failed: -6 00:21:18.162 Write completed with error (sct=0, sc=8) 00:21:18.162 starting I/O failed: -6 00:21:18.162 Write completed with error (sct=0, sc=8) 00:21:18.162 starting I/O failed: -6 00:21:18.162 starting I/O failed: -6 00:21:18.162 starting I/O failed: -6 00:21:18.162 Write completed with error (sct=0, sc=8) 00:21:18.162 starting I/O failed: -6 00:21:18.162 Write completed with error (sct=0, sc=8) 00:21:18.162 starting I/O failed: -6 00:21:18.162 Write completed with error (sct=0, sc=8) 00:21:18.162 starting I/O failed: -6 00:21:18.162 Write completed with error (sct=0, sc=8) 00:21:18.162 starting I/O failed: -6 00:21:18.162 Write completed with error (sct=0, sc=8) 00:21:18.162 starting I/O failed: -6 00:21:18.162 Write completed with error (sct=0, sc=8) 00:21:18.162 starting I/O failed: -6 00:21:18.162 Write completed with error (sct=0, sc=8) 00:21:18.162 starting I/O failed: -6 00:21:18.162 Write completed with error (sct=0, sc=8) 00:21:18.162 starting I/O failed: 
-6 00:21:18.162 Write completed with error (sct=0, sc=8) 00:21:18.162 starting I/O failed: -6 00:21:18.162 Write completed with error (sct=0, sc=8) 00:21:18.162 starting I/O failed: -6 00:21:18.162 Write completed with error (sct=0, sc=8) 00:21:18.162 starting I/O failed: -6 00:21:18.162 Write completed with error (sct=0, sc=8) 00:21:18.162 starting I/O failed: -6 00:21:18.162 Write completed with error (sct=0, sc=8) 00:21:18.162 starting I/O failed: -6 00:21:18.162 Write completed with error (sct=0, sc=8) 00:21:18.162 starting I/O failed: -6 00:21:18.162 Write completed with error (sct=0, sc=8) 00:21:18.162 starting I/O failed: -6 00:21:18.162 Write completed with error (sct=0, sc=8) 00:21:18.162 starting I/O failed: -6 00:21:18.162 Write completed with error (sct=0, sc=8) 00:21:18.162 starting I/O failed: -6 00:21:18.162 Write completed with error (sct=0, sc=8) 00:21:18.162 starting I/O failed: -6 00:21:18.162 Write completed with error (sct=0, sc=8) 00:21:18.162 starting I/O failed: -6 00:21:18.162 Write completed with error (sct=0, sc=8) 00:21:18.162 starting I/O failed: -6 00:21:18.162 Write completed with error (sct=0, sc=8) 00:21:18.162 starting I/O failed: -6 00:21:18.162 Write completed with error (sct=0, sc=8) 00:21:18.162 starting I/O failed: -6 00:21:18.162 Write completed with error (sct=0, sc=8) 00:21:18.162 starting I/O failed: -6 00:21:18.162 Write completed with error (sct=0, sc=8) 00:21:18.162 starting I/O failed: -6 00:21:18.162 Write completed with error (sct=0, sc=8) 00:21:18.162 starting I/O failed: -6 00:21:18.162 Write completed with error (sct=0, sc=8) 00:21:18.162 starting I/O failed: -6 00:21:18.162 Write completed with error (sct=0, sc=8) 00:21:18.162 starting I/O failed: -6 00:21:18.162 Write completed with error (sct=0, sc=8) 00:21:18.162 starting I/O failed: -6 00:21:18.162 Write completed with error (sct=0, sc=8) 00:21:18.162 starting I/O failed: -6 00:21:18.162 Write completed with error (sct=0, sc=8) 00:21:18.162 starting I/O 
failed: -6 00:21:18.162 Write completed with error (sct=0, sc=8) 00:21:18.162 starting I/O failed: -6 00:21:18.163 Write completed with error (sct=0, sc=8) 00:21:18.163 starting I/O failed: -6 00:21:18.163 Write completed with error (sct=0, sc=8) 00:21:18.163 starting I/O failed: -6 00:21:18.163 Write completed with error (sct=0, sc=8) 00:21:18.163 starting I/O failed: -6 00:21:18.163 Write completed with error (sct=0, sc=8) 00:21:18.163 starting I/O failed: -6 00:21:18.163 Write completed with error (sct=0, sc=8) 00:21:18.163 starting I/O failed: -6 00:21:18.163 Write completed with error (sct=0, sc=8) 00:21:18.163 starting I/O failed: -6 00:21:18.163 Write completed with error (sct=0, sc=8) 00:21:18.163 starting I/O failed: -6 00:21:18.163 Write completed with error (sct=0, sc=8) 00:21:18.163 starting I/O failed: -6 00:21:18.163 Write completed with error (sct=0, sc=8) 00:21:18.163 starting I/O failed: -6 00:21:18.163 Write completed with error (sct=0, sc=8) 00:21:18.163 starting I/O failed: -6 00:21:18.163 Write completed with error (sct=0, sc=8) 00:21:18.163 starting I/O failed: -6 00:21:18.163 Write completed with error (sct=0, sc=8) 00:21:18.163 starting I/O failed: -6 00:21:18.163 Write completed with error (sct=0, sc=8) 00:21:18.163 starting I/O failed: -6 00:21:18.163 Write completed with error (sct=0, sc=8) 00:21:18.163 starting I/O failed: -6 00:21:18.163 Write completed with error (sct=0, sc=8) 00:21:18.163 starting I/O failed: -6 00:21:18.163 Write completed with error (sct=0, sc=8) 00:21:18.163 starting I/O failed: -6 00:21:18.163 Write completed with error (sct=0, sc=8) 00:21:18.163 starting I/O failed: -6 00:21:18.163 Write completed with error (sct=0, sc=8) 00:21:18.163 starting I/O failed: -6 00:21:18.163 Write completed with error (sct=0, sc=8) 00:21:18.163 starting I/O failed: -6 00:21:18.163 Write completed with error (sct=0, sc=8) 00:21:18.163 starting I/O failed: -6 00:21:18.163 Write completed with error (sct=0, sc=8) 00:21:18.163 starting 
I/O failed: -6 00:21:18.163 Write completed with error (sct=0, sc=8) 00:21:18.163 starting I/O failed: -6 00:21:18.163 Write completed with error (sct=0, sc=8) 00:21:18.163 starting I/O failed: -6 00:21:18.163 Write completed with error (sct=0, sc=8) 00:21:18.163 starting I/O failed: -6 00:21:18.163 Write completed with error (sct=0, sc=8) 00:21:18.163 starting I/O failed: -6 00:21:18.163 Write completed with error (sct=0, sc=8) 00:21:18.163 starting I/O failed: -6 00:21:18.163 Write completed with error (sct=0, sc=8) 00:21:18.163 starting I/O failed: -6 00:21:18.163 Write completed with error (sct=0, sc=8) 00:21:18.163 starting I/O failed: -6 00:21:18.163 Write completed with error (sct=0, sc=8) 00:21:18.163 starting I/O failed: -6 00:21:18.163 Write completed with error (sct=0, sc=8) 00:21:18.163 starting I/O failed: -6 00:21:18.163 Write completed with error (sct=0, sc=8) 00:21:18.163 starting I/O failed: -6 00:21:18.163 Write completed with error (sct=0, sc=8) 00:21:18.163 starting I/O failed: -6 00:21:18.163 Write completed with error (sct=0, sc=8) 00:21:18.163 starting I/O failed: -6 00:21:18.163 [2024-11-20 07:17:22.323667] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:18.163 NVMe io qpair process completion error 00:21:18.163 Write completed with error (sct=0, sc=8) 00:21:18.163 Write completed with error (sct=0, sc=8) 00:21:18.163 Write completed with error (sct=0, sc=8) 00:21:18.163 starting I/O failed: -6 00:21:18.163 Write completed with error (sct=0, sc=8) 00:21:18.163 Write completed with error (sct=0, sc=8) 00:21:18.163 Write completed with error (sct=0, sc=8) 00:21:18.163 Write completed with error (sct=0, sc=8) 00:21:18.163 starting I/O failed: -6 00:21:18.163 Write completed with error (sct=0, sc=8) 00:21:18.163 Write completed with error (sct=0, sc=8) 00:21:18.163 Write completed with error (sct=0, sc=8) 00:21:18.163 Write 
completed with error (sct=0, sc=8)
00:21:18.163 starting I/O failed: -6
00:21:18.163 Write completed with error (sct=0, sc=8)
00:21:18.163 [2024-11-20 07:17:22.324651] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:21:18.164 [2024-11-20 07:17:22.325584] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:21:18.164 [2024-11-20 07:17:22.326599] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:21:18.165 [2024-11-20 07:17:22.329934] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:21:18.165 NVMe io qpair process completion error
00:21:18.165 Initializing NVMe Controllers
00:21:18.165 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2
00:21:18.165 Controller IO queue size 128, less than required.
00:21:18.165 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:18.165 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9
00:21:18.165 Controller IO queue size 128, less than required.
00:21:18.165 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:18.165 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10
00:21:18.165 Controller IO queue size 128, less than required.
00:21:18.165 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:18.165 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5
00:21:18.165 Controller IO queue size 128, less than required.
00:21:18.165 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:18.165 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8
00:21:18.165 Controller IO queue size 128, less than required.
00:21:18.165 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:18.165 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6
00:21:18.165 Controller IO queue size 128, less than required.
00:21:18.165 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:18.165 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3
00:21:18.165 Controller IO queue size 128, less than required.
00:21:18.165 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:18.165 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:21:18.165 Controller IO queue size 128, less than required.
00:21:18.165 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:18.165 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7
00:21:18.165 Controller IO queue size 128, less than required.
00:21:18.166 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:18.166 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4
00:21:18.166 Controller IO queue size 128, less than required.
00:21:18.166 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:18.166 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0
00:21:18.166 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0
00:21:18.166 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0
00:21:18.166 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0
00:21:18.166 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0
00:21:18.166 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0
00:21:18.166 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0
00:21:18.166 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:21:18.166 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0
00:21:18.166 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0
00:21:18.166 Initialization complete. Launching workers.
00:21:18.166 ========================================================
00:21:18.166 Latency(us)
00:21:18.166 Device Information : IOPS MiB/s Average min max
00:21:18.166 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0: 2126.39 91.37 60193.30 754.37 99080.54
00:21:18.166 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0: 2132.50 91.63 60052.04 727.13 131227.58
00:21:18.166 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0: 2127.45 91.41 59528.84 949.47 110581.01
00:21:18.166 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0: 2136.71 91.81 59294.76 418.93 107906.70
00:21:18.166 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0: 2128.92 91.48 59524.91 919.16 105927.61
00:21:18.166 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0: 2122.81 91.21 59708.64 727.15 101556.72
00:21:18.166 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0: 2171.67 93.31 58385.01 903.85 104436.09
00:21:18.166 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2157.35 92.70 58807.28 1021.10 99393.56
00:21:18.166 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0: 2152.72 92.50 58973.28 914.81 113094.81
00:21:18.166 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0: 2202.21 94.63 57662.29 909.16 115781.79
00:21:18.166 ========================================================
00:21:18.166 Total : 21458.75 922.06 59205.18 418.93 131227.58
00:21:18.166
00:21:18.166 [2024-11-20 07:17:22.332907] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10a0ae0 is same with the state(6) to be set
00:21:18.166 [2024-11-20 07:17:22.332973] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x109fa70 is same with the state(6) to be set
00:21:18.166 [2024-11-20 07:17:22.333006] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10a0900 is same with the state(6) to be set
00:21:18.166 [2024-11-20 07:17:22.333037] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x109ebc0 is same with the state(6) to be set
00:21:18.166 [2024-11-20 07:17:22.333065] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x109f740 is same with the state(6) to be set
00:21:18.166 [2024-11-20 07:17:22.333094] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x109eef0 is same with the state(6) to be set
00:21:18.166 [2024-11-20 07:17:22.333123] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x109e560 is same with the state(6) to be set
00:21:18.166 [2024-11-20 07:17:22.333151] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10a0720 is same with the state(6) to be set
00:21:18.166 [2024-11-20 07:17:22.333180] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x109f410 is same with the state(6) to be set
00:21:18.166 [2024-11-20 07:17:22.333208] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x109e890 is same with the state(6) to be set
00:21:18.166 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:21:18.166 07:17:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1
00:21:19.104 07:17:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 1251932
00:21:19.104 07:17:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@650 -- # local es=0
00:21:19.104 07:17:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # valid_exec_arg wait 1251932
00:21:19.104 07:17:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 --
common/autotest_common.sh@638 -- # local arg=wait 00:21:19.104 07:17:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:19.104 07:17:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # type -t wait 00:21:19.364 07:17:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:19.364 07:17:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@653 -- # wait 1251932 00:21:19.364 07:17:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@653 -- # es=1 00:21:19.364 07:17:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:19.364 07:17:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:19.364 07:17:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:19.364 07:17:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget 00:21:19.364 07:17:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:21:19.364 07:17:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:21:19.364 07:17:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:19.364 07:17:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini 00:21:19.364 07:17:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@516 -- # nvmfcleanup 00:21:19.364 07:17:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync 00:21:19.364 07:17:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:19.364 07:17:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e 00:21:19.364 07:17:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:19.364 07:17:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:19.364 rmmod nvme_tcp 00:21:19.364 rmmod nvme_fabrics 00:21:19.364 rmmod nvme_keyring 00:21:19.364 07:17:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:19.364 07:17:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e 00:21:19.364 07:17:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0 00:21:19.364 07:17:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@517 -- # '[' -n 1251857 ']' 00:21:19.364 07:17:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # killprocess 1251857 00:21:19.364 07:17:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@952 -- # '[' -z 1251857 ']' 00:21:19.364 07:17:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@956 -- # kill -0 1251857 00:21:19.364 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (1251857) - No such process 00:21:19.364 07:17:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@979 -- # echo 'Process with pid 1251857 is not found' 00:21:19.364 Process with pid 1251857 is not found 
00:21:19.364 07:17:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:19.364 07:17:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:19.364 07:17:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:19.364 07:17:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr 00:21:19.364 07:17:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-save 00:21:19.364 07:17:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:19.364 07:17:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-restore 00:21:19.364 07:17:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:19.364 07:17:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:19.364 07:17:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:19.364 07:17:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:19.364 07:17:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:21.271 07:17:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:21.271 00:21:21.271 real 0m9.767s 00:21:21.271 user 0m24.910s 00:21:21.271 sys 0m5.171s 00:21:21.271 07:17:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:21.271 07:17:25 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:21.271 ************************************ 00:21:21.271 END TEST nvmf_shutdown_tc4 00:21:21.271 ************************************ 00:21:21.529 07:17:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT 00:21:21.529 00:21:21.529 real 0m41.589s 00:21:21.529 user 1m44.050s 00:21:21.529 sys 0m14.037s 00:21:21.529 07:17:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:21.529 07:17:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:21.529 ************************************ 00:21:21.529 END TEST nvmf_shutdown 00:21:21.529 ************************************ 00:21:21.529 07:17:25 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:21:21.529 07:17:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:21:21.529 07:17:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:21:21.529 07:17:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:21.529 ************************************ 00:21:21.529 START TEST nvmf_nsid 00:21:21.529 ************************************ 00:21:21.529 07:17:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:21:21.529 * Looking for test storage... 
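The per-device latency summary that spdk_nvme_perf printed earlier in this log can be aggregated offline. A minimal sketch (a hypothetical helper, not part of SPDK): because the numeric columns (IOPS, MiB/s, Average, min, max) are taken relative to the end of each row, any leading timestamp prefix on a line is harmless.

```shell
# Hypothetical post-processing helper: sums spdk_nvme_perf summary rows of the
# form "... NSID 1 from core N: IOPS MiB/s Average min max" and reports an
# IOPS-weighted average latency. Fields are indexed from NF so leading
# timestamps do not shift the columns.
sum_perf() {
    awk '/from core [0-9]+:/ {
        iops += $(NF-4)               # IOPS column
        mibs += $(NF-3)               # MiB/s column
        lat  += $(NF-4) * $(NF-2)     # weight each average latency by its IOPS
    }
    END {
        if (iops) printf "%.2f IOPS, %.2f MiB/s, %.2f us avg\n", iops, mibs, lat / iops
    }'
}

# Example with two made-up rows:
printf '%s\n' \
    'TCP (addr:10.0.0.2 subnqn:nqn.x) NSID 1 from core 0: 100.00 4.00 50.00 1.00 99.00' \
    'TCP (addr:10.0.0.2 subnqn:nqn.y) NSID 1 from core 0: 300.00 12.00 70.00 1.00 99.00' |
    sum_perf   # prints: 400.00 IOPS, 16.00 MiB/s, 65.00 us avg
```

The "Total" row printed by spdk_nvme_perf does not contain "from core", so it is not double-counted.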
00:21:21.530 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:21.530 07:17:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:21:21.530 07:17:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1691 -- # lcov --version 00:21:21.530 07:17:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:21:21.789 07:17:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:21:21.789 07:17:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:21.789 07:17:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:21.789 07:17:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:21.789 07:17:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:21:21.789 07:17:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:21:21.789 07:17:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:21:21.789 07:17:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:21:21.789 07:17:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:21:21.789 07:17:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:21:21.789 07:17:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:21:21.789 07:17:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:21.789 07:17:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:21:21.789 07:17:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:21:21.789 07:17:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:21.789 
07:17:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:21.789 07:17:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:21:21.789 07:17:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:21:21.789 07:17:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:21.789 07:17:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:21:21.789 07:17:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:21:21.789 07:17:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:21:21.789 07:17:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:21:21.789 07:17:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:21.789 07:17:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:21:21.789 07:17:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:21:21.789 07:17:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:21.789 07:17:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:21.789 07:17:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:21:21.789 07:17:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:21.789 07:17:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:21:21.789 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:21.789 --rc genhtml_branch_coverage=1 00:21:21.789 --rc genhtml_function_coverage=1 00:21:21.789 --rc genhtml_legend=1 00:21:21.789 --rc geninfo_all_blocks=1 00:21:21.789 --rc 
geninfo_unexecuted_blocks=1 00:21:21.789 00:21:21.789 ' 00:21:21.789 07:17:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:21:21.789 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:21.789 --rc genhtml_branch_coverage=1 00:21:21.789 --rc genhtml_function_coverage=1 00:21:21.789 --rc genhtml_legend=1 00:21:21.789 --rc geninfo_all_blocks=1 00:21:21.789 --rc geninfo_unexecuted_blocks=1 00:21:21.789 00:21:21.789 ' 00:21:21.789 07:17:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:21:21.789 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:21.789 --rc genhtml_branch_coverage=1 00:21:21.789 --rc genhtml_function_coverage=1 00:21:21.789 --rc genhtml_legend=1 00:21:21.789 --rc geninfo_all_blocks=1 00:21:21.789 --rc geninfo_unexecuted_blocks=1 00:21:21.789 00:21:21.789 ' 00:21:21.789 07:17:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:21:21.789 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:21.789 --rc genhtml_branch_coverage=1 00:21:21.789 --rc genhtml_function_coverage=1 00:21:21.789 --rc genhtml_legend=1 00:21:21.789 --rc geninfo_all_blocks=1 00:21:21.789 --rc geninfo_unexecuted_blocks=1 00:21:21.789 00:21:21.789 ' 00:21:21.789 07:17:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:21.789 07:17:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:21:21.789 07:17:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:21.789 07:17:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:21.789 07:17:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:21.789 07:17:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:21:21.789 07:17:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:21.789 07:17:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:21.789 07:17:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:21.789 07:17:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:21.789 07:17:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:21.789 07:17:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:21.789 07:17:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:21.789 07:17:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:21.789 07:17:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:21.789 07:17:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:21.789 07:17:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:21.789 07:17:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:21.789 07:17:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:21.789 07:17:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:21:21.789 07:17:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:21.789 07:17:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:21.789 07:17:26 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:21.790 07:17:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:21.790 07:17:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:21.790 07:17:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:21.790 07:17:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:21:21.790 07:17:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:21.790 07:17:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:21:21.790 07:17:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:21.790 07:17:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:21.790 07:17:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:21.790 07:17:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:21.790 07:17:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:21.790 07:17:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:21.790 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:21.790 07:17:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:21.790 07:17:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:21.790 07:17:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:21.790 07:17:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:21:21.790 07:17:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:21:21.790 07:17:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@13 -- # subnqn3=nqn.2024-10.io.spdk:cnode2 00:21:21.790 07:17:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:21:21.790 07:17:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:21:21.790 07:17:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:21:21.790 07:17:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:21.790 07:17:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:21.790 07:17:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:21.790 07:17:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:21.790 07:17:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:21.790 07:17:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:21.790 07:17:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # 
eval '_remove_spdk_ns 15> /dev/null' 00:21:21.790 07:17:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:21.790 07:17:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:21.790 07:17:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:21.790 07:17:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@309 -- # xtrace_disable 00:21:21.790 07:17:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:21:28.361 07:17:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:28.361 07:17:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # pci_devs=() 00:21:28.361 07:17:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:28.361 07:17:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:28.361 07:17:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:28.361 07:17:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:28.361 07:17:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:28.361 07:17:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # net_devs=() 00:21:28.361 07:17:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:28.361 07:17:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # e810=() 00:21:28.361 07:17:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # local -ga e810 00:21:28.361 07:17:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # x722=() 00:21:28.361 07:17:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # local -ga x722 00:21:28.361 07:17:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@322 -- # mlx=() 00:21:28.361 07:17:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # local -ga mlx 00:21:28.361 07:17:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:28.361 07:17:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:28.361 07:17:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:28.361 07:17:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:28.361 07:17:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:28.361 07:17:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:28.361 07:17:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:28.361 07:17:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:28.361 07:17:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:28.361 07:17:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:28.361 07:17:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:28.361 07:17:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:28.361 07:17:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:28.361 07:17:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:28.361 07:17:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@353 -- # [[ e810 == 
mlx5 ]] 00:21:28.361 07:17:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:28.361 07:17:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:28.361 07:17:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:28.361 07:17:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:28.361 07:17:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:28.361 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:28.361 07:17:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:28.361 07:17:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:28.361 07:17:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:28.361 07:17:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:28.361 07:17:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:28.361 07:17:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:28.362 07:17:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:28.362 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:28.362 07:17:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:28.362 07:17:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:28.362 07:17:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:28.362 07:17:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:28.362 07:17:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == 
rdma ]] 00:21:28.362 07:17:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:28.362 07:17:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:28.362 07:17:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:28.362 07:17:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:28.362 07:17:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:28.362 07:17:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:28.362 07:17:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:28.362 07:17:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:28.362 07:17:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:28.362 07:17:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:28.362 07:17:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:28.362 Found net devices under 0000:86:00.0: cvl_0_0 00:21:28.362 07:17:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:28.362 07:17:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:28.362 07:17:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:28.362 07:17:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:28.362 07:17:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:28.362 07:17:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up 
== up ]] 00:21:28.362 07:17:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:28.362 07:17:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:28.362 07:17:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:28.362 Found net devices under 0000:86:00.1: cvl_0_1 00:21:28.362 07:17:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:28.362 07:17:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:28.362 07:17:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # is_hw=yes 00:21:28.362 07:17:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:28.362 07:17:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:28.362 07:17:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:28.362 07:17:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:28.362 07:17:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:28.362 07:17:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:28.362 07:17:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:28.362 07:17:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:28.362 07:17:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:28.362 07:17:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:28.362 07:17:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:28.362 07:17:31 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:28.362 07:17:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:28.362 07:17:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:28.362 07:17:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:28.362 07:17:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:28.362 07:17:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:28.362 07:17:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:28.362 07:17:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:28.362 07:17:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:28.362 07:17:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:28.362 07:17:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:28.362 07:17:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:28.362 07:17:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:28.362 07:17:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:28.362 07:17:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:28.362 PING 10.0.0.2 (10.0.0.2) 
56(84) bytes of data. 00:21:28.362 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.358 ms 00:21:28.362 00:21:28.362 --- 10.0.0.2 ping statistics --- 00:21:28.362 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:28.362 rtt min/avg/max/mdev = 0.358/0.358/0.358/0.000 ms 00:21:28.362 07:17:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:28.362 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:28.362 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.119 ms 00:21:28.362 00:21:28.362 --- 10.0.0.1 ping statistics --- 00:21:28.362 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:28.362 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:21:28.362 07:17:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:28.362 07:17:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@450 -- # return 0 00:21:28.362 07:17:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:28.362 07:17:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:28.362 07:17:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:28.362 07:17:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:28.362 07:17:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:28.362 07:17:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:28.362 07:17:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:28.362 07:17:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:21:28.362 07:17:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:28.362 07:17:32 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:28.362 07:17:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:21:28.362 07:17:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=1256506 00:21:28.362 07:17:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:21:28.362 07:17:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 1256506 00:21:28.362 07:17:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@833 -- # '[' -z 1256506 ']' 00:21:28.362 07:17:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:28.362 07:17:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:28.362 07:17:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:28.362 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:28.362 07:17:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:28.362 07:17:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:21:28.362 [2024-11-20 07:17:32.103250] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 
00:21:28.362 [2024-11-20 07:17:32.103293] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:28.363 [2024-11-20 07:17:32.183135] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:28.363 [2024-11-20 07:17:32.224429] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:28.363 [2024-11-20 07:17:32.224468] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:28.363 [2024-11-20 07:17:32.224476] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:28.363 [2024-11-20 07:17:32.224482] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:28.363 [2024-11-20 07:17:32.224488] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:28.363 [2024-11-20 07:17:32.225058] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:28.363 07:17:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:28.363 07:17:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@866 -- # return 0 00:21:28.363 07:17:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:28.363 07:17:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:28.363 07:17:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:21:28.363 07:17:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:28.363 07:17:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:21:28.363 07:17:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=1256629 00:21:28.363 07:17:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:21:28.363 07:17:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.2 00:21:28.363 07:17:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:21:28.363 07:17:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:21:28.363 07:17:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:28.363 07:17:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:28.363 07:17:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:28.363 07:17:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:28.363 
07:17:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:28.363 07:17:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:28.363 07:17:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:28.363 07:17:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:28.363 07:17:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:28.363 07:17:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:21:28.363 07:17:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:21:28.363 07:17:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=0ca29952-73e6-4cf7-8ab9-0dacc80da08a 00:21:28.363 07:17:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:21:28.363 07:17:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=56bdb6d1-3314-437e-8d0f-8e2b1d6250bf 00:21:28.363 07:17:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:21:28.363 07:17:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=e2bcfbbf-06dd-46f2-b22c-46192a1e7d71 00:21:28.363 07:17:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:21:28.363 07:17:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:28.363 07:17:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:21:28.363 null0 00:21:28.363 null1 00:21:28.363 [2024-11-20 07:17:32.406764] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 
00:21:28.363 [2024-11-20 07:17:32.406809] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1256629 ] 00:21:28.363 null2 00:21:28.363 [2024-11-20 07:17:32.412779] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:28.363 [2024-11-20 07:17:32.436968] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:28.363 07:17:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:28.363 07:17:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 1256629 /var/tmp/tgt2.sock 00:21:28.363 07:17:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@833 -- # '[' -z 1256629 ']' 00:21:28.363 07:17:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/tgt2.sock 00:21:28.363 07:17:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:28.363 07:17:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 00:21:28.363 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 
00:21:28.363 07:17:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:28.363 07:17:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:21:28.363 [2024-11-20 07:17:32.479139] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:28.363 [2024-11-20 07:17:32.520511] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:28.363 07:17:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:28.363 07:17:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@866 -- # return 0 00:21:28.363 07:17:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:21:28.624 [2024-11-20 07:17:33.049915] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:28.624 [2024-11-20 07:17:33.066030] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:21:28.624 nvme0n1 nvme0n2 00:21:28.624 nvme1n1 00:21:28.624 07:17:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:21:28.624 07:17:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:21:28.624 07:17:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:30.004 07:17:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:21:30.004 07:17:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:21:30.004 07:17:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 
]] 00:21:30.004 07:17:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:21:30.004 07:17:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 00:21:30.004 07:17:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:21:30.004 07:17:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:21:30.004 07:17:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1237 -- # local i=0 00:21:30.004 07:17:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:21:30.004 07:17:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n1 00:21:30.004 07:17:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # '[' 0 -lt 15 ']' 00:21:30.004 07:17:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # i=1 00:21:30.004 07:17:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # sleep 1 00:21:30.982 07:17:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:21:30.982 07:17:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n1 00:21:30.982 07:17:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:21:30.982 07:17:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n1 00:21:30.982 07:17:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1248 -- # return 0 00:21:30.982 07:17:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid 0ca29952-73e6-4cf7-8ab9-0dacc80da08a 00:21:30.982 07:17:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:21:30.982 07:17:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:21:30.982 07:17:35 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:21:30.982 07:17:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:21:30.982 07:17:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:21:30.982 07:17:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=0ca2995273e64cf78ab90dacc80da08a 00:21:30.982 07:17:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 0CA2995273E64CF78AB90DACC80DA08A 00:21:30.982 07:17:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ 0CA2995273E64CF78AB90DACC80DA08A == \0\C\A\2\9\9\5\2\7\3\E\6\4\C\F\7\8\A\B\9\0\D\A\C\C\8\0\D\A\0\8\A ]] 00:21:30.982 07:17:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:21:30.982 07:17:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1237 -- # local i=0 00:21:30.982 07:17:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:21:30.982 07:17:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n2 00:21:30.982 07:17:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:21:30.982 07:17:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n2 00:21:30.982 07:17:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1248 -- # return 0 00:21:30.982 07:17:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid 56bdb6d1-3314-437e-8d0f-8e2b1d6250bf 00:21:30.982 07:17:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:21:30.982 07:17:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:21:30.982 07:17:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:21:30.982 
07:17:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:21:30.982 07:17:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:21:30.982 07:17:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=56bdb6d13314437e8d0f8e2b1d6250bf 00:21:30.982 07:17:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 56BDB6D13314437E8D0F8E2B1D6250BF 00:21:30.982 07:17:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ 56BDB6D13314437E8D0F8E2B1D6250BF == \5\6\B\D\B\6\D\1\3\3\1\4\4\3\7\E\8\D\0\F\8\E\2\B\1\D\6\2\5\0\B\F ]] 00:21:30.982 07:17:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:21:30.982 07:17:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1237 -- # local i=0 00:21:30.982 07:17:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:21:30.982 07:17:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n3 00:21:30.982 07:17:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:21:30.982 07:17:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n3 00:21:30.982 07:17:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1248 -- # return 0 00:21:30.982 07:17:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid e2bcfbbf-06dd-46f2-b22c-46192a1e7d71 00:21:30.982 07:17:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:21:30.982 07:17:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:21:30.982 07:17:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:21:30.982 07:17:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 
00:21:30.982 07:17:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:21:30.982 07:17:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=e2bcfbbf06dd46f2b22c46192a1e7d71 00:21:30.982 07:17:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo E2BCFBBF06DD46F2B22C46192A1E7D71 00:21:30.982 07:17:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ E2BCFBBF06DD46F2B22C46192A1E7D71 == \E\2\B\C\F\B\B\F\0\6\D\D\4\6\F\2\B\2\2\C\4\6\1\9\2\A\1\E\7\D\7\1 ]] 00:21:30.982 07:17:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:21:31.288 07:17:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:21:31.288 07:17:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:21:31.288 07:17:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 1256629 00:21:31.288 07:17:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@952 -- # '[' -z 1256629 ']' 00:21:31.288 07:17:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@956 -- # kill -0 1256629 00:21:31.288 07:17:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@957 -- # uname 00:21:31.288 07:17:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:31.288 07:17:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1256629 00:21:31.288 07:17:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:21:31.288 07:17:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:21:31.288 07:17:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1256629' 00:21:31.288 killing process with pid 1256629 00:21:31.288 07:17:35 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@971 -- # kill 1256629 00:21:31.288 07:17:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@976 -- # wait 1256629 00:21:31.548 07:17:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:21:31.548 07:17:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:31.548 07:17:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:21:31.548 07:17:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:31.548 07:17:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- # set +e 00:21:31.548 07:17:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:31.548 07:17:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:31.548 rmmod nvme_tcp 00:21:31.548 rmmod nvme_fabrics 00:21:31.548 rmmod nvme_keyring 00:21:31.548 07:17:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:31.548 07:17:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:21:31.548 07:17:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:21:31.548 07:17:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 1256506 ']' 00:21:31.548 07:17:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 1256506 00:21:31.548 07:17:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@952 -- # '[' -z 1256506 ']' 00:21:31.548 07:17:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@956 -- # kill -0 1256506 00:21:31.548 07:17:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@957 -- # uname 00:21:31.548 07:17:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:31.548 07:17:36 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1256506 00:21:31.548 07:17:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:21:31.548 07:17:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:21:31.548 07:17:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1256506' 00:21:31.548 killing process with pid 1256506 00:21:31.548 07:17:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@971 -- # kill 1256506 00:21:31.548 07:17:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@976 -- # wait 1256506 00:21:31.808 07:17:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:31.808 07:17:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:31.808 07:17:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:31.808 07:17:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:21:31.808 07:17:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:21:31.808 07:17:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:31.808 07:17:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:21:31.808 07:17:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:31.808 07:17:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:31.808 07:17:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:31.808 07:17:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:31.808 07:17:36 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:34.361 07:17:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:34.361 00:21:34.361 real 0m12.361s 00:21:34.361 user 0m9.720s 00:21:34.361 sys 0m5.427s 00:21:34.361 07:17:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:34.361 07:17:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:21:34.361 ************************************ 00:21:34.361 END TEST nvmf_nsid 00:21:34.361 ************************************ 00:21:34.361 07:17:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:21:34.361 00:21:34.361 real 12m5.593s 00:21:34.361 user 26m4.086s 00:21:34.361 sys 3m44.452s 00:21:34.361 07:17:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:34.361 07:17:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:34.361 ************************************ 00:21:34.361 END TEST nvmf_target_extra 00:21:34.361 ************************************ 00:21:34.361 07:17:38 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:21:34.361 07:17:38 nvmf_tcp -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:21:34.361 07:17:38 nvmf_tcp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:21:34.361 07:17:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:34.361 ************************************ 00:21:34.361 START TEST nvmf_host 00:21:34.361 ************************************ 00:21:34.361 07:17:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:21:34.361 * Looking for test storage... 
00:21:34.361 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:21:34.361 07:17:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:21:34.361 07:17:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1691 -- # lcov --version 00:21:34.361 07:17:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:21:34.361 07:17:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:21:34.361 07:17:38 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:34.361 07:17:38 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:34.361 07:17:38 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:34.361 07:17:38 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:21:34.361 07:17:38 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:21:34.361 07:17:38 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:21:34.361 07:17:38 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:21:34.361 07:17:38 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:21:34.361 07:17:38 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:21:34.361 07:17:38 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:21:34.361 07:17:38 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:34.361 07:17:38 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:21:34.361 07:17:38 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:21:34.361 07:17:38 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:34.361 07:17:38 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:34.361 07:17:38 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:21:34.361 07:17:38 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:21:34.361 07:17:38 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:34.361 07:17:38 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:21:34.361 07:17:38 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:21:34.361 07:17:38 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:21:34.361 07:17:38 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:21:34.361 07:17:38 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:34.362 07:17:38 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:21:34.362 07:17:38 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:21:34.362 07:17:38 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:34.362 07:17:38 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:34.362 07:17:38 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:21:34.362 07:17:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:34.362 07:17:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:21:34.362 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:34.362 --rc genhtml_branch_coverage=1 00:21:34.362 --rc genhtml_function_coverage=1 00:21:34.362 --rc genhtml_legend=1 00:21:34.362 --rc geninfo_all_blocks=1 00:21:34.362 --rc geninfo_unexecuted_blocks=1 00:21:34.362 00:21:34.362 ' 00:21:34.362 07:17:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:21:34.362 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:34.362 --rc genhtml_branch_coverage=1 00:21:34.362 --rc genhtml_function_coverage=1 00:21:34.362 --rc genhtml_legend=1 00:21:34.362 --rc 
geninfo_all_blocks=1 00:21:34.362 --rc geninfo_unexecuted_blocks=1 00:21:34.362 00:21:34.362 ' 00:21:34.362 07:17:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:21:34.362 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:34.362 --rc genhtml_branch_coverage=1 00:21:34.362 --rc genhtml_function_coverage=1 00:21:34.362 --rc genhtml_legend=1 00:21:34.362 --rc geninfo_all_blocks=1 00:21:34.362 --rc geninfo_unexecuted_blocks=1 00:21:34.362 00:21:34.362 ' 00:21:34.362 07:17:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:21:34.362 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:34.362 --rc genhtml_branch_coverage=1 00:21:34.362 --rc genhtml_function_coverage=1 00:21:34.362 --rc genhtml_legend=1 00:21:34.362 --rc geninfo_all_blocks=1 00:21:34.362 --rc geninfo_unexecuted_blocks=1 00:21:34.362 00:21:34.362 ' 00:21:34.362 07:17:38 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:34.362 07:17:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:21:34.362 07:17:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:34.362 07:17:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:34.362 07:17:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:34.362 07:17:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:34.362 07:17:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:34.362 07:17:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:34.362 07:17:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:34.362 07:17:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:34.362 07:17:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:34.362 07:17:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 
-- # nvme gen-hostnqn 00:21:34.362 07:17:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:34.362 07:17:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:34.362 07:17:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:34.362 07:17:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:34.362 07:17:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:34.362 07:17:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:34.362 07:17:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:34.362 07:17:38 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:21:34.362 07:17:38 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:34.362 07:17:38 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:34.362 07:17:38 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:34.362 07:17:38 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:34.362 07:17:38 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:34.362 07:17:38 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:34.362 07:17:38 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:21:34.362 07:17:38 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:34.362 07:17:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:21:34.362 07:17:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:34.362 07:17:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:34.362 07:17:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:34.362 07:17:38 nvmf_tcp.nvmf_host -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:34.362 07:17:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:34.362 07:17:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:34.362 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:34.362 07:17:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:34.362 07:17:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:34.362 07:17:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:34.362 07:17:38 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:21:34.362 07:17:38 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:21:34.362 07:17:38 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:21:34.362 07:17:38 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:21:34.362 07:17:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:21:34.362 07:17:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:21:34.362 07:17:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:34.362 ************************************ 00:21:34.362 START TEST nvmf_multicontroller 00:21:34.362 ************************************ 00:21:34.362 07:17:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:21:34.362 * Looking for test storage... 
00:21:34.362 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:34.362 07:17:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:21:34.362 07:17:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1691 -- # lcov --version 00:21:34.362 07:17:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:21:34.362 07:17:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:21:34.362 07:17:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:34.362 07:17:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:34.362 07:17:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:34.362 07:17:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:21:34.362 07:17:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:21:34.362 07:17:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:21:34.362 07:17:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:21:34.362 07:17:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:21:34.363 07:17:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:21:34.363 07:17:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:21:34.363 07:17:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:34.363 07:17:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:21:34.363 07:17:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:21:34.363 07:17:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:21:34.363 07:17:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:34.363 07:17:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:21:34.363 07:17:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:21:34.363 07:17:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:34.363 07:17:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:21:34.363 07:17:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:21:34.363 07:17:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:21:34.363 07:17:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:21:34.363 07:17:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:34.363 07:17:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:21:34.363 07:17:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:21:34.363 07:17:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:34.363 07:17:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:34.363 07:17:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:21:34.363 07:17:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:34.363 07:17:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:21:34.363 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:34.363 --rc genhtml_branch_coverage=1 00:21:34.363 --rc genhtml_function_coverage=1 
00:21:34.363 --rc genhtml_legend=1 00:21:34.363 --rc geninfo_all_blocks=1 00:21:34.363 --rc geninfo_unexecuted_blocks=1 00:21:34.363 00:21:34.363 ' 00:21:34.363 07:17:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:21:34.363 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:34.363 --rc genhtml_branch_coverage=1 00:21:34.363 --rc genhtml_function_coverage=1 00:21:34.363 --rc genhtml_legend=1 00:21:34.363 --rc geninfo_all_blocks=1 00:21:34.363 --rc geninfo_unexecuted_blocks=1 00:21:34.363 00:21:34.363 ' 00:21:34.363 07:17:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:21:34.363 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:34.363 --rc genhtml_branch_coverage=1 00:21:34.363 --rc genhtml_function_coverage=1 00:21:34.363 --rc genhtml_legend=1 00:21:34.363 --rc geninfo_all_blocks=1 00:21:34.363 --rc geninfo_unexecuted_blocks=1 00:21:34.363 00:21:34.363 ' 00:21:34.363 07:17:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:21:34.363 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:34.363 --rc genhtml_branch_coverage=1 00:21:34.363 --rc genhtml_function_coverage=1 00:21:34.363 --rc genhtml_legend=1 00:21:34.363 --rc geninfo_all_blocks=1 00:21:34.363 --rc geninfo_unexecuted_blocks=1 00:21:34.363 00:21:34.363 ' 00:21:34.363 07:17:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:34.363 07:17:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:21:34.363 07:17:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:34.363 07:17:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:34.363 07:17:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:21:34.363 07:17:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:34.363 07:17:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:34.363 07:17:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:34.363 07:17:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:34.363 07:17:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:34.363 07:17:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:34.363 07:17:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:34.363 07:17:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:34.363 07:17:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:34.363 07:17:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:34.363 07:17:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:34.363 07:17:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:34.363 07:17:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:34.363 07:17:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:34.363 07:17:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:21:34.363 07:17:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh 
]] 00:21:34.363 07:17:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:34.363 07:17:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:34.363 07:17:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:34.363 07:17:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:34.363 07:17:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:34.363 07:17:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:21:34.363 07:17:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:34.363 07:17:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:21:34.364 07:17:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:34.364 07:17:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:34.364 07:17:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:34.364 07:17:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:34.364 07:17:38 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:34.364 07:17:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:34.364 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:34.364 07:17:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:34.364 07:17:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:34.364 07:17:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:34.364 07:17:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:34.364 07:17:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:34.364 07:17:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:21:34.364 07:17:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:21:34.364 07:17:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:34.364 07:17:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:21:34.364 07:17:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:21:34.364 07:17:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:34.364 07:17:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:34.364 07:17:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:34.364 07:17:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:34.364 07:17:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@440 -- # remove_spdk_ns 00:21:34.364 07:17:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:34.364 07:17:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:34.364 07:17:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:34.364 07:17:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:34.364 07:17:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:34.364 07:17:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:21:34.364 07:17:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:40.936 07:17:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:40.936 07:17:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:21:40.936 07:17:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:40.936 07:17:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:40.936 07:17:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:40.936 07:17:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:40.936 07:17:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:40.936 07:17:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:21:40.936 07:17:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:40.936 07:17:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:21:40.936 07:17:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@320 -- # local -ga e810 00:21:40.936 07:17:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:21:40.936 07:17:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:21:40.936 07:17:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:21:40.936 07:17:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:21:40.936 07:17:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:40.936 07:17:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:40.936 07:17:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:40.936 07:17:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:40.936 07:17:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:40.936 07:17:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:40.936 07:17:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:40.936 07:17:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:40.936 07:17:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:40.936 07:17:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:40.936 07:17:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:40.936 07:17:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:40.936 07:17:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:40.937 07:17:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:40.937 07:17:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:40.937 07:17:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:40.937 07:17:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:40.937 07:17:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:40.937 07:17:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:40.937 07:17:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:40.937 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:40.937 07:17:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:40.937 07:17:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:40.937 07:17:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:40.937 07:17:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:40.937 07:17:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:40.937 07:17:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:40.937 07:17:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:40.937 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:40.937 07:17:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:40.937 07:17:44 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:40.937 07:17:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:40.937 07:17:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:40.937 07:17:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:40.937 07:17:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:40.937 07:17:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:40.937 07:17:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:40.937 07:17:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:40.937 07:17:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:40.937 07:17:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:40.937 07:17:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:40.937 07:17:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:40.937 07:17:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:40.937 07:17:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:40.937 07:17:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:40.937 Found net devices under 0000:86:00.0: cvl_0_0 00:21:40.937 07:17:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:40.937 07:17:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in 
"${pci_devs[@]}" 00:21:40.937 07:17:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:40.937 07:17:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:40.937 07:17:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:40.937 07:17:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:40.937 07:17:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:40.937 07:17:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:40.937 07:17:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:40.937 Found net devices under 0000:86:00.1: cvl_0_1 00:21:40.937 07:17:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:40.937 07:17:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:40.937 07:17:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # is_hw=yes 00:21:40.937 07:17:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:40.937 07:17:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:40.937 07:17:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:40.937 07:17:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:40.937 07:17:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:40.937 07:17:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:40.937 07:17:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:40.937 07:17:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:40.937 07:17:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:40.937 07:17:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:40.937 07:17:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:40.937 07:17:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:40.937 07:17:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:40.937 07:17:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:40.937 07:17:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:40.937 07:17:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:40.937 07:17:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:40.937 07:17:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:40.937 07:17:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:40.937 07:17:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:40.937 07:17:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:40.937 07:17:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:40.937 07:17:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:40.937 07:17:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:40.937 07:17:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:40.937 07:17:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:40.937 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:40.937 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.405 ms 00:21:40.937 00:21:40.937 --- 10.0.0.2 ping statistics --- 00:21:40.937 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:40.937 rtt min/avg/max/mdev = 0.405/0.405/0.405/0.000 ms 00:21:40.937 07:17:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:40.937 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:40.937 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.213 ms 00:21:40.937 00:21:40.937 --- 10.0.0.1 ping statistics --- 00:21:40.937 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:40.937 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:21:40.937 07:17:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:40.937 07:17:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # return 0 00:21:40.937 07:17:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:40.937 07:17:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:40.937 07:17:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:40.937 07:17:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:40.937 07:17:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:40.937 07:17:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:40.937 07:17:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:40.937 07:17:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:21:40.937 07:17:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:40.937 07:17:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:40.937 07:17:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:40.938 07:17:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # nvmfpid=1260742 00:21:40.938 07:17:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:21:40.938 07:17:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # waitforlisten 1260742 00:21:40.938 07:17:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@833 -- # '[' -z 1260742 ']' 00:21:40.938 07:17:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:40.938 07:17:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:40.938 07:17:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:40.938 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:40.938 07:17:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:40.938 07:17:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:40.938 [2024-11-20 07:17:44.822293] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 00:21:40.938 [2024-11-20 07:17:44.822343] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:40.938 [2024-11-20 07:17:44.903520] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:40.938 [2024-11-20 07:17:44.944749] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:40.938 [2024-11-20 07:17:44.944790] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:21:40.938 [2024-11-20 07:17:44.944800] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:40.938 [2024-11-20 07:17:44.944807] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:40.938 [2024-11-20 07:17:44.944812] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:40.938 [2024-11-20 07:17:44.946286] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:40.938 [2024-11-20 07:17:44.946374] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:40.938 [2024-11-20 07:17:44.946376] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:40.938 07:17:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:40.938 07:17:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@866 -- # return 0 00:21:40.938 07:17:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:40.938 07:17:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:40.938 07:17:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:40.938 07:17:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:40.938 07:17:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:40.938 07:17:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:40.938 07:17:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:40.938 [2024-11-20 07:17:45.096619] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:40.938 07:17:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:40.938 07:17:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:40.938 07:17:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:40.938 07:17:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:40.938 Malloc0 00:21:40.938 07:17:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:40.938 07:17:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:40.938 07:17:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:40.938 07:17:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:40.938 07:17:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:40.938 07:17:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:40.938 07:17:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:40.938 07:17:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:40.938 07:17:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:40.938 07:17:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:40.938 07:17:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:40.938 07:17:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:40.938 [2024-11-20 
07:17:45.155550] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:40.938 07:17:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:40.938 07:17:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:40.938 07:17:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:40.938 07:17:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:40.938 [2024-11-20 07:17:45.163474] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:40.938 07:17:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:40.938 07:17:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:40.938 07:17:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:40.938 07:17:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:40.938 Malloc1 00:21:40.938 07:17:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:40.938 07:17:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:21:40.938 07:17:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:40.938 07:17:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:40.938 07:17:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:40.938 07:17:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:21:40.938 07:17:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:40.938 07:17:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:40.938 07:17:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:40.938 07:17:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:21:40.938 07:17:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:40.938 07:17:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:40.938 07:17:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:40.938 07:17:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:21:40.938 07:17:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:40.938 07:17:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:40.938 07:17:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:40.938 07:17:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=1260966 00:21:40.938 07:17:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:21:40.938 07:17:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' 
SIGINT SIGTERM EXIT 00:21:40.938 07:17:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 1260966 /var/tmp/bdevperf.sock 00:21:40.938 07:17:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@833 -- # '[' -z 1260966 ']' 00:21:40.938 07:17:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:40.938 07:17:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:40.938 07:17:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:40.938 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:40.939 07:17:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:40.939 07:17:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:41.197 07:17:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:41.197 07:17:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@866 -- # return 0 00:21:41.197 07:17:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:21:41.197 07:17:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:41.197 07:17:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:41.197 NVMe0n1 00:21:41.197 07:17:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:41.197 07:17:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 
00:21:41.197 07:17:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:41.197 07:17:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:41.197 07:17:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:41.197 07:17:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:41.197 1 00:21:41.197 07:17:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:21:41.197 07:17:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:21:41.197 07:17:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:21:41.197 07:17:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:21:41.197 07:17:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:41.197 07:17:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:21:41.197 07:17:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:41.197 07:17:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:21:41.197 07:17:45 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:41.197 07:17:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:41.456 request: 00:21:41.456 { 00:21:41.456 "name": "NVMe0", 00:21:41.456 "trtype": "tcp", 00:21:41.456 "traddr": "10.0.0.2", 00:21:41.456 "adrfam": "ipv4", 00:21:41.456 "trsvcid": "4420", 00:21:41.456 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:41.456 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:21:41.456 "hostaddr": "10.0.0.1", 00:21:41.456 "prchk_reftag": false, 00:21:41.456 "prchk_guard": false, 00:21:41.456 "hdgst": false, 00:21:41.456 "ddgst": false, 00:21:41.456 "allow_unrecognized_csi": false, 00:21:41.456 "method": "bdev_nvme_attach_controller", 00:21:41.456 "req_id": 1 00:21:41.456 } 00:21:41.456 Got JSON-RPC error response 00:21:41.456 response: 00:21:41.456 { 00:21:41.456 "code": -114, 00:21:41.456 "message": "A controller named NVMe0 already exists with the specified network path" 00:21:41.456 } 00:21:41.456 07:17:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:21:41.456 07:17:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:21:41.456 07:17:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:41.456 07:17:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:41.456 07:17:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:41.456 07:17:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:21:41.456 07:17:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:21:41.456 07:17:45 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:21:41.456 07:17:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:21:41.456 07:17:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:41.456 07:17:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:21:41.456 07:17:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:41.456 07:17:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:21:41.456 07:17:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:41.456 07:17:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:41.456 request: 00:21:41.456 { 00:21:41.456 "name": "NVMe0", 00:21:41.456 "trtype": "tcp", 00:21:41.456 "traddr": "10.0.0.2", 00:21:41.456 "adrfam": "ipv4", 00:21:41.456 "trsvcid": "4420", 00:21:41.456 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:41.456 "hostaddr": "10.0.0.1", 00:21:41.456 "prchk_reftag": false, 00:21:41.456 "prchk_guard": false, 00:21:41.456 "hdgst": false, 00:21:41.456 "ddgst": false, 00:21:41.456 "allow_unrecognized_csi": false, 00:21:41.457 "method": "bdev_nvme_attach_controller", 00:21:41.457 "req_id": 1 00:21:41.457 } 00:21:41.457 Got JSON-RPC error response 00:21:41.457 response: 00:21:41.457 { 00:21:41.457 "code": -114, 00:21:41.457 "message": "A controller named NVMe0 already exists with the specified network path" 00:21:41.457 } 00:21:41.457 07:17:45 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:21:41.457 07:17:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:21:41.457 07:17:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:41.457 07:17:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:41.457 07:17:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:41.457 07:17:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:21:41.457 07:17:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:21:41.457 07:17:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:21:41.457 07:17:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:21:41.457 07:17:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:41.457 07:17:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:21:41.457 07:17:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:41.457 07:17:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:21:41.457 07:17:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:21:41.457 07:17:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:41.457 request: 00:21:41.457 { 00:21:41.457 "name": "NVMe0", 00:21:41.457 "trtype": "tcp", 00:21:41.457 "traddr": "10.0.0.2", 00:21:41.457 "adrfam": "ipv4", 00:21:41.457 "trsvcid": "4420", 00:21:41.457 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:41.457 "hostaddr": "10.0.0.1", 00:21:41.457 "prchk_reftag": false, 00:21:41.457 "prchk_guard": false, 00:21:41.457 "hdgst": false, 00:21:41.457 "ddgst": false, 00:21:41.457 "multipath": "disable", 00:21:41.457 "allow_unrecognized_csi": false, 00:21:41.457 "method": "bdev_nvme_attach_controller", 00:21:41.457 "req_id": 1 00:21:41.457 } 00:21:41.457 Got JSON-RPC error response 00:21:41.457 response: 00:21:41.457 { 00:21:41.457 "code": -114, 00:21:41.457 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:21:41.457 } 00:21:41.457 07:17:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:21:41.457 07:17:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:21:41.457 07:17:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:41.457 07:17:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:41.457 07:17:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:41.457 07:17:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:21:41.457 07:17:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:21:41.457 07:17:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # 
valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:21:41.457 07:17:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:21:41.457 07:17:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:41.457 07:17:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:21:41.457 07:17:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:41.457 07:17:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:21:41.457 07:17:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:41.457 07:17:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:41.457 request: 00:21:41.457 { 00:21:41.457 "name": "NVMe0", 00:21:41.457 "trtype": "tcp", 00:21:41.457 "traddr": "10.0.0.2", 00:21:41.457 "adrfam": "ipv4", 00:21:41.457 "trsvcid": "4420", 00:21:41.457 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:41.457 "hostaddr": "10.0.0.1", 00:21:41.457 "prchk_reftag": false, 00:21:41.457 "prchk_guard": false, 00:21:41.457 "hdgst": false, 00:21:41.457 "ddgst": false, 00:21:41.457 "multipath": "failover", 00:21:41.457 "allow_unrecognized_csi": false, 00:21:41.457 "method": "bdev_nvme_attach_controller", 00:21:41.457 "req_id": 1 00:21:41.457 } 00:21:41.457 Got JSON-RPC error response 00:21:41.457 response: 00:21:41.457 { 00:21:41.457 "code": -114, 00:21:41.457 "message": "A controller named NVMe0 already exists with the specified network path" 00:21:41.457 } 00:21:41.457 07:17:45 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:21:41.457 07:17:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:21:41.457 07:17:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:41.457 07:17:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:41.457 07:17:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:41.457 07:17:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:41.457 07:17:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:41.457 07:17:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:41.716 NVMe0n1 00:21:41.716 07:17:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:41.716 07:17:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:41.716 07:17:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:41.716 07:17:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:41.716 07:17:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:41.716 07:17:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:21:41.716 07:17:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:21:41.716 07:17:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:41.975 00:21:41.975 07:17:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:41.975 07:17:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:41.975 07:17:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:21:41.975 07:17:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:41.975 07:17:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:41.975 07:17:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:41.975 07:17:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:21:41.975 07:17:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:42.986 { 00:21:42.986 "results": [ 00:21:42.986 { 00:21:42.986 "job": "NVMe0n1", 00:21:42.986 "core_mask": "0x1", 00:21:42.986 "workload": "write", 00:21:42.986 "status": "finished", 00:21:42.986 "queue_depth": 128, 00:21:42.986 "io_size": 4096, 00:21:42.986 "runtime": 1.003119, 00:21:42.986 "iops": 24514.539152383717, 00:21:42.986 "mibps": 95.7599185639989, 00:21:42.986 "io_failed": 0, 00:21:42.986 "io_timeout": 0, 00:21:42.986 "avg_latency_us": 5214.584226891068, 00:21:42.986 "min_latency_us": 3162.824347826087, 00:21:42.986 "max_latency_us": 11511.540869565217 00:21:42.986 } 00:21:42.986 ], 00:21:42.986 "core_count": 1 00:21:42.986 } 00:21:42.986 07:17:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:21:42.986 07:17:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.986 07:17:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:42.986 07:17:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.986 07:17:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]] 00:21:42.986 07:17:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 1260966 00:21:42.986 07:17:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@952 -- # '[' -z 1260966 ']' 00:21:42.986 07:17:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # kill -0 1260966 00:21:42.986 07:17:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@957 -- # uname 00:21:42.986 07:17:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:42.986 07:17:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1260966 00:21:42.986 07:17:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:21:42.986 07:17:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:21:42.986 07:17:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1260966' 00:21:42.986 killing process with pid 1260966 00:21:42.986 07:17:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@971 -- # kill 1260966 00:21:42.986 07:17:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@976 -- # wait 1260966 00:21:43.245 07:17:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:43.245 07:17:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:43.245 07:17:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:43.245 07:17:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:43.245 07:17:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:21:43.245 07:17:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:43.245 07:17:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:43.245 07:17:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:43.246 07:17:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:21:43.246 07:17:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:21:43.246 07:17:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1597 -- # read -r file 00:21:43.246 07:17:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1596 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:21:43.246 07:17:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1596 -- # sort -u 00:21:43.246 07:17:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # cat 00:21:43.246 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:21:43.246 [2024-11-20 07:17:45.270006] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 
00:21:43.246 [2024-11-20 07:17:45.270053] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1260966 ] 00:21:43.246 [2024-11-20 07:17:45.346657] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:43.246 [2024-11-20 07:17:45.387974] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:43.246 [2024-11-20 07:17:46.282492] bdev.c:4756:bdev_name_add: *ERROR*: Bdev name ebf9df52-95d6-4b43-ae74-b08dc829643d already exists 00:21:43.246 [2024-11-20 07:17:46.282523] bdev.c:7965:bdev_register: *ERROR*: Unable to add uuid:ebf9df52-95d6-4b43-ae74-b08dc829643d alias for bdev NVMe1n1 00:21:43.246 [2024-11-20 07:17:46.282531] bdev_nvme.c:4658:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:21:43.246 Running I/O for 1 seconds... 00:21:43.246 24463.00 IOPS, 95.56 MiB/s 00:21:43.246 Latency(us) 00:21:43.246 [2024-11-20T06:17:47.802Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:43.246 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:21:43.246 NVMe0n1 : 1.00 24514.54 95.76 0.00 0.00 5214.58 3162.82 11511.54 00:21:43.246 [2024-11-20T06:17:47.802Z] =================================================================================================================== 00:21:43.246 [2024-11-20T06:17:47.802Z] Total : 24514.54 95.76 0.00 0.00 5214.58 3162.82 11511.54 00:21:43.246 Received shutdown signal, test time was about 1.000000 seconds 00:21:43.246 00:21:43.246 Latency(us) 00:21:43.246 [2024-11-20T06:17:47.802Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:43.246 [2024-11-20T06:17:47.802Z] =================================================================================================================== 00:21:43.246 [2024-11-20T06:17:47.802Z] Total : 0.00 0.00 0.00 
0.00 0.00 0.00 0.00 00:21:43.246 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:21:43.246 07:17:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1603 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:21:43.246 07:17:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1597 -- # read -r file 00:21:43.246 07:17:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:21:43.246 07:17:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:43.246 07:17:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:21:43.246 07:17:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:43.246 07:17:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:21:43.246 07:17:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:43.246 07:17:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:43.246 rmmod nvme_tcp 00:21:43.246 rmmod nvme_fabrics 00:21:43.246 rmmod nvme_keyring 00:21:43.246 07:17:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:43.246 07:17:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:21:43.246 07:17:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 00:21:43.246 07:17:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@517 -- # '[' -n 1260742 ']' 00:21:43.246 07:17:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # killprocess 1260742 00:21:43.246 07:17:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@952 -- # '[' -z 1260742 ']' 00:21:43.246 07:17:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # kill -0 1260742 
00:21:43.246 07:17:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@957 -- # uname 00:21:43.246 07:17:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:43.246 07:17:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1260742 00:21:43.505 07:17:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:21:43.505 07:17:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:21:43.505 07:17:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1260742' 00:21:43.505 killing process with pid 1260742 00:21:43.505 07:17:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@971 -- # kill 1260742 00:21:43.505 07:17:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@976 -- # wait 1260742 00:21:43.505 07:17:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:43.505 07:17:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:43.505 07:17:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:43.505 07:17:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:21:43.505 07:17:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-save 00:21:43.505 07:17:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:43.505 07:17:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-restore 00:21:43.505 07:17:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:43.505 07:17:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:21:43.505 07:17:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:43.505 07:17:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:43.505 07:17:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:46.041 07:17:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:46.041 00:21:46.041 real 0m11.450s 00:21:46.041 user 0m13.437s 00:21:46.041 sys 0m5.215s 00:21:46.041 07:17:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:46.041 07:17:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:46.041 ************************************ 00:21:46.041 END TEST nvmf_multicontroller 00:21:46.041 ************************************ 00:21:46.041 07:17:50 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:21:46.041 07:17:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:21:46.041 07:17:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:21:46.041 07:17:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:46.041 ************************************ 00:21:46.041 START TEST nvmf_aer 00:21:46.041 ************************************ 00:21:46.041 07:17:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:21:46.041 * Looking for test storage... 
00:21:46.041 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:46.041 07:17:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:21:46.041 07:17:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1691 -- # lcov --version 00:21:46.041 07:17:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:21:46.041 07:17:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:21:46.041 07:17:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:46.041 07:17:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:46.041 07:17:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:46.041 07:17:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:21:46.041 07:17:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:21:46.041 07:17:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:21:46.041 07:17:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:21:46.042 07:17:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:21:46.042 07:17:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:21:46.042 07:17:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:21:46.042 07:17:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:46.042 07:17:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:21:46.042 07:17:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:21:46.042 07:17:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:46.042 07:17:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:46.042 07:17:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:21:46.042 07:17:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:21:46.042 07:17:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:46.042 07:17:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:21:46.042 07:17:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:21:46.042 07:17:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:21:46.042 07:17:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:21:46.042 07:17:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:46.042 07:17:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:21:46.042 07:17:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:21:46.042 07:17:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:46.042 07:17:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:46.042 07:17:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:21:46.042 07:17:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:46.042 07:17:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:21:46.042 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:46.042 --rc genhtml_branch_coverage=1 00:21:46.042 --rc genhtml_function_coverage=1 00:21:46.042 --rc genhtml_legend=1 00:21:46.042 --rc geninfo_all_blocks=1 00:21:46.042 --rc geninfo_unexecuted_blocks=1 00:21:46.042 00:21:46.042 ' 00:21:46.042 07:17:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:21:46.042 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:46.042 --rc 
genhtml_branch_coverage=1 00:21:46.042 --rc genhtml_function_coverage=1 00:21:46.042 --rc genhtml_legend=1 00:21:46.042 --rc geninfo_all_blocks=1 00:21:46.042 --rc geninfo_unexecuted_blocks=1 00:21:46.042 00:21:46.042 ' 00:21:46.042 07:17:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:21:46.042 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:46.042 --rc genhtml_branch_coverage=1 00:21:46.042 --rc genhtml_function_coverage=1 00:21:46.042 --rc genhtml_legend=1 00:21:46.042 --rc geninfo_all_blocks=1 00:21:46.042 --rc geninfo_unexecuted_blocks=1 00:21:46.042 00:21:46.042 ' 00:21:46.042 07:17:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:21:46.042 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:46.042 --rc genhtml_branch_coverage=1 00:21:46.042 --rc genhtml_function_coverage=1 00:21:46.042 --rc genhtml_legend=1 00:21:46.042 --rc geninfo_all_blocks=1 00:21:46.042 --rc geninfo_unexecuted_blocks=1 00:21:46.042 00:21:46.042 ' 00:21:46.042 07:17:50 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:46.042 07:17:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:21:46.042 07:17:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:46.042 07:17:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:46.042 07:17:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:46.042 07:17:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:46.042 07:17:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:46.042 07:17:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:46.042 07:17:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:46.042 07:17:50 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:46.042 07:17:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:46.042 07:17:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:46.042 07:17:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:46.042 07:17:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:46.042 07:17:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:46.042 07:17:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:46.042 07:17:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:46.042 07:17:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:46.042 07:17:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:46.042 07:17:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:21:46.042 07:17:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:46.042 07:17:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:46.042 07:17:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:46.042 07:17:50 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:46.042 07:17:50 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:46.042 07:17:50 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:46.042 07:17:50 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 
00:21:46.042 07:17:50 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:46.042 07:17:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:21:46.042 07:17:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:46.042 07:17:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:46.042 07:17:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:46.042 07:17:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:46.042 07:17:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:46.042 07:17:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:46.042 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:46.042 07:17:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:46.042 07:17:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:46.042 07:17:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:46.042 07:17:50 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:21:46.042 07:17:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:46.043 07:17:50 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:46.043 07:17:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:46.043 07:17:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:46.043 07:17:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:46.043 07:17:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:46.043 07:17:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:46.043 07:17:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:46.043 07:17:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:46.043 07:17:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:46.043 07:17:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:21:46.043 07:17:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:52.612 07:17:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:52.612 07:17:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:21:52.612 07:17:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:52.612 07:17:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:52.612 07:17:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:52.612 07:17:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:52.612 07:17:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:52.612 07:17:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:21:52.612 07:17:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:52.612 07:17:55 nvmf_tcp.nvmf_host.nvmf_aer 
-- nvmf/common.sh@320 -- # e810=() 00:21:52.612 07:17:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:21:52.613 07:17:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:21:52.613 07:17:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:21:52.613 07:17:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:21:52.613 07:17:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:21:52.613 07:17:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:52.613 07:17:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:52.613 07:17:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:52.613 07:17:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:52.613 07:17:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:52.613 07:17:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:52.613 07:17:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:52.613 07:17:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:52.613 07:17:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:52.613 07:17:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:52.613 07:17:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:52.613 07:17:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:52.613 07:17:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- 
# pci_devs+=("${e810[@]}") 00:21:52.613 07:17:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:52.613 07:17:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:52.613 07:17:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:52.613 07:17:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:52.613 07:17:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:52.613 07:17:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:52.613 07:17:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:52.613 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:52.613 07:17:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:52.613 07:17:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:52.613 07:17:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:52.613 07:17:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:52.613 07:17:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:52.613 07:17:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:52.613 07:17:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:52.613 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:52.613 07:17:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:52.613 07:17:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:52.613 07:17:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:52.613 07:17:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:52.613 07:17:55 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:52.613 07:17:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:52.613 07:17:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:52.613 07:17:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:52.613 07:17:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:52.613 07:17:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:52.613 07:17:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:52.613 07:17:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:52.613 07:17:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:52.613 07:17:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:52.613 07:17:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:52.613 07:17:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:52.613 Found net devices under 0000:86:00.0: cvl_0_0 00:21:52.613 07:17:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:52.613 07:17:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:52.613 07:17:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:52.613 07:17:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:52.613 07:17:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:52.613 07:17:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:52.613 07:17:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # 
(( 1 == 0 )) 00:21:52.613 07:17:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:52.613 07:17:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:52.613 Found net devices under 0000:86:00.1: cvl_0_1 00:21:52.613 07:17:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:52.613 07:17:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:52.613 07:17:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # is_hw=yes 00:21:52.613 07:17:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:52.613 07:17:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:52.613 07:17:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:52.613 07:17:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:52.613 07:17:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:52.613 07:17:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:52.613 07:17:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:52.613 07:17:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:52.613 07:17:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:52.613 07:17:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:52.613 07:17:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:52.613 07:17:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:52.613 07:17:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:52.613 07:17:56 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:52.613 07:17:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:52.613 07:17:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:52.613 07:17:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:52.613 07:17:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:52.613 07:17:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:52.613 07:17:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:52.613 07:17:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:52.613 07:17:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:52.614 07:17:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:52.614 07:17:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:52.614 07:17:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:52.614 07:17:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:52.614 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:52.614 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.465 ms 00:21:52.614 00:21:52.614 --- 10.0.0.2 ping statistics --- 00:21:52.614 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:52.614 rtt min/avg/max/mdev = 0.465/0.465/0.465/0.000 ms 00:21:52.614 07:17:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:52.614 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:52.614 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.243 ms 00:21:52.614 00:21:52.614 --- 10.0.0.1 ping statistics --- 00:21:52.614 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:52.614 rtt min/avg/max/mdev = 0.243/0.243/0.243/0.000 ms 00:21:52.614 07:17:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:52.614 07:17:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # return 0 00:21:52.614 07:17:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:52.614 07:17:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:52.614 07:17:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:52.614 07:17:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:52.614 07:17:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:52.614 07:17:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:52.614 07:17:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:52.614 07:17:56 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:21:52.614 07:17:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:52.614 07:17:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:52.614 07:17:56 nvmf_tcp.nvmf_host.nvmf_aer -- 
common/autotest_common.sh@10 -- # set +x 00:21:52.614 07:17:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=1264758 00:21:52.614 07:17:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 1264758 00:21:52.614 07:17:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:52.614 07:17:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@833 -- # '[' -z 1264758 ']' 00:21:52.614 07:17:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:52.614 07:17:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:52.614 07:17:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:52.614 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:52.614 07:17:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:52.614 07:17:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:52.614 [2024-11-20 07:17:56.335177] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 00:21:52.614 [2024-11-20 07:17:56.335226] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:52.614 [2024-11-20 07:17:56.413910] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:52.614 [2024-11-20 07:17:56.457571] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:21:52.614 [2024-11-20 07:17:56.457609] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:52.614 [2024-11-20 07:17:56.457616] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:52.614 [2024-11-20 07:17:56.457624] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:52.614 [2024-11-20 07:17:56.457629] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:52.614 [2024-11-20 07:17:56.459108] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:52.614 [2024-11-20 07:17:56.459225] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:52.614 [2024-11-20 07:17:56.459312] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:52.614 [2024-11-20 07:17:56.459313] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:52.614 07:17:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:52.614 07:17:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@866 -- # return 0 00:21:52.614 07:17:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:52.614 07:17:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:52.614 07:17:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:52.614 07:17:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:52.614 07:17:56 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:52.614 07:17:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:52.614 07:17:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:52.614 [2024-11-20 07:17:56.598655] 
tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:52.614 07:17:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:52.614 07:17:56 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:21:52.614 07:17:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:52.614 07:17:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:52.614 Malloc0 00:21:52.614 07:17:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:52.614 07:17:56 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:21:52.614 07:17:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:52.614 07:17:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:52.614 07:17:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:52.614 07:17:56 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:52.614 07:17:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:52.614 07:17:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:52.614 07:17:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:52.614 07:17:56 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:52.614 07:17:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:52.614 07:17:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:52.614 [2024-11-20 07:17:56.665369] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
00:21:52.614 07:17:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:52.614 07:17:56 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:21:52.614 07:17:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:52.614 07:17:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:52.614 [ 00:21:52.614 { 00:21:52.614 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:21:52.614 "subtype": "Discovery", 00:21:52.614 "listen_addresses": [], 00:21:52.614 "allow_any_host": true, 00:21:52.614 "hosts": [] 00:21:52.614 }, 00:21:52.614 { 00:21:52.614 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:52.614 "subtype": "NVMe", 00:21:52.614 "listen_addresses": [ 00:21:52.614 { 00:21:52.614 "trtype": "TCP", 00:21:52.614 "adrfam": "IPv4", 00:21:52.614 "traddr": "10.0.0.2", 00:21:52.614 "trsvcid": "4420" 00:21:52.614 } 00:21:52.614 ], 00:21:52.614 "allow_any_host": true, 00:21:52.614 "hosts": [], 00:21:52.614 "serial_number": "SPDK00000000000001", 00:21:52.614 "model_number": "SPDK bdev Controller", 00:21:52.614 "max_namespaces": 2, 00:21:52.614 "min_cntlid": 1, 00:21:52.614 "max_cntlid": 65519, 00:21:52.614 "namespaces": [ 00:21:52.614 { 00:21:52.614 "nsid": 1, 00:21:52.614 "bdev_name": "Malloc0", 00:21:52.614 "name": "Malloc0", 00:21:52.614 "nguid": "FD850D09FB1C496184972DA23E62EAA9", 00:21:52.614 "uuid": "fd850d09-fb1c-4961-8497-2da23e62eaa9" 00:21:52.614 } 00:21:52.614 ] 00:21:52.614 } 00:21:52.614 ] 00:21:52.615 07:17:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:52.615 07:17:56 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:21:52.615 07:17:56 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:21:52.615 07:17:56 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=1264986 00:21:52.615 07:17:56 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:21:52.615 07:17:56 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:21:52.615 07:17:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # local i=0 00:21:52.615 07:17:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:52.615 07:17:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # '[' 0 -lt 200 ']' 00:21:52.615 07:17:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # i=1 00:21:52.615 07:17:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # sleep 0.1 00:21:52.615 07:17:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:52.615 07:17:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # '[' 1 -lt 200 ']' 00:21:52.615 07:17:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # i=2 00:21:52.615 07:17:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # sleep 0.1 00:21:52.615 07:17:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:52.615 07:17:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1274 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:21:52.615 07:17:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1278 -- # return 0 00:21:52.615 07:17:56 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:21:52.615 07:17:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:52.615 07:17:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:52.615 Malloc1 00:21:52.615 07:17:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:52.615 07:17:56 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:21:52.615 07:17:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:52.615 07:17:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:52.615 07:17:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:52.615 07:17:56 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:21:52.615 07:17:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:52.615 07:17:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:52.615 Asynchronous Event Request test 00:21:52.615 Attaching to 10.0.0.2 00:21:52.615 Attached to 10.0.0.2 00:21:52.615 Registering asynchronous event callbacks... 00:21:52.615 Starting namespace attribute notice tests for all controllers... 00:21:52.615 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:21:52.615 aer_cb - Changed Namespace 00:21:52.615 Cleaning up... 
00:21:52.615 [ 00:21:52.615 { 00:21:52.615 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:21:52.615 "subtype": "Discovery", 00:21:52.615 "listen_addresses": [], 00:21:52.615 "allow_any_host": true, 00:21:52.615 "hosts": [] 00:21:52.615 }, 00:21:52.615 { 00:21:52.615 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:52.615 "subtype": "NVMe", 00:21:52.615 "listen_addresses": [ 00:21:52.615 { 00:21:52.615 "trtype": "TCP", 00:21:52.615 "adrfam": "IPv4", 00:21:52.615 "traddr": "10.0.0.2", 00:21:52.615 "trsvcid": "4420" 00:21:52.615 } 00:21:52.615 ], 00:21:52.615 "allow_any_host": true, 00:21:52.615 "hosts": [], 00:21:52.615 "serial_number": "SPDK00000000000001", 00:21:52.615 "model_number": "SPDK bdev Controller", 00:21:52.615 "max_namespaces": 2, 00:21:52.615 "min_cntlid": 1, 00:21:52.615 "max_cntlid": 65519, 00:21:52.615 "namespaces": [ 00:21:52.615 { 00:21:52.615 "nsid": 1, 00:21:52.615 "bdev_name": "Malloc0", 00:21:52.615 "name": "Malloc0", 00:21:52.615 "nguid": "FD850D09FB1C496184972DA23E62EAA9", 00:21:52.615 "uuid": "fd850d09-fb1c-4961-8497-2da23e62eaa9" 00:21:52.615 }, 00:21:52.615 { 00:21:52.615 "nsid": 2, 00:21:52.615 "bdev_name": "Malloc1", 00:21:52.615 "name": "Malloc1", 00:21:52.615 "nguid": "729D1F4CDAA34B73AA667C4F7F921CDA", 00:21:52.615 "uuid": "729d1f4c-daa3-4b73-aa66-7c4f7f921cda" 00:21:52.615 } 00:21:52.615 ] 00:21:52.615 } 00:21:52.615 ] 00:21:52.615 07:17:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:52.615 07:17:56 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 1264986 00:21:52.615 07:17:56 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:21:52.615 07:17:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:52.615 07:17:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:52.615 07:17:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:52.615 07:17:56 
nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:21:52.615 07:17:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:52.615 07:17:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:52.615 07:17:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:52.615 07:17:57 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:52.615 07:17:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:52.615 07:17:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:52.615 07:17:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:52.615 07:17:57 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:21:52.615 07:17:57 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:21:52.615 07:17:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:52.615 07:17:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:21:52.615 07:17:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:52.615 07:17:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:21:52.615 07:17:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:52.615 07:17:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:52.615 rmmod nvme_tcp 00:21:52.615 rmmod nvme_fabrics 00:21:52.615 rmmod nvme_keyring 00:21:52.615 07:17:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:52.615 07:17:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:21:52.615 07:17:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:21:52.615 07:17:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 
1264758 ']' 00:21:52.615 07:17:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 1264758 00:21:52.615 07:17:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@952 -- # '[' -z 1264758 ']' 00:21:52.615 07:17:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # kill -0 1264758 00:21:52.615 07:17:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@957 -- # uname 00:21:52.615 07:17:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:52.615 07:17:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1264758 00:21:52.615 07:17:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:21:52.615 07:17:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:21:52.615 07:17:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1264758' 00:21:52.615 killing process with pid 1264758 00:21:52.615 07:17:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@971 -- # kill 1264758 00:21:52.615 07:17:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@976 -- # wait 1264758 00:21:52.875 07:17:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:52.875 07:17:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:52.875 07:17:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:52.875 07:17:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:21:52.875 07:17:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-save 00:21:52.875 07:17:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:52.875 07:17:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-restore 00:21:52.875 07:17:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:52.875 07:17:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:52.875 07:17:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:52.875 07:17:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:52.875 07:17:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:55.436 07:17:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:55.436 00:21:55.436 real 0m9.221s 00:21:55.436 user 0m5.090s 00:21:55.436 sys 0m4.882s 00:21:55.436 07:17:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:55.436 07:17:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:55.436 ************************************ 00:21:55.436 END TEST nvmf_aer 00:21:55.436 ************************************ 00:21:55.436 07:17:59 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:21:55.436 07:17:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:21:55.436 07:17:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:21:55.436 07:17:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:55.436 ************************************ 00:21:55.436 START TEST nvmf_async_init 00:21:55.436 ************************************ 00:21:55.436 07:17:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:21:55.436 * Looking for test storage... 
00:21:55.436 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:55.436 07:17:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:21:55.436 07:17:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1691 -- # lcov --version 00:21:55.436 07:17:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:21:55.436 07:17:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:21:55.436 07:17:59 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:55.436 07:17:59 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:55.436 07:17:59 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:55.436 07:17:59 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:21:55.436 07:17:59 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:21:55.436 07:17:59 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:21:55.436 07:17:59 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:21:55.436 07:17:59 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:21:55.436 07:17:59 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:21:55.436 07:17:59 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:21:55.436 07:17:59 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:55.436 07:17:59 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:21:55.436 07:17:59 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:21:55.436 07:17:59 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:55.436 07:17:59 
nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:55.436 07:17:59 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:21:55.436 07:17:59 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:21:55.436 07:17:59 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:55.436 07:17:59 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:21:55.436 07:17:59 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:21:55.436 07:17:59 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:21:55.436 07:17:59 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:21:55.436 07:17:59 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:55.436 07:17:59 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:21:55.436 07:17:59 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:21:55.436 07:17:59 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:55.436 07:17:59 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:55.436 07:17:59 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:21:55.436 07:17:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:55.436 07:17:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:21:55.436 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:55.436 --rc genhtml_branch_coverage=1 00:21:55.436 --rc genhtml_function_coverage=1 00:21:55.436 --rc genhtml_legend=1 00:21:55.436 --rc geninfo_all_blocks=1 00:21:55.436 --rc geninfo_unexecuted_blocks=1 00:21:55.436 
00:21:55.436 ' 00:21:55.436 07:17:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:21:55.436 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:55.436 --rc genhtml_branch_coverage=1 00:21:55.436 --rc genhtml_function_coverage=1 00:21:55.436 --rc genhtml_legend=1 00:21:55.436 --rc geninfo_all_blocks=1 00:21:55.436 --rc geninfo_unexecuted_blocks=1 00:21:55.436 00:21:55.436 ' 00:21:55.436 07:17:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:21:55.436 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:55.436 --rc genhtml_branch_coverage=1 00:21:55.436 --rc genhtml_function_coverage=1 00:21:55.436 --rc genhtml_legend=1 00:21:55.436 --rc geninfo_all_blocks=1 00:21:55.436 --rc geninfo_unexecuted_blocks=1 00:21:55.436 00:21:55.436 ' 00:21:55.436 07:17:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:21:55.436 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:55.436 --rc genhtml_branch_coverage=1 00:21:55.436 --rc genhtml_function_coverage=1 00:21:55.436 --rc genhtml_legend=1 00:21:55.436 --rc geninfo_all_blocks=1 00:21:55.436 --rc geninfo_unexecuted_blocks=1 00:21:55.436 00:21:55.436 ' 00:21:55.436 07:17:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:55.436 07:17:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:21:55.436 07:17:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:55.437 07:17:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:55.437 07:17:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:55.437 07:17:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:55.437 07:17:59 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:55.437 07:17:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:55.437 07:17:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:55.437 07:17:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:55.437 07:17:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:55.437 07:17:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:55.437 07:17:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:55.437 07:17:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:55.437 07:17:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:55.437 07:17:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:55.437 07:17:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:55.437 07:17:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:55.437 07:17:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:55.437 07:17:59 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:21:55.437 07:17:59 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:55.437 07:17:59 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:55.437 07:17:59 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
00:21:55.437 07:17:59 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:55.437 07:17:59 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:55.437 07:17:59 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:55.437 07:17:59 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:21:55.437 07:17:59 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:55.437 07:17:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:21:55.437 07:17:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:55.437 07:17:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:55.437 07:17:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:55.437 07:17:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:55.437 07:17:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:21:55.437 07:17:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:55.437 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:55.437 07:17:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:55.437 07:17:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:55.437 07:17:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:55.437 07:17:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:21:55.437 07:17:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:21:55.437 07:17:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:21:55.437 07:17:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:21:55.437 07:17:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:21:55.437 07:17:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:21:55.437 07:17:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=a29c787487dd4fef9b944599b1d0f1b3 00:21:55.437 07:17:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:21:55.437 07:17:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:55.437 07:17:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:55.437 07:17:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:55.437 07:17:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:55.437 07:17:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:55.437 07:17:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:21:55.437 07:17:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:55.437 07:17:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:55.437 07:17:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:55.437 07:17:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:55.437 07:17:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:21:55.437 07:17:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:02.035 07:18:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:02.035 07:18:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:22:02.035 07:18:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:02.035 07:18:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:02.035 07:18:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:02.035 07:18:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:02.035 07:18:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:02.035 07:18:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:22:02.035 07:18:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:02.035 07:18:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:22:02.035 07:18:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:22:02.035 07:18:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:22:02.035 07:18:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- 
# local -ga x722 00:22:02.035 07:18:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:22:02.035 07:18:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:22:02.035 07:18:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:02.035 07:18:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:02.035 07:18:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:02.035 07:18:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:02.035 07:18:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:02.035 07:18:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:02.035 07:18:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:02.035 07:18:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:02.035 07:18:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:02.035 07:18:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:02.035 07:18:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:02.035 07:18:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:02.035 07:18:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:02.035 07:18:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:02.035 07:18:05 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:02.035 07:18:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:02.035 07:18:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:02.035 07:18:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:02.035 07:18:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:02.035 07:18:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:02.035 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:02.035 07:18:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:02.035 07:18:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:02.035 07:18:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:02.035 07:18:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:02.035 07:18:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:02.035 07:18:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:02.035 07:18:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:02.035 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:02.035 07:18:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:02.035 07:18:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:02.035 07:18:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:02.035 07:18:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:02.035 07:18:05 nvmf_tcp.nvmf_host.nvmf_async_init 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:02.035 07:18:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:02.035 07:18:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:02.035 07:18:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:02.035 07:18:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:02.035 07:18:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:02.035 07:18:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:02.035 07:18:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:02.035 07:18:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:02.035 07:18:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:02.035 07:18:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:02.036 07:18:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:02.036 Found net devices under 0000:86:00.0: cvl_0_0 00:22:02.036 07:18:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:02.036 07:18:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:02.036 07:18:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:02.036 07:18:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:02.036 07:18:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:02.036 07:18:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ 
up == up ]] 00:22:02.036 07:18:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:02.036 07:18:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:02.036 07:18:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:02.036 Found net devices under 0000:86:00.1: cvl_0_1 00:22:02.036 07:18:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:02.036 07:18:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:02.036 07:18:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # is_hw=yes 00:22:02.036 07:18:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:02.036 07:18:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:02.036 07:18:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:02.036 07:18:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:02.036 07:18:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:02.036 07:18:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:02.036 07:18:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:02.036 07:18:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:02.036 07:18:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:02.036 07:18:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:02.036 07:18:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:02.036 07:18:05 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:02.036 07:18:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:02.036 07:18:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:02.036 07:18:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:02.036 07:18:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:02.036 07:18:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:02.036 07:18:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:02.036 07:18:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:02.036 07:18:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:02.036 07:18:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:02.036 07:18:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:02.036 07:18:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:02.036 07:18:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:02.036 07:18:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:02.036 07:18:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:02.036 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:22:02.036 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.376 ms 00:22:02.036 00:22:02.036 --- 10.0.0.2 ping statistics --- 00:22:02.036 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:02.036 rtt min/avg/max/mdev = 0.376/0.376/0.376/0.000 ms 00:22:02.036 07:18:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:02.036 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:02.036 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.188 ms 00:22:02.036 00:22:02.036 --- 10.0.0.1 ping statistics --- 00:22:02.036 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:02.036 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms 00:22:02.036 07:18:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:02.036 07:18:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # return 0 00:22:02.036 07:18:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:02.036 07:18:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:02.036 07:18:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:02.036 07:18:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:02.036 07:18:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:02.036 07:18:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:02.036 07:18:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:02.036 07:18:05 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:22:02.036 07:18:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:02.036 07:18:05 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@724 -- # xtrace_disable 00:22:02.036 07:18:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:02.036 07:18:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=1268642 00:22:02.036 07:18:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:22:02.036 07:18:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 1268642 00:22:02.036 07:18:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@833 -- # '[' -z 1268642 ']' 00:22:02.036 07:18:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:02.036 07:18:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:02.036 07:18:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:02.036 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:02.036 07:18:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:02.036 07:18:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:02.036 [2024-11-20 07:18:05.665696] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 
00:22:02.036 [2024-11-20 07:18:05.665740] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:02.036 [2024-11-20 07:18:05.745446] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:02.036 [2024-11-20 07:18:05.786276] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:02.036 [2024-11-20 07:18:05.786313] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:02.036 [2024-11-20 07:18:05.786320] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:02.036 [2024-11-20 07:18:05.786326] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:02.037 [2024-11-20 07:18:05.786331] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:02.037 [2024-11-20 07:18:05.786906] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:02.037 07:18:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:02.037 07:18:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@866 -- # return 0 00:22:02.037 07:18:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:02.037 07:18:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:02.037 07:18:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:02.037 07:18:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:02.037 07:18:05 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:22:02.037 07:18:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:02.037 07:18:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:02.037 [2024-11-20 07:18:05.923960] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:02.037 07:18:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:02.037 07:18:05 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:22:02.037 07:18:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:02.037 07:18:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:02.037 null0 00:22:02.037 07:18:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:02.037 07:18:05 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:22:02.037 07:18:05 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:22:02.037 07:18:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:02.037 07:18:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:02.037 07:18:05 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:22:02.037 07:18:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:02.037 07:18:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:02.037 07:18:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:02.037 07:18:05 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g a29c787487dd4fef9b944599b1d0f1b3 00:22:02.037 07:18:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:02.037 07:18:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:02.037 07:18:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:02.037 07:18:05 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:02.037 07:18:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:02.037 07:18:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:02.037 [2024-11-20 07:18:05.976220] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:02.037 07:18:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:02.037 07:18:05 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:22:02.037 07:18:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:02.037 07:18:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:02.037 nvme0n1 00:22:02.037 07:18:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:02.037 07:18:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:22:02.037 07:18:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:02.037 07:18:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:02.037 [ 00:22:02.037 { 00:22:02.037 "name": "nvme0n1", 00:22:02.037 "aliases": [ 00:22:02.037 "a29c7874-87dd-4fef-9b94-4599b1d0f1b3" 00:22:02.037 ], 00:22:02.037 "product_name": "NVMe disk", 00:22:02.037 "block_size": 512, 00:22:02.037 "num_blocks": 2097152, 00:22:02.037 "uuid": "a29c7874-87dd-4fef-9b94-4599b1d0f1b3", 00:22:02.037 "numa_id": 1, 00:22:02.037 "assigned_rate_limits": { 00:22:02.037 "rw_ios_per_sec": 0, 00:22:02.037 "rw_mbytes_per_sec": 0, 00:22:02.037 "r_mbytes_per_sec": 0, 00:22:02.037 "w_mbytes_per_sec": 0 00:22:02.037 }, 00:22:02.037 "claimed": false, 00:22:02.037 "zoned": false, 00:22:02.037 "supported_io_types": { 00:22:02.037 "read": true, 00:22:02.037 "write": true, 00:22:02.037 "unmap": false, 00:22:02.037 "flush": true, 00:22:02.037 "reset": true, 00:22:02.037 "nvme_admin": true, 00:22:02.037 "nvme_io": true, 00:22:02.037 "nvme_io_md": false, 00:22:02.037 "write_zeroes": true, 00:22:02.037 "zcopy": false, 00:22:02.037 "get_zone_info": false, 00:22:02.037 "zone_management": false, 00:22:02.037 "zone_append": false, 00:22:02.037 "compare": true, 00:22:02.037 "compare_and_write": true, 00:22:02.037 "abort": true, 00:22:02.037 "seek_hole": false, 00:22:02.037 "seek_data": false, 00:22:02.037 "copy": true, 00:22:02.037 
"nvme_iov_md": false 00:22:02.037 }, 00:22:02.037 "memory_domains": [ 00:22:02.037 { 00:22:02.037 "dma_device_id": "system", 00:22:02.037 "dma_device_type": 1 00:22:02.037 } 00:22:02.037 ], 00:22:02.037 "driver_specific": { 00:22:02.037 "nvme": [ 00:22:02.037 { 00:22:02.037 "trid": { 00:22:02.037 "trtype": "TCP", 00:22:02.037 "adrfam": "IPv4", 00:22:02.037 "traddr": "10.0.0.2", 00:22:02.037 "trsvcid": "4420", 00:22:02.037 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:22:02.037 }, 00:22:02.037 "ctrlr_data": { 00:22:02.037 "cntlid": 1, 00:22:02.037 "vendor_id": "0x8086", 00:22:02.037 "model_number": "SPDK bdev Controller", 00:22:02.037 "serial_number": "00000000000000000000", 00:22:02.037 "firmware_revision": "25.01", 00:22:02.037 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:02.037 "oacs": { 00:22:02.037 "security": 0, 00:22:02.037 "format": 0, 00:22:02.037 "firmware": 0, 00:22:02.037 "ns_manage": 0 00:22:02.037 }, 00:22:02.037 "multi_ctrlr": true, 00:22:02.037 "ana_reporting": false 00:22:02.037 }, 00:22:02.037 "vs": { 00:22:02.037 "nvme_version": "1.3" 00:22:02.037 }, 00:22:02.037 "ns_data": { 00:22:02.037 "id": 1, 00:22:02.037 "can_share": true 00:22:02.037 } 00:22:02.037 } 00:22:02.037 ], 00:22:02.037 "mp_policy": "active_passive" 00:22:02.037 } 00:22:02.037 } 00:22:02.037 ] 00:22:02.037 07:18:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:02.037 07:18:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:22:02.037 07:18:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:02.037 07:18:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:02.038 [2024-11-20 07:18:06.240753] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:22:02.038 [2024-11-20 07:18:06.240808] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed 
to flush tqpair=0x17498e0 (9): Bad file descriptor 00:22:02.038 [2024-11-20 07:18:06.373033] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:22:02.038 07:18:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:02.038 07:18:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:22:02.038 07:18:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:02.038 07:18:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:02.038 [ 00:22:02.038 { 00:22:02.038 "name": "nvme0n1", 00:22:02.038 "aliases": [ 00:22:02.038 "a29c7874-87dd-4fef-9b94-4599b1d0f1b3" 00:22:02.038 ], 00:22:02.038 "product_name": "NVMe disk", 00:22:02.038 "block_size": 512, 00:22:02.038 "num_blocks": 2097152, 00:22:02.038 "uuid": "a29c7874-87dd-4fef-9b94-4599b1d0f1b3", 00:22:02.038 "numa_id": 1, 00:22:02.038 "assigned_rate_limits": { 00:22:02.038 "rw_ios_per_sec": 0, 00:22:02.038 "rw_mbytes_per_sec": 0, 00:22:02.038 "r_mbytes_per_sec": 0, 00:22:02.038 "w_mbytes_per_sec": 0 00:22:02.038 }, 00:22:02.038 "claimed": false, 00:22:02.038 "zoned": false, 00:22:02.038 "supported_io_types": { 00:22:02.038 "read": true, 00:22:02.038 "write": true, 00:22:02.038 "unmap": false, 00:22:02.038 "flush": true, 00:22:02.038 "reset": true, 00:22:02.038 "nvme_admin": true, 00:22:02.038 "nvme_io": true, 00:22:02.038 "nvme_io_md": false, 00:22:02.038 "write_zeroes": true, 00:22:02.038 "zcopy": false, 00:22:02.038 "get_zone_info": false, 00:22:02.038 "zone_management": false, 00:22:02.038 "zone_append": false, 00:22:02.038 "compare": true, 00:22:02.038 "compare_and_write": true, 00:22:02.038 "abort": true, 00:22:02.038 "seek_hole": false, 00:22:02.038 "seek_data": false, 00:22:02.038 "copy": true, 00:22:02.038 "nvme_iov_md": false 00:22:02.038 }, 00:22:02.038 "memory_domains": [ 
00:22:02.038 { 00:22:02.038 "dma_device_id": "system", 00:22:02.038 "dma_device_type": 1 00:22:02.038 } 00:22:02.038 ], 00:22:02.038 "driver_specific": { 00:22:02.038 "nvme": [ 00:22:02.038 { 00:22:02.038 "trid": { 00:22:02.038 "trtype": "TCP", 00:22:02.038 "adrfam": "IPv4", 00:22:02.038 "traddr": "10.0.0.2", 00:22:02.038 "trsvcid": "4420", 00:22:02.038 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:22:02.038 }, 00:22:02.038 "ctrlr_data": { 00:22:02.038 "cntlid": 2, 00:22:02.038 "vendor_id": "0x8086", 00:22:02.038 "model_number": "SPDK bdev Controller", 00:22:02.038 "serial_number": "00000000000000000000", 00:22:02.038 "firmware_revision": "25.01", 00:22:02.038 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:02.038 "oacs": { 00:22:02.038 "security": 0, 00:22:02.038 "format": 0, 00:22:02.038 "firmware": 0, 00:22:02.038 "ns_manage": 0 00:22:02.038 }, 00:22:02.038 "multi_ctrlr": true, 00:22:02.038 "ana_reporting": false 00:22:02.038 }, 00:22:02.038 "vs": { 00:22:02.038 "nvme_version": "1.3" 00:22:02.038 }, 00:22:02.038 "ns_data": { 00:22:02.038 "id": 1, 00:22:02.038 "can_share": true 00:22:02.038 } 00:22:02.038 } 00:22:02.038 ], 00:22:02.038 "mp_policy": "active_passive" 00:22:02.038 } 00:22:02.038 } 00:22:02.038 ] 00:22:02.038 07:18:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:02.038 07:18:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:02.038 07:18:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:02.038 07:18:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:02.038 07:18:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:02.038 07:18:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:22:02.038 07:18:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.xoHPHrqXBH 
00:22:02.038 07:18:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:02.038 07:18:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.xoHPHrqXBH 00:22:02.038 07:18:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.xoHPHrqXBH 00:22:02.038 07:18:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:02.038 07:18:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:02.038 07:18:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:02.038 07:18:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:22:02.038 07:18:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:02.038 07:18:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:02.038 07:18:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:02.038 07:18:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:22:02.038 07:18:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:02.038 07:18:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:02.038 [2024-11-20 07:18:06.445383] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:02.038 [2024-11-20 07:18:06.445473] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:02.038 07:18:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:22:02.038 07:18:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:22:02.038 07:18:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:02.038 07:18:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:02.038 07:18:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:02.038 07:18:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:22:02.038 07:18:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:02.038 07:18:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:02.038 [2024-11-20 07:18:06.465444] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:02.038 nvme0n1 00:22:02.039 07:18:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:02.039 07:18:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:22:02.039 07:18:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:02.039 07:18:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:02.039 [ 00:22:02.039 { 00:22:02.039 "name": "nvme0n1", 00:22:02.039 "aliases": [ 00:22:02.039 "a29c7874-87dd-4fef-9b94-4599b1d0f1b3" 00:22:02.039 ], 00:22:02.039 "product_name": "NVMe disk", 00:22:02.039 "block_size": 512, 00:22:02.039 "num_blocks": 2097152, 00:22:02.039 "uuid": "a29c7874-87dd-4fef-9b94-4599b1d0f1b3", 00:22:02.039 "numa_id": 1, 00:22:02.039 "assigned_rate_limits": { 00:22:02.039 "rw_ios_per_sec": 0, 00:22:02.039 
"rw_mbytes_per_sec": 0, 00:22:02.039 "r_mbytes_per_sec": 0, 00:22:02.039 "w_mbytes_per_sec": 0 00:22:02.039 }, 00:22:02.039 "claimed": false, 00:22:02.039 "zoned": false, 00:22:02.039 "supported_io_types": { 00:22:02.039 "read": true, 00:22:02.039 "write": true, 00:22:02.039 "unmap": false, 00:22:02.039 "flush": true, 00:22:02.039 "reset": true, 00:22:02.039 "nvme_admin": true, 00:22:02.039 "nvme_io": true, 00:22:02.039 "nvme_io_md": false, 00:22:02.039 "write_zeroes": true, 00:22:02.039 "zcopy": false, 00:22:02.039 "get_zone_info": false, 00:22:02.039 "zone_management": false, 00:22:02.039 "zone_append": false, 00:22:02.039 "compare": true, 00:22:02.039 "compare_and_write": true, 00:22:02.039 "abort": true, 00:22:02.039 "seek_hole": false, 00:22:02.039 "seek_data": false, 00:22:02.039 "copy": true, 00:22:02.039 "nvme_iov_md": false 00:22:02.039 }, 00:22:02.039 "memory_domains": [ 00:22:02.039 { 00:22:02.039 "dma_device_id": "system", 00:22:02.039 "dma_device_type": 1 00:22:02.039 } 00:22:02.039 ], 00:22:02.039 "driver_specific": { 00:22:02.039 "nvme": [ 00:22:02.039 { 00:22:02.039 "trid": { 00:22:02.039 "trtype": "TCP", 00:22:02.039 "adrfam": "IPv4", 00:22:02.039 "traddr": "10.0.0.2", 00:22:02.039 "trsvcid": "4421", 00:22:02.039 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:22:02.039 }, 00:22:02.039 "ctrlr_data": { 00:22:02.039 "cntlid": 3, 00:22:02.039 "vendor_id": "0x8086", 00:22:02.039 "model_number": "SPDK bdev Controller", 00:22:02.039 "serial_number": "00000000000000000000", 00:22:02.039 "firmware_revision": "25.01", 00:22:02.039 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:02.039 "oacs": { 00:22:02.039 "security": 0, 00:22:02.039 "format": 0, 00:22:02.039 "firmware": 0, 00:22:02.039 "ns_manage": 0 00:22:02.039 }, 00:22:02.039 "multi_ctrlr": true, 00:22:02.039 "ana_reporting": false 00:22:02.039 }, 00:22:02.039 "vs": { 00:22:02.039 "nvme_version": "1.3" 00:22:02.039 }, 00:22:02.039 "ns_data": { 00:22:02.039 "id": 1, 00:22:02.039 "can_share": true 00:22:02.039 } 
00:22:02.039 } 00:22:02.039 ], 00:22:02.039 "mp_policy": "active_passive" 00:22:02.039 } 00:22:02.039 } 00:22:02.039 ] 00:22:02.039 07:18:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:02.039 07:18:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:02.039 07:18:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:02.039 07:18:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:02.039 07:18:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:02.039 07:18:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.xoHPHrqXBH 00:22:02.039 07:18:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 00:22:02.039 07:18:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:22:02.039 07:18:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:02.039 07:18:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:22:02.039 07:18:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:02.039 07:18:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:22:02.039 07:18:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:02.039 07:18:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:02.039 rmmod nvme_tcp 00:22:02.299 rmmod nvme_fabrics 00:22:02.299 rmmod nvme_keyring 00:22:02.299 07:18:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:02.299 07:18:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:22:02.299 07:18:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:22:02.299 07:18:06 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 1268642 ']' 00:22:02.299 07:18:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 1268642 00:22:02.299 07:18:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@952 -- # '[' -z 1268642 ']' 00:22:02.299 07:18:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # kill -0 1268642 00:22:02.299 07:18:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@957 -- # uname 00:22:02.299 07:18:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:02.299 07:18:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1268642 00:22:02.299 07:18:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:22:02.299 07:18:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:22:02.299 07:18:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1268642' 00:22:02.299 killing process with pid 1268642 00:22:02.299 07:18:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@971 -- # kill 1268642 00:22:02.299 07:18:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@976 -- # wait 1268642 00:22:02.299 07:18:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:02.299 07:18:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:02.299 07:18:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:02.299 07:18:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:22:02.299 07:18:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-save 00:22:02.299 07:18:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:02.299 
07:18:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-restore 00:22:02.299 07:18:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:02.299 07:18:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:02.299 07:18:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:02.299 07:18:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:02.299 07:18:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:04.835 07:18:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:04.835 00:22:04.835 real 0m9.456s 00:22:04.835 user 0m3.067s 00:22:04.835 sys 0m4.822s 00:22:04.835 07:18:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1128 -- # xtrace_disable 00:22:04.835 07:18:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:04.835 ************************************ 00:22:04.835 END TEST nvmf_async_init 00:22:04.835 ************************************ 00:22:04.835 07:18:08 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:22:04.835 07:18:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:22:04.835 07:18:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:22:04.835 07:18:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:04.835 ************************************ 00:22:04.835 START TEST dma 00:22:04.835 ************************************ 00:22:04.835 07:18:08 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 
00:22:04.835 * Looking for test storage... 00:22:04.835 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:04.835 07:18:09 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:22:04.835 07:18:09 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1691 -- # lcov --version 00:22:04.835 07:18:09 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:22:04.835 07:18:09 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:22:04.835 07:18:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:04.835 07:18:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:04.835 07:18:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:04.835 07:18:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:22:04.835 07:18:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:22:04.835 07:18:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:22:04.835 07:18:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:22:04.835 07:18:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:22:04.835 07:18:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:22:04.835 07:18:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:22:04.835 07:18:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:04.835 07:18:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:22:04.835 07:18:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:22:04.835 07:18:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:04.835 07:18:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:04.835 07:18:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:22:04.835 07:18:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:22:04.835 07:18:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:04.835 07:18:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:22:04.835 07:18:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:22:04.835 07:18:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:22:04.835 07:18:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:22:04.835 07:18:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:04.835 07:18:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:22:04.835 07:18:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:22:04.835 07:18:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:04.835 07:18:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:04.835 07:18:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:22:04.836 07:18:09 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:04.836 07:18:09 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:22:04.836 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:04.836 --rc genhtml_branch_coverage=1 00:22:04.836 --rc genhtml_function_coverage=1 00:22:04.836 --rc genhtml_legend=1 00:22:04.836 --rc geninfo_all_blocks=1 00:22:04.836 --rc geninfo_unexecuted_blocks=1 00:22:04.836 00:22:04.836 ' 00:22:04.836 07:18:09 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:22:04.836 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:04.836 --rc genhtml_branch_coverage=1 00:22:04.836 --rc genhtml_function_coverage=1 
00:22:04.836 --rc genhtml_legend=1 00:22:04.836 --rc geninfo_all_blocks=1 00:22:04.836 --rc geninfo_unexecuted_blocks=1 00:22:04.836 00:22:04.836 ' 00:22:04.836 07:18:09 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:22:04.836 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:04.836 --rc genhtml_branch_coverage=1 00:22:04.836 --rc genhtml_function_coverage=1 00:22:04.836 --rc genhtml_legend=1 00:22:04.836 --rc geninfo_all_blocks=1 00:22:04.836 --rc geninfo_unexecuted_blocks=1 00:22:04.836 00:22:04.836 ' 00:22:04.836 07:18:09 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:22:04.836 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:04.836 --rc genhtml_branch_coverage=1 00:22:04.836 --rc genhtml_function_coverage=1 00:22:04.836 --rc genhtml_legend=1 00:22:04.836 --rc geninfo_all_blocks=1 00:22:04.836 --rc geninfo_unexecuted_blocks=1 00:22:04.836 00:22:04.836 ' 00:22:04.836 07:18:09 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:04.836 07:18:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:22:04.836 07:18:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:04.836 07:18:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:04.836 07:18:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:04.836 07:18:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:04.836 07:18:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:04.836 07:18:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:04.836 07:18:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:04.836 07:18:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:04.836 07:18:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 
-- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:04.836 07:18:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:04.836 07:18:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:04.836 07:18:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:04.836 07:18:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:04.836 07:18:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:04.836 07:18:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:04.836 07:18:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:04.836 07:18:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:04.836 07:18:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:22:04.836 07:18:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:04.836 07:18:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:04.836 07:18:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:04.836 07:18:09 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:04.836 07:18:09 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:04.836 07:18:09 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:04.836 07:18:09 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:22:04.836 
07:18:09 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:04.836 07:18:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:22:04.836 07:18:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:04.836 07:18:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:04.836 07:18:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:04.836 07:18:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:04.836 07:18:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:04.836 07:18:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:04.836 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:04.836 07:18:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:04.836 07:18:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:04.836 07:18:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:04.836 07:18:09 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:22:04.836 07:18:09 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:22:04.836 00:22:04.836 real 0m0.212s 00:22:04.836 user 0m0.135s 00:22:04.836 sys 0m0.090s 00:22:04.836 07:18:09 
nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1128 -- # xtrace_disable 00:22:04.836 07:18:09 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:22:04.836 ************************************ 00:22:04.836 END TEST dma 00:22:04.836 ************************************ 00:22:04.836 07:18:09 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:22:04.836 07:18:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:22:04.836 07:18:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:22:04.836 07:18:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:04.836 ************************************ 00:22:04.836 START TEST nvmf_identify 00:22:04.836 ************************************ 00:22:04.836 07:18:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:22:04.836 * Looking for test storage... 
00:22:04.836 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:04.836 07:18:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:22:04.836 07:18:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1691 -- # lcov --version 00:22:04.836 07:18:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:22:05.096 07:18:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:22:05.096 07:18:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:05.096 07:18:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:05.096 07:18:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:05.096 07:18:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:22:05.096 07:18:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:22:05.096 07:18:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:22:05.096 07:18:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:22:05.096 07:18:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:22:05.096 07:18:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:22:05.096 07:18:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:22:05.096 07:18:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:05.096 07:18:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:22:05.096 07:18:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:22:05.096 07:18:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:05.096 07:18:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( 
v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:05.096 07:18:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:22:05.096 07:18:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:22:05.096 07:18:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:05.096 07:18:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:22:05.096 07:18:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:22:05.096 07:18:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:22:05.096 07:18:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:22:05.096 07:18:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:05.096 07:18:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:22:05.096 07:18:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:22:05.096 07:18:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:05.096 07:18:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:05.096 07:18:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:22:05.096 07:18:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:05.096 07:18:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:22:05.096 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:05.096 --rc genhtml_branch_coverage=1 00:22:05.096 --rc genhtml_function_coverage=1 00:22:05.096 --rc genhtml_legend=1 00:22:05.096 --rc geninfo_all_blocks=1 00:22:05.096 --rc geninfo_unexecuted_blocks=1 00:22:05.096 00:22:05.096 ' 00:22:05.096 07:18:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1704 -- 
# LCOV_OPTS=' 00:22:05.096 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:05.096 --rc genhtml_branch_coverage=1 00:22:05.096 --rc genhtml_function_coverage=1 00:22:05.096 --rc genhtml_legend=1 00:22:05.096 --rc geninfo_all_blocks=1 00:22:05.096 --rc geninfo_unexecuted_blocks=1 00:22:05.096 00:22:05.096 ' 00:22:05.096 07:18:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:22:05.096 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:05.096 --rc genhtml_branch_coverage=1 00:22:05.096 --rc genhtml_function_coverage=1 00:22:05.096 --rc genhtml_legend=1 00:22:05.096 --rc geninfo_all_blocks=1 00:22:05.096 --rc geninfo_unexecuted_blocks=1 00:22:05.096 00:22:05.096 ' 00:22:05.096 07:18:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:22:05.096 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:05.096 --rc genhtml_branch_coverage=1 00:22:05.096 --rc genhtml_function_coverage=1 00:22:05.096 --rc genhtml_legend=1 00:22:05.096 --rc geninfo_all_blocks=1 00:22:05.096 --rc geninfo_unexecuted_blocks=1 00:22:05.096 00:22:05.096 ' 00:22:05.096 07:18:09 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:05.096 07:18:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:22:05.096 07:18:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:05.096 07:18:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:05.096 07:18:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:05.096 07:18:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:05.096 07:18:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:05.096 07:18:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # 
NVMF_IP_LEAST_ADDR=8 00:22:05.096 07:18:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:05.096 07:18:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:05.096 07:18:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:05.096 07:18:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:05.096 07:18:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:05.096 07:18:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:05.096 07:18:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:05.096 07:18:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:05.096 07:18:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:05.096 07:18:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:05.096 07:18:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:05.096 07:18:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:22:05.096 07:18:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:05.096 07:18:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:05.097 07:18:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:05.097 07:18:09 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:05.097 07:18:09 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:05.097 07:18:09 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:05.097 07:18:09 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 
-- # export PATH 00:22:05.097 07:18:09 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:05.097 07:18:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:22:05.097 07:18:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:05.097 07:18:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:05.097 07:18:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:05.097 07:18:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:05.097 07:18:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:05.097 07:18:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:05.097 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:05.097 07:18:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:05.097 07:18:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:05.097 07:18:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:05.097 07:18:09 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:05.097 07:18:09 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:05.097 07:18:09 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:22:05.097 07:18:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:05.097 07:18:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:05.097 07:18:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:05.097 07:18:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:05.097 07:18:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:05.097 07:18:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:05.097 07:18:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:05.097 07:18:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:05.097 07:18:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:05.097 07:18:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:05.097 07:18:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:22:05.097 07:18:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:11.678 07:18:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:11.678 07:18:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:22:11.678 07:18:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:11.678 07:18:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:11.678 07:18:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:11.678 07:18:15 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:11.678 07:18:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:11.678 07:18:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:22:11.678 07:18:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:11.678 07:18:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:22:11.678 07:18:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:22:11.678 07:18:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:22:11.678 07:18:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:22:11.678 07:18:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:22:11.678 07:18:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:22:11.678 07:18:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:11.678 07:18:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:11.678 07:18:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:11.678 07:18:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:11.678 07:18:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:11.678 07:18:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:11.678 07:18:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:11.678 07:18:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:11.678 07:18:15 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:11.678 07:18:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:11.678 07:18:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:11.678 07:18:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:11.678 07:18:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:11.678 07:18:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:11.678 07:18:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:11.678 07:18:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:11.678 07:18:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:11.678 07:18:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:11.678 07:18:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:11.678 07:18:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:11.678 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:11.678 07:18:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:11.678 07:18:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:11.678 07:18:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:11.678 07:18:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:11.678 07:18:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:11.678 07:18:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:11.678 
07:18:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:11.678 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:11.678 07:18:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:11.678 07:18:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:11.678 07:18:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:11.678 07:18:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:11.678 07:18:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:11.678 07:18:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:11.678 07:18:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:11.678 07:18:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:11.678 07:18:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:11.678 07:18:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:11.678 07:18:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:11.678 07:18:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:11.678 07:18:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:11.678 07:18:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:11.678 07:18:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:11.678 07:18:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:11.678 Found net devices under 0000:86:00.0: cvl_0_0 00:22:11.678 07:18:15 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:11.678 07:18:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:11.679 07:18:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:11.679 07:18:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:11.679 07:18:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:11.679 07:18:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:11.679 07:18:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:11.679 07:18:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:11.679 07:18:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:11.679 Found net devices under 0000:86:00.1: cvl_0_1 00:22:11.679 07:18:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:11.679 07:18:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:11.679 07:18:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # is_hw=yes 00:22:11.679 07:18:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:11.679 07:18:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:11.679 07:18:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:11.679 07:18:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:11.679 07:18:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:11.679 07:18:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 
00:22:11.679 07:18:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:22:11.679 07:18:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:22:11.679 07:18:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:22:11.679 07:18:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:22:11.679 07:18:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:22:11.679 07:18:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:22:11.679 07:18:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:22:11.679 07:18:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:22:11.679 07:18:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:22:11.679 07:18:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:22:11.679 07:18:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:22:11.679 07:18:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:22:11.679 07:18:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:22:11.679 07:18:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:22:11.679 07:18:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:22:11.679 07:18:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:22:11.679 07:18:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:22:11.679 07:18:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:22:11.679 07:18:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:22:11.679 07:18:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:22:11.679 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:22:11.679 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.287 ms
00:22:11.679
00:22:11.679 --- 10.0.0.2 ping statistics ---
00:22:11.679 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:22:11.679 rtt min/avg/max/mdev = 0.287/0.287/0.287/0.000 ms
00:22:11.679 07:18:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:22:11.679 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:22:11.679 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.165 ms
00:22:11.679
00:22:11.679 --- 10.0.0.1 ping statistics ---
00:22:11.679 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:22:11.679 rtt min/avg/max/mdev = 0.165/0.165/0.165/0.000 ms
00:22:11.679 07:18:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:22:11.679 07:18:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # return 0
00:22:11.679 07:18:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:22:11.679 07:18:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:22:11.679 07:18:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:22:11.679 07:18:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:22:11.679 07:18:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:22:11.679 07:18:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:22:11.679 07:18:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:22:11.679 07:18:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt
00:22:11.679 07:18:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@724 -- # xtrace_disable
00:22:11.679 07:18:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:22:11.679 07:18:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=1272843
00:22:11.679 07:18:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:22:11.679 07:18:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:22:11.679 07:18:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 1272843
00:22:11.679 07:18:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@833 -- # '[' -z 1272843 ']'
00:22:11.679 07:18:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:22:11.679 07:18:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@838 -- # local max_retries=100
00:22:11.679 07:18:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:22:11.679 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
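The namespace plumbing in the `nvmf/common.sh` trace above (target interface moved into a netns, addresses assigned on both sides, loopback brought up, firewall opened for port 4420) can be sketched as a small script. This is a dry-run sketch, not SPDK's actual helper: `run` only echoes each command, and you would swap it for real execution (as root, with the `cvl_0_0`/`cvl_0_1` devices present) to apply the configuration.

```shell
#!/usr/bin/env bash
# Dry-run sketch of the netns setup performed by nvmf/common.sh in the log.
# `run` echoes the command instead of executing it; replace with `eval "$@"`
# on a real host (root required). All names below mirror the log.
run() { echo "$*"; }

NS=cvl_0_0_ns_spdk   # target-side network namespace
TGT_IF=cvl_0_0       # moved into the namespace, gets 10.0.0.2
INI_IF=cvl_0_1       # stays in the root namespace, gets 10.0.0.1

netns_setup_cmds() {
  run ip netns add "$NS"
  run ip link set "$TGT_IF" netns "$NS"
  run ip addr add 10.0.0.1/24 dev "$INI_IF"
  run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
  run ip link set "$INI_IF" up
  run ip netns exec "$NS" ip link set "$TGT_IF" up
  run ip netns exec "$NS" ip link set lo up
  # accept NVMe/TCP traffic arriving on the initiator-side interface
  run iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
}
netns_setup_cmds
```

The bidirectional `ping` checks in the log then confirm that 10.0.0.1 and 10.0.0.2 can reach each other across the namespace boundary before the target is started.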
00:22:11.679 07:18:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # xtrace_disable
00:22:11.679 07:18:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:22:11.679 [2024-11-20 07:18:15.417703] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization...
00:22:11.679 [2024-11-20 07:18:15.417748] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:22:11.679 [2024-11-20 07:18:15.495739] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:22:11.679 [2024-11-20 07:18:15.539009] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:22:11.679 [2024-11-20 07:18:15.539048] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:22:11.679 [2024-11-20 07:18:15.539054] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:22:11.679 [2024-11-20 07:18:15.539061] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:22:11.679 [2024-11-20 07:18:15.539066] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
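The `host/identify.sh@18` line above launches `nvmf_tgt` inside the target namespace; the EAL parameters in the startup banner follow directly from its flags. A dry-run sketch of that launch (the binary path is the workspace-relative path assumed here, and `launch_cmd` only echoes the command line):

```shell
#!/usr/bin/env bash
# Dry-run sketch of the nvmf_tgt launch from host/identify.sh in the log.
NS=cvl_0_0_ns_spdk
NVMF_TGT=./build/bin/nvmf_tgt   # assumed path inside an SPDK checkout

launch_cmd() {
  # -i 0      : shared-memory instance id (matches --file-prefix=spdk0 in EAL args)
  # -e 0xFFFF : enable all tracepoint groups ("Tracepoint Group Mask 0xFFFF")
  # -m 0xF    : core mask for cores 0-3, hence the four reactors in the log
  echo ip netns exec "$NS" "$NVMF_TGT" -i 0 -e 0xFFFF -m 0xF
}
launch_cmd
```

After launching, the script's `waitforlisten` polls until the target's JSON-RPC socket `/var/tmp/spdk.sock` accepts connections, which is the "Waiting for process to start up..." message above.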
00:22:11.679 [2024-11-20 07:18:15.540601] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:11.679 [2024-11-20 07:18:15.540712] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:11.679 [2024-11-20 07:18:15.540822] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:11.679 [2024-11-20 07:18:15.540824] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:11.679 07:18:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:11.679 07:18:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@866 -- # return 0 00:22:11.679 07:18:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:11.679 07:18:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:11.679 07:18:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:11.679 [2024-11-20 07:18:15.643732] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:11.679 07:18:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:11.679 07:18:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:22:11.679 07:18:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:11.679 07:18:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:11.679 07:18:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:11.680 07:18:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:11.680 07:18:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:11.680 Malloc0 00:22:11.680 07:18:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:11.680 07:18:15 
nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:11.680 07:18:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:11.680 07:18:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:11.680 07:18:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:11.680 07:18:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:22:11.680 07:18:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:11.680 07:18:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:11.680 07:18:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:11.680 07:18:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:11.680 07:18:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:11.680 07:18:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:11.680 [2024-11-20 07:18:15.740044] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:11.680 07:18:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:11.680 07:18:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:22:11.680 07:18:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:11.680 07:18:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:11.680 07:18:15 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:11.680 07:18:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems
00:22:11.680 07:18:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:11.680 07:18:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:22:11.680 [
00:22:11.680   {
00:22:11.680     "nqn": "nqn.2014-08.org.nvmexpress.discovery",
00:22:11.680     "subtype": "Discovery",
00:22:11.680     "listen_addresses": [
00:22:11.680       {
00:22:11.680         "trtype": "TCP",
00:22:11.680         "adrfam": "IPv4",
00:22:11.680         "traddr": "10.0.0.2",
00:22:11.680         "trsvcid": "4420"
00:22:11.680       }
00:22:11.680     ],
00:22:11.680     "allow_any_host": true,
00:22:11.680     "hosts": []
00:22:11.680   },
00:22:11.680   {
00:22:11.680     "nqn": "nqn.2016-06.io.spdk:cnode1",
00:22:11.680     "subtype": "NVMe",
00:22:11.680     "listen_addresses": [
00:22:11.680       {
00:22:11.680         "trtype": "TCP",
00:22:11.680         "adrfam": "IPv4",
00:22:11.680         "traddr": "10.0.0.2",
00:22:11.680         "trsvcid": "4420"
00:22:11.680       }
00:22:11.680     ],
00:22:11.680     "allow_any_host": true,
00:22:11.680     "hosts": [],
00:22:11.680     "serial_number": "SPDK00000000000001",
00:22:11.680     "model_number": "SPDK bdev Controller",
00:22:11.680     "max_namespaces": 32,
00:22:11.680     "min_cntlid": 1,
00:22:11.680     "max_cntlid": 65519,
00:22:11.680     "namespaces": [
00:22:11.680       {
00:22:11.680         "nsid": 1,
00:22:11.680         "bdev_name": "Malloc0",
00:22:11.680         "name": "Malloc0",
00:22:11.680         "nguid": "ABCDEF0123456789ABCDEF0123456789",
00:22:11.680         "eui64": "ABCDEF0123456789",
00:22:11.680         "uuid": "cf11e2c8-92d0-46bd-afdb-9e6e0a6bd30a"
00:22:11.680       }
00:22:11.680     ]
00:22:11.680   }
00:22:11.680 ]
00:22:11.680 07:18:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:11.680 07:18:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- #
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:22:11.680 [2024-11-20 07:18:15.792180] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 00:22:11.680 [2024-11-20 07:18:15.792220] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1272877 ] 00:22:11.680 [2024-11-20 07:18:15.836971] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:22:11.680 [2024-11-20 07:18:15.837019] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:22:11.680 [2024-11-20 07:18:15.837024] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:22:11.680 [2024-11-20 07:18:15.837038] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:22:11.680 [2024-11-20 07:18:15.837047] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:22:11.680 [2024-11-20 07:18:15.837621] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:22:11.680 [2024-11-20 07:18:15.837653] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1192690 0 00:22:11.680 [2024-11-20 07:18:15.843964] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:22:11.680 [2024-11-20 07:18:15.843978] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:22:11.680 [2024-11-20 07:18:15.843982] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:22:11.680 [2024-11-20 07:18:15.843985] 
nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:22:11.680 [2024-11-20 07:18:15.844018] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:11.680 [2024-11-20 07:18:15.844023] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:11.680 [2024-11-20 07:18:15.844026] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1192690) 00:22:11.680 [2024-11-20 07:18:15.844038] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:22:11.680 [2024-11-20 07:18:15.844054] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11f4100, cid 0, qid 0 00:22:11.680 [2024-11-20 07:18:15.851958] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:11.680 [2024-11-20 07:18:15.851967] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:11.680 [2024-11-20 07:18:15.851970] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:11.680 [2024-11-20 07:18:15.851975] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11f4100) on tqpair=0x1192690 00:22:11.680 [2024-11-20 07:18:15.851986] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:22:11.680 [2024-11-20 07:18:15.851992] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:22:11.680 [2024-11-20 07:18:15.851997] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:22:11.680 [2024-11-20 07:18:15.852009] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:11.680 [2024-11-20 07:18:15.852013] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:11.680 [2024-11-20 07:18:15.852016] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1192690) 
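The target configuration that `spdk_nvme_identify` is now probing was applied earlier by the `rpc_cmd` calls from `host/identify.sh` (transport, Malloc0-backed subsystem, data and discovery listeners). A dry-run sketch of that RPC sequence, with `rpc` echoing instead of invoking SPDK's real `scripts/rpc.py -s /var/tmp/spdk.sock`:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the RPC sequence from host/identify.sh in the log above.
# `rpc` only echoes; on a live target it would call scripts/rpc.py.
rpc() { echo "rpc.py $*"; }

rpc_sequence() {
  rpc nvmf_create_transport -t tcp -o -u 8192
  rpc bdev_malloc_create 64 512 -b Malloc0
  rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
  rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
}
rpc_sequence
```

The discovery listener added last is what lets the identify tool above connect to `nqn.2014-08.org.nvmexpress.discovery` on 10.0.0.2:4420 and walk the fabric-connect/property-get state machine traced in the debug output.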
00:22:11.680 [2024-11-20 07:18:15.852022] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.680 [2024-11-20 07:18:15.852035] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11f4100, cid 0, qid 0 00:22:11.680 [2024-11-20 07:18:15.852204] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:11.680 [2024-11-20 07:18:15.852210] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:11.680 [2024-11-20 07:18:15.852213] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:11.680 [2024-11-20 07:18:15.852217] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11f4100) on tqpair=0x1192690 00:22:11.680 [2024-11-20 07:18:15.852222] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:22:11.680 [2024-11-20 07:18:15.852229] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:22:11.680 [2024-11-20 07:18:15.852236] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:11.680 [2024-11-20 07:18:15.852239] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:11.680 [2024-11-20 07:18:15.852242] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1192690) 00:22:11.680 [2024-11-20 07:18:15.852248] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.680 [2024-11-20 07:18:15.852258] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11f4100, cid 0, qid 0 00:22:11.680 [2024-11-20 07:18:15.852333] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:11.680 [2024-11-20 07:18:15.852339] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
00:22:11.681 [2024-11-20 07:18:15.852342] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:11.681 [2024-11-20 07:18:15.852348] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11f4100) on tqpair=0x1192690 00:22:11.681 [2024-11-20 07:18:15.852353] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:22:11.681 [2024-11-20 07:18:15.852360] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:22:11.681 [2024-11-20 07:18:15.852366] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:11.681 [2024-11-20 07:18:15.852370] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:11.681 [2024-11-20 07:18:15.852373] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1192690) 00:22:11.681 [2024-11-20 07:18:15.852379] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.681 [2024-11-20 07:18:15.852388] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11f4100, cid 0, qid 0 00:22:11.681 [2024-11-20 07:18:15.852454] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:11.681 [2024-11-20 07:18:15.852460] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:11.681 [2024-11-20 07:18:15.852463] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:11.681 [2024-11-20 07:18:15.852466] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11f4100) on tqpair=0x1192690 00:22:11.681 [2024-11-20 07:18:15.852470] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:22:11.681 [2024-11-20 07:18:15.852479] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:11.681 [2024-11-20 07:18:15.852482] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:11.681 [2024-11-20 07:18:15.852486] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1192690) 00:22:11.681 [2024-11-20 07:18:15.852491] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.681 [2024-11-20 07:18:15.852501] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11f4100, cid 0, qid 0 00:22:11.681 [2024-11-20 07:18:15.852571] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:11.681 [2024-11-20 07:18:15.852577] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:11.681 [2024-11-20 07:18:15.852580] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:11.681 [2024-11-20 07:18:15.852583] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11f4100) on tqpair=0x1192690 00:22:11.681 [2024-11-20 07:18:15.852587] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:22:11.681 [2024-11-20 07:18:15.852592] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:22:11.681 [2024-11-20 07:18:15.852598] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:22:11.681 [2024-11-20 07:18:15.852707] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:22:11.681 [2024-11-20 07:18:15.852712] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 
15000 ms) 00:22:11.681 [2024-11-20 07:18:15.852719] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:11.681 [2024-11-20 07:18:15.852722] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:11.681 [2024-11-20 07:18:15.852726] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1192690) 00:22:11.681 [2024-11-20 07:18:15.852731] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.681 [2024-11-20 07:18:15.852741] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11f4100, cid 0, qid 0 00:22:11.681 [2024-11-20 07:18:15.852809] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:11.681 [2024-11-20 07:18:15.852815] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:11.681 [2024-11-20 07:18:15.852818] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:11.681 [2024-11-20 07:18:15.852821] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11f4100) on tqpair=0x1192690 00:22:11.681 [2024-11-20 07:18:15.852826] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:22:11.681 [2024-11-20 07:18:15.852834] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:11.681 [2024-11-20 07:18:15.852837] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:11.681 [2024-11-20 07:18:15.852840] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1192690) 00:22:11.681 [2024-11-20 07:18:15.852846] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.681 [2024-11-20 07:18:15.852856] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11f4100, cid 0, qid 0 00:22:11.681 [2024-11-20 
07:18:15.852927] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:11.681 [2024-11-20 07:18:15.852933] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:11.681 [2024-11-20 07:18:15.852936] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:11.681 [2024-11-20 07:18:15.852940] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11f4100) on tqpair=0x1192690 00:22:11.681 [2024-11-20 07:18:15.852944] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:22:11.681 [2024-11-20 07:18:15.852955] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:22:11.681 [2024-11-20 07:18:15.852962] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:22:11.681 [2024-11-20 07:18:15.852971] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:22:11.681 [2024-11-20 07:18:15.852979] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:11.681 [2024-11-20 07:18:15.852982] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1192690) 00:22:11.681 [2024-11-20 07:18:15.852988] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.681 [2024-11-20 07:18:15.852998] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11f4100, cid 0, qid 0 00:22:11.681 [2024-11-20 07:18:15.853096] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:11.681 [2024-11-20 07:18:15.853102] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: 
pdu type =7 00:22:11.681 [2024-11-20 07:18:15.853106] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:11.681 [2024-11-20 07:18:15.853109] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1192690): datao=0, datal=4096, cccid=0 00:22:11.681 [2024-11-20 07:18:15.853113] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x11f4100) on tqpair(0x1192690): expected_datao=0, payload_size=4096 00:22:11.681 [2024-11-20 07:18:15.853117] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:11.681 [2024-11-20 07:18:15.853124] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:11.681 [2024-11-20 07:18:15.853127] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:11.681 [2024-11-20 07:18:15.853143] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:11.681 [2024-11-20 07:18:15.853148] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:11.681 [2024-11-20 07:18:15.853151] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:11.681 [2024-11-20 07:18:15.853154] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11f4100) on tqpair=0x1192690 00:22:11.681 [2024-11-20 07:18:15.853163] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:22:11.681 [2024-11-20 07:18:15.853168] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:22:11.681 [2024-11-20 07:18:15.853172] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:22:11.681 [2024-11-20 07:18:15.853178] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:22:11.681 [2024-11-20 07:18:15.853183] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:22:11.681 [2024-11-20 07:18:15.853187] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:22:11.681 [2024-11-20 07:18:15.853197] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:22:11.681 [2024-11-20 07:18:15.853203] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:11.681 [2024-11-20 07:18:15.853207] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:11.681 [2024-11-20 07:18:15.853210] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1192690) 00:22:11.681 [2024-11-20 07:18:15.853216] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:11.681 [2024-11-20 07:18:15.853226] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11f4100, cid 0, qid 0 00:22:11.682 [2024-11-20 07:18:15.853298] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:11.682 [2024-11-20 07:18:15.853304] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:11.682 [2024-11-20 07:18:15.853307] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:11.682 [2024-11-20 07:18:15.853310] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11f4100) on tqpair=0x1192690 00:22:11.682 [2024-11-20 07:18:15.853316] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:11.682 [2024-11-20 07:18:15.853319] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:11.682 [2024-11-20 07:18:15.853322] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1192690) 00:22:11.682 [2024-11-20 07:18:15.853328] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:11.682 [2024-11-20 07:18:15.853334] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:11.682 [2024-11-20 07:18:15.853337] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:11.682 [2024-11-20 07:18:15.853340] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1192690) 00:22:11.682 [2024-11-20 07:18:15.853345] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:11.682 [2024-11-20 07:18:15.853350] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:11.682 [2024-11-20 07:18:15.853353] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:11.682 [2024-11-20 07:18:15.853357] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1192690) 00:22:11.682 [2024-11-20 07:18:15.853362] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:11.682 [2024-11-20 07:18:15.853367] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:11.682 [2024-11-20 07:18:15.853370] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:11.682 [2024-11-20 07:18:15.853373] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1192690) 00:22:11.682 [2024-11-20 07:18:15.853378] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:11.682 [2024-11-20 07:18:15.853384] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:22:11.682 [2024-11-20 07:18:15.853393] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:22:11.682 [2024-11-20 07:18:15.853398] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:11.682 [2024-11-20 07:18:15.853402] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1192690) 00:22:11.682 [2024-11-20 07:18:15.853407] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.682 [2024-11-20 07:18:15.853419] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11f4100, cid 0, qid 0 00:22:11.682 [2024-11-20 07:18:15.853424] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11f4280, cid 1, qid 0 00:22:11.682 [2024-11-20 07:18:15.853428] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11f4400, cid 2, qid 0 00:22:11.682 [2024-11-20 07:18:15.853432] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11f4580, cid 3, qid 0 00:22:11.682 [2024-11-20 07:18:15.853436] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11f4700, cid 4, qid 0 00:22:11.682 [2024-11-20 07:18:15.853537] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:11.682 [2024-11-20 07:18:15.853543] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:11.682 [2024-11-20 07:18:15.853546] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:11.682 [2024-11-20 07:18:15.853550] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11f4700) on tqpair=0x1192690 00:22:11.682 [2024-11-20 07:18:15.853556] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:22:11.682 [2024-11-20 07:18:15.853561] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] 
setting state to ready (no timeout) 00:22:11.682 [2024-11-20 07:18:15.853570] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:11.682 [2024-11-20 07:18:15.853573] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1192690) 00:22:11.682 [2024-11-20 07:18:15.853579] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.682 [2024-11-20 07:18:15.853589] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11f4700, cid 4, qid 0 00:22:11.682 [2024-11-20 07:18:15.853664] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:11.682 [2024-11-20 07:18:15.853670] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:11.682 [2024-11-20 07:18:15.853673] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:11.682 [2024-11-20 07:18:15.853676] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1192690): datao=0, datal=4096, cccid=4 00:22:11.682 [2024-11-20 07:18:15.853680] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x11f4700) on tqpair(0x1192690): expected_datao=0, payload_size=4096 00:22:11.682 [2024-11-20 07:18:15.853684] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:11.682 [2024-11-20 07:18:15.853695] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:11.682 [2024-11-20 07:18:15.853698] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:11.682 [2024-11-20 07:18:15.894081] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:11.682 [2024-11-20 07:18:15.894094] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:11.682 [2024-11-20 07:18:15.894098] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:11.682 [2024-11-20 07:18:15.894102] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: 
complete tcp_req(0x11f4700) on tqpair=0x1192690 00:22:11.682 [2024-11-20 07:18:15.894114] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:22:11.682 [2024-11-20 07:18:15.894138] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:11.682 [2024-11-20 07:18:15.894145] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1192690) 00:22:11.682 [2024-11-20 07:18:15.894153] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.682 [2024-11-20 07:18:15.894159] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:11.682 [2024-11-20 07:18:15.894163] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:11.682 [2024-11-20 07:18:15.894166] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1192690) 00:22:11.682 [2024-11-20 07:18:15.894172] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:22:11.682 [2024-11-20 07:18:15.894187] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11f4700, cid 4, qid 0 00:22:11.682 [2024-11-20 07:18:15.894192] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11f4880, cid 5, qid 0 00:22:11.682 [2024-11-20 07:18:15.894294] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:11.682 [2024-11-20 07:18:15.894300] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:11.682 [2024-11-20 07:18:15.894303] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:11.682 [2024-11-20 07:18:15.894306] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1192690): datao=0, datal=1024, cccid=4 00:22:11.682 [2024-11-20 07:18:15.894310] 
nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x11f4700) on tqpair(0x1192690): expected_datao=0, payload_size=1024 00:22:11.682 [2024-11-20 07:18:15.894314] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:11.682 [2024-11-20 07:18:15.894320] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:11.682 [2024-11-20 07:18:15.894323] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:11.682 [2024-11-20 07:18:15.894328] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:11.682 [2024-11-20 07:18:15.894333] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:11.682 [2024-11-20 07:18:15.894336] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:11.682 [2024-11-20 07:18:15.894340] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11f4880) on tqpair=0x1192690 00:22:11.682 [2024-11-20 07:18:15.939955] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:11.682 [2024-11-20 07:18:15.939964] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:11.682 [2024-11-20 07:18:15.939967] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:11.682 [2024-11-20 07:18:15.939970] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11f4700) on tqpair=0x1192690 00:22:11.682 [2024-11-20 07:18:15.939980] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:11.682 [2024-11-20 07:18:15.939984] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1192690) 00:22:11.682 [2024-11-20 07:18:15.939991] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.682 [2024-11-20 07:18:15.940007] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11f4700, cid 4, qid 0 00:22:11.682 [2024-11-20 07:18:15.940169] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:11.682 [2024-11-20 07:18:15.940175] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:11.682 [2024-11-20 07:18:15.940178] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:11.683 [2024-11-20 07:18:15.940181] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1192690): datao=0, datal=3072, cccid=4 00:22:11.683 [2024-11-20 07:18:15.940185] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x11f4700) on tqpair(0x1192690): expected_datao=0, payload_size=3072 00:22:11.683 [2024-11-20 07:18:15.940189] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:11.683 [2024-11-20 07:18:15.940195] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:11.683 [2024-11-20 07:18:15.940198] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:11.683 [2024-11-20 07:18:15.940258] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:11.683 [2024-11-20 07:18:15.940264] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:11.683 [2024-11-20 07:18:15.940267] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:11.683 [2024-11-20 07:18:15.940270] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11f4700) on tqpair=0x1192690 00:22:11.683 [2024-11-20 07:18:15.940279] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:11.683 [2024-11-20 07:18:15.940282] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1192690) 00:22:11.683 [2024-11-20 07:18:15.940288] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.683 [2024-11-20 07:18:15.940301] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11f4700, cid 4, qid 0 00:22:11.683 [2024-11-20 
07:18:15.940409] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:11.683 [2024-11-20 07:18:15.940415] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:11.683 [2024-11-20 07:18:15.940418] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:11.683 [2024-11-20 07:18:15.940421] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1192690): datao=0, datal=8, cccid=4 00:22:11.683 [2024-11-20 07:18:15.940425] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x11f4700) on tqpair(0x1192690): expected_datao=0, payload_size=8 00:22:11.683 [2024-11-20 07:18:15.940429] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:11.683 [2024-11-20 07:18:15.940434] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:11.683 [2024-11-20 07:18:15.940438] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:11.683 [2024-11-20 07:18:15.982131] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:11.683 [2024-11-20 07:18:15.982140] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:11.683 [2024-11-20 07:18:15.982143] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:11.683 [2024-11-20 07:18:15.982146] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11f4700) on tqpair=0x1192690 00:22:11.683 ===================================================== 00:22:11.683 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:22:11.683 ===================================================== 00:22:11.683 Controller Capabilities/Features 00:22:11.683 ================================ 00:22:11.683 Vendor ID: 0000 00:22:11.683 Subsystem Vendor ID: 0000 00:22:11.683 Serial Number: .................... 00:22:11.683 Model Number: ........................................ 
00:22:11.683 Firmware Version: 25.01 00:22:11.683 Recommended Arb Burst: 0 00:22:11.683 IEEE OUI Identifier: 00 00 00 00:22:11.683 Multi-path I/O 00:22:11.683 May have multiple subsystem ports: No 00:22:11.683 May have multiple controllers: No 00:22:11.683 Associated with SR-IOV VF: No 00:22:11.683 Max Data Transfer Size: 131072 00:22:11.683 Max Number of Namespaces: 0 00:22:11.683 Max Number of I/O Queues: 1024 00:22:11.683 NVMe Specification Version (VS): 1.3 00:22:11.683 NVMe Specification Version (Identify): 1.3 00:22:11.683 Maximum Queue Entries: 128 00:22:11.683 Contiguous Queues Required: Yes 00:22:11.683 Arbitration Mechanisms Supported 00:22:11.683 Weighted Round Robin: Not Supported 00:22:11.683 Vendor Specific: Not Supported 00:22:11.683 Reset Timeout: 15000 ms 00:22:11.683 Doorbell Stride: 4 bytes 00:22:11.683 NVM Subsystem Reset: Not Supported 00:22:11.683 Command Sets Supported 00:22:11.683 NVM Command Set: Supported 00:22:11.683 Boot Partition: Not Supported 00:22:11.683 Memory Page Size Minimum: 4096 bytes 00:22:11.683 Memory Page Size Maximum: 4096 bytes 00:22:11.683 Persistent Memory Region: Not Supported 00:22:11.683 Optional Asynchronous Events Supported 00:22:11.683 Namespace Attribute Notices: Not Supported 00:22:11.683 Firmware Activation Notices: Not Supported 00:22:11.683 ANA Change Notices: Not Supported 00:22:11.683 PLE Aggregate Log Change Notices: Not Supported 00:22:11.683 LBA Status Info Alert Notices: Not Supported 00:22:11.683 EGE Aggregate Log Change Notices: Not Supported 00:22:11.683 Normal NVM Subsystem Shutdown event: Not Supported 00:22:11.683 Zone Descriptor Change Notices: Not Supported 00:22:11.683 Discovery Log Change Notices: Supported 00:22:11.683 Controller Attributes 00:22:11.683 128-bit Host Identifier: Not Supported 00:22:11.683 Non-Operational Permissive Mode: Not Supported 00:22:11.683 NVM Sets: Not Supported 00:22:11.683 Read Recovery Levels: Not Supported 00:22:11.683 Endurance Groups: Not Supported 00:22:11.683 
Predictable Latency Mode: Not Supported 00:22:11.683 Traffic Based Keep ALive: Not Supported 00:22:11.683 Namespace Granularity: Not Supported 00:22:11.683 SQ Associations: Not Supported 00:22:11.683 UUID List: Not Supported 00:22:11.683 Multi-Domain Subsystem: Not Supported 00:22:11.683 Fixed Capacity Management: Not Supported 00:22:11.683 Variable Capacity Management: Not Supported 00:22:11.683 Delete Endurance Group: Not Supported 00:22:11.683 Delete NVM Set: Not Supported 00:22:11.683 Extended LBA Formats Supported: Not Supported 00:22:11.683 Flexible Data Placement Supported: Not Supported 00:22:11.683 00:22:11.683 Controller Memory Buffer Support 00:22:11.683 ================================ 00:22:11.683 Supported: No 00:22:11.683 00:22:11.683 Persistent Memory Region Support 00:22:11.683 ================================ 00:22:11.683 Supported: No 00:22:11.683 00:22:11.683 Admin Command Set Attributes 00:22:11.683 ============================ 00:22:11.683 Security Send/Receive: Not Supported 00:22:11.683 Format NVM: Not Supported 00:22:11.683 Firmware Activate/Download: Not Supported 00:22:11.683 Namespace Management: Not Supported 00:22:11.683 Device Self-Test: Not Supported 00:22:11.683 Directives: Not Supported 00:22:11.683 NVMe-MI: Not Supported 00:22:11.683 Virtualization Management: Not Supported 00:22:11.683 Doorbell Buffer Config: Not Supported 00:22:11.683 Get LBA Status Capability: Not Supported 00:22:11.683 Command & Feature Lockdown Capability: Not Supported 00:22:11.683 Abort Command Limit: 1 00:22:11.683 Async Event Request Limit: 4 00:22:11.683 Number of Firmware Slots: N/A 00:22:11.683 Firmware Slot 1 Read-Only: N/A 00:22:11.683 Firmware Activation Without Reset: N/A 00:22:11.683 Multiple Update Detection Support: N/A 00:22:11.683 Firmware Update Granularity: No Information Provided 00:22:11.684 Per-Namespace SMART Log: No 00:22:11.684 Asymmetric Namespace Access Log Page: Not Supported 00:22:11.684 Subsystem NQN: 
nqn.2014-08.org.nvmexpress.discovery 00:22:11.684 Command Effects Log Page: Not Supported 00:22:11.684 Get Log Page Extended Data: Supported 00:22:11.684 Telemetry Log Pages: Not Supported 00:22:11.684 Persistent Event Log Pages: Not Supported 00:22:11.684 Supported Log Pages Log Page: May Support 00:22:11.684 Commands Supported & Effects Log Page: Not Supported 00:22:11.684 Feature Identifiers & Effects Log Page:May Support 00:22:11.684 NVMe-MI Commands & Effects Log Page: May Support 00:22:11.684 Data Area 4 for Telemetry Log: Not Supported 00:22:11.684 Error Log Page Entries Supported: 128 00:22:11.684 Keep Alive: Not Supported 00:22:11.684 00:22:11.684 NVM Command Set Attributes 00:22:11.684 ========================== 00:22:11.684 Submission Queue Entry Size 00:22:11.684 Max: 1 00:22:11.684 Min: 1 00:22:11.684 Completion Queue Entry Size 00:22:11.684 Max: 1 00:22:11.684 Min: 1 00:22:11.684 Number of Namespaces: 0 00:22:11.684 Compare Command: Not Supported 00:22:11.684 Write Uncorrectable Command: Not Supported 00:22:11.684 Dataset Management Command: Not Supported 00:22:11.684 Write Zeroes Command: Not Supported 00:22:11.684 Set Features Save Field: Not Supported 00:22:11.684 Reservations: Not Supported 00:22:11.684 Timestamp: Not Supported 00:22:11.684 Copy: Not Supported 00:22:11.684 Volatile Write Cache: Not Present 00:22:11.684 Atomic Write Unit (Normal): 1 00:22:11.684 Atomic Write Unit (PFail): 1 00:22:11.684 Atomic Compare & Write Unit: 1 00:22:11.684 Fused Compare & Write: Supported 00:22:11.684 Scatter-Gather List 00:22:11.684 SGL Command Set: Supported 00:22:11.684 SGL Keyed: Supported 00:22:11.684 SGL Bit Bucket Descriptor: Not Supported 00:22:11.684 SGL Metadata Pointer: Not Supported 00:22:11.684 Oversized SGL: Not Supported 00:22:11.684 SGL Metadata Address: Not Supported 00:22:11.684 SGL Offset: Supported 00:22:11.684 Transport SGL Data Block: Not Supported 00:22:11.684 Replay Protected Memory Block: Not Supported 00:22:11.684 00:22:11.684 
Firmware Slot Information 00:22:11.684 ========================= 00:22:11.684 Active slot: 0 00:22:11.684 00:22:11.684 00:22:11.684 Error Log 00:22:11.684 ========= 00:22:11.684 00:22:11.684 Active Namespaces 00:22:11.684 ================= 00:22:11.684 Discovery Log Page 00:22:11.684 ================== 00:22:11.684 Generation Counter: 2 00:22:11.684 Number of Records: 2 00:22:11.684 Record Format: 0 00:22:11.684 00:22:11.684 Discovery Log Entry 0 00:22:11.684 ---------------------- 00:22:11.684 Transport Type: 3 (TCP) 00:22:11.684 Address Family: 1 (IPv4) 00:22:11.684 Subsystem Type: 3 (Current Discovery Subsystem) 00:22:11.684 Entry Flags: 00:22:11.684 Duplicate Returned Information: 1 00:22:11.684 Explicit Persistent Connection Support for Discovery: 1 00:22:11.684 Transport Requirements: 00:22:11.684 Secure Channel: Not Required 00:22:11.684 Port ID: 0 (0x0000) 00:22:11.684 Controller ID: 65535 (0xffff) 00:22:11.684 Admin Max SQ Size: 128 00:22:11.684 Transport Service Identifier: 4420 00:22:11.684 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:22:11.684 Transport Address: 10.0.0.2 00:22:11.684 Discovery Log Entry 1 00:22:11.684 ---------------------- 00:22:11.684 Transport Type: 3 (TCP) 00:22:11.684 Address Family: 1 (IPv4) 00:22:11.684 Subsystem Type: 2 (NVM Subsystem) 00:22:11.684 Entry Flags: 00:22:11.684 Duplicate Returned Information: 0 00:22:11.684 Explicit Persistent Connection Support for Discovery: 0 00:22:11.684 Transport Requirements: 00:22:11.684 Secure Channel: Not Required 00:22:11.684 Port ID: 0 (0x0000) 00:22:11.684 Controller ID: 65535 (0xffff) 00:22:11.684 Admin Max SQ Size: 128 00:22:11.684 Transport Service Identifier: 4420 00:22:11.684 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:22:11.684 Transport Address: 10.0.0.2 [2024-11-20 07:18:15.982230] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:22:11.684 [2024-11-20 
07:18:15.982241] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11f4100) on tqpair=0x1192690 00:22:11.684 [2024-11-20 07:18:15.982247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.684 [2024-11-20 07:18:15.982252] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11f4280) on tqpair=0x1192690 00:22:11.684 [2024-11-20 07:18:15.982256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.684 [2024-11-20 07:18:15.982260] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11f4400) on tqpair=0x1192690 00:22:11.684 [2024-11-20 07:18:15.982264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.684 [2024-11-20 07:18:15.982269] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11f4580) on tqpair=0x1192690 00:22:11.684 [2024-11-20 07:18:15.982273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.684 [2024-11-20 07:18:15.982282] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:11.684 [2024-11-20 07:18:15.982286] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:11.684 [2024-11-20 07:18:15.982290] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1192690) 00:22:11.684 [2024-11-20 07:18:15.982296] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.684 [2024-11-20 07:18:15.982309] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11f4580, cid 3, qid 0 00:22:11.684 [2024-11-20 07:18:15.982375] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:11.684 [2024-11-20 
07:18:15.982381] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:11.684 [2024-11-20 07:18:15.982384] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:11.684 [2024-11-20 07:18:15.982387] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11f4580) on tqpair=0x1192690 00:22:11.684 [2024-11-20 07:18:15.982394] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:11.684 [2024-11-20 07:18:15.982397] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:11.684 [2024-11-20 07:18:15.982400] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1192690) 00:22:11.684 [2024-11-20 07:18:15.982406] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.684 [2024-11-20 07:18:15.982418] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11f4580, cid 3, qid 0 00:22:11.684 [2024-11-20 07:18:15.982526] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:11.684 [2024-11-20 07:18:15.982532] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:11.684 [2024-11-20 07:18:15.982535] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:11.684 [2024-11-20 07:18:15.982538] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11f4580) on tqpair=0x1192690 00:22:11.684 [2024-11-20 07:18:15.982543] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:22:11.684 [2024-11-20 07:18:15.982547] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:22:11.684 [2024-11-20 07:18:15.982555] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:11.684 [2024-11-20 07:18:15.982559] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:11.684 
[2024-11-20 07:18:15.982562] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1192690) 00:22:11.684 [2024-11-20 07:18:15.982567] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.684 [2024-11-20 07:18:15.982577] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11f4580, cid 3, qid 0 00:22:11.684 [2024-11-20 07:18:15.982679] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:11.684 [2024-11-20 07:18:15.982685] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:11.684 [2024-11-20 07:18:15.982688] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:11.685 [2024-11-20 07:18:15.982691] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11f4580) on tqpair=0x1192690 00:22:11.685 [2024-11-20 07:18:15.982700] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:11.685 [2024-11-20 07:18:15.982703] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:11.685 [2024-11-20 07:18:15.982706] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1192690) 00:22:11.685 [2024-11-20 07:18:15.982712] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.685 [2024-11-20 07:18:15.982722] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11f4580, cid 3, qid 0 00:22:11.685 [2024-11-20 07:18:15.982829] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:11.685 [2024-11-20 07:18:15.982834] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:11.685 [2024-11-20 07:18:15.982837] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:11.685 [2024-11-20 07:18:15.982841] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11f4580) on 
tqpair=0x1192690 00:22:11.685 [2024-11-20 07:18:15.982849] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:11.685 [2024-11-20 07:18:15.982853] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:11.685 [2024-11-20 07:18:15.982856] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1192690) 00:22:11.685 [2024-11-20 07:18:15.982861] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.685 [2024-11-20 07:18:15.982873] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11f4580, cid 3, qid 0 00:22:11.685 [2024-11-20 07:18:15.982941] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:11.685 [2024-11-20 07:18:15.982951] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:11.685 [2024-11-20 07:18:15.982955] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:11.685 [2024-11-20 07:18:15.982958] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11f4580) on tqpair=0x1192690 00:22:11.685 [2024-11-20 07:18:15.982966] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:11.685 [2024-11-20 07:18:15.982970] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:11.685 [2024-11-20 07:18:15.982973] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1192690) 00:22:11.685 [2024-11-20 07:18:15.982979] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.685 [2024-11-20 07:18:15.982988] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11f4580, cid 3, qid 0 00:22:11.685 [2024-11-20 07:18:15.983083] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:11.685 [2024-11-20 07:18:15.983089] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type 
=5 00:22:11.685 [2024-11-20 07:18:15.983092] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:11.685 [2024-11-20 07:18:15.983096] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11f4580) on tqpair=0x1192690 00:22:11.685 [2024-11-20 07:18:15.983104] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:11.685 [2024-11-20 07:18:15.983107] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:11.685 [2024-11-20 07:18:15.983110] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1192690) 00:22:11.685 [2024-11-20 07:18:15.983116] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.685 [2024-11-20 07:18:15.983126] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11f4580, cid 3, qid 0 00:22:11.685 [2024-11-20 07:18:15.983233] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:11.685 [2024-11-20 07:18:15.983238] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:11.685 [2024-11-20 07:18:15.983241] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:11.685 [2024-11-20 07:18:15.983244] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11f4580) on tqpair=0x1192690 00:22:11.685 [2024-11-20 07:18:15.983253] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:11.685 [2024-11-20 07:18:15.983256] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:11.685 [2024-11-20 07:18:15.983259] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1192690) 00:22:11.685 [2024-11-20 07:18:15.983265] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.685 [2024-11-20 07:18:15.983274] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 
0x11f4580, cid 3, qid 0 00:22:11.685 [2024-11-20 07:18:15.983383] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:11.685 [2024-11-20 07:18:15.983389] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:11.685 [2024-11-20 07:18:15.983392] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:11.685 [2024-11-20 07:18:15.983395] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11f4580) on tqpair=0x1192690 00:22:11.685 [2024-11-20 07:18:15.983403] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:11.685 [2024-11-20 07:18:15.983407] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:11.685 [2024-11-20 07:18:15.983410] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1192690) 00:22:11.685 [2024-11-20 07:18:15.983416] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.685 [2024-11-20 07:18:15.983427] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11f4580, cid 3, qid 0 00:22:11.685 [2024-11-20 07:18:15.983503] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:11.685 [2024-11-20 07:18:15.983508] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:11.685 [2024-11-20 07:18:15.983511] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:11.685 [2024-11-20 07:18:15.983515] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11f4580) on tqpair=0x1192690 00:22:11.685 [2024-11-20 07:18:15.983524] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:11.685 [2024-11-20 07:18:15.983527] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:11.685 [2024-11-20 07:18:15.983530] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1192690) 00:22:11.685 [2024-11-20 07:18:15.983536] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.685 [2024-11-20 07:18:15.983546] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11f4580, cid 3, qid 0 00:22:11.685 [2024-11-20 07:18:15.983637] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:11.685 [2024-11-20 07:18:15.983643] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:11.685 [2024-11-20 07:18:15.983646] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:11.685 [2024-11-20 07:18:15.983650] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11f4580) on tqpair=0x1192690 00:22:11.685 [2024-11-20 07:18:15.983658] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:11.685 [2024-11-20 07:18:15.983661] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:11.685 [2024-11-20 07:18:15.983665] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1192690) 00:22:11.685 [2024-11-20 07:18:15.983670] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.685 [2024-11-20 07:18:15.983680] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11f4580, cid 3, qid 0 00:22:11.685 [2024-11-20 07:18:15.983787] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:11.685 [2024-11-20 07:18:15.983792] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:11.685 [2024-11-20 07:18:15.983795] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:11.685 [2024-11-20 07:18:15.983799] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11f4580) on tqpair=0x1192690 00:22:11.685 [2024-11-20 07:18:15.983807] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:11.685 [2024-11-20 07:18:15.983810] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:11.685 [2024-11-20 07:18:15.983813] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1192690) 00:22:11.685 [2024-11-20 07:18:15.983819] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.685 [2024-11-20 07:18:15.983829] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11f4580, cid 3, qid 0 00:22:11.685 [2024-11-20 07:18:15.983938] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:11.685 [2024-11-20 07:18:15.983944] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:11.685 [2024-11-20 07:18:15.987952] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:11.685 [2024-11-20 07:18:15.987957] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11f4580) on tqpair=0x1192690 00:22:11.685 [2024-11-20 07:18:15.987966] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:11.685 [2024-11-20 07:18:15.987970] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:11.685 [2024-11-20 07:18:15.987973] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1192690) 00:22:11.685 [2024-11-20 07:18:15.987979] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.685 [2024-11-20 07:18:15.987989] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11f4580, cid 3, qid 0 00:22:11.685 [2024-11-20 07:18:15.988143] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:11.686 [2024-11-20 07:18:15.988149] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:11.686 [2024-11-20 07:18:15.988152] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:11.686 [2024-11-20 07:18:15.988155] 
nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11f4580) on tqpair=0x1192690 00:22:11.686 [2024-11-20 07:18:15.988162] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 5 milliseconds 00:22:11.686 00:22:11.686 07:18:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:22:11.686 [2024-11-20 07:18:16.025998] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 00:22:11.686 [2024-11-20 07:18:16.026041] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1272879 ] 00:22:11.686 [2024-11-20 07:18:16.072810] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:22:11.686 [2024-11-20 07:18:16.072849] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:22:11.686 [2024-11-20 07:18:16.072854] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:22:11.686 [2024-11-20 07:18:16.072867] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:22:11.686 [2024-11-20 07:18:16.072876] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:22:11.686 [2024-11-20 07:18:16.073292] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:22:11.686 [2024-11-20 07:18:16.073318] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1df8690 0 00:22:11.686 [2024-11-20 07:18:16.083958] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:22:11.686 [2024-11-20 07:18:16.083974] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:22:11.686 [2024-11-20 07:18:16.083978] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:22:11.686 [2024-11-20 07:18:16.083981] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:22:11.686 [2024-11-20 07:18:16.084007] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:11.686 [2024-11-20 07:18:16.084012] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:11.686 [2024-11-20 07:18:16.084016] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1df8690) 00:22:11.686 [2024-11-20 07:18:16.084026] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:22:11.686 [2024-11-20 07:18:16.084044] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e5a100, cid 0, qid 0 00:22:11.686 [2024-11-20 07:18:16.094959] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:11.686 [2024-11-20 07:18:16.094975] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:11.686 [2024-11-20 07:18:16.094979] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:11.686 [2024-11-20 07:18:16.094983] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e5a100) on tqpair=0x1df8690 00:22:11.686 [2024-11-20 07:18:16.094994] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:22:11.686 [2024-11-20 07:18:16.095001] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:22:11.686 [2024-11-20 07:18:16.095006] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:22:11.686 [2024-11-20 07:18:16.095020] 
nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:11.686 [2024-11-20 07:18:16.095024] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:11.686 [2024-11-20 07:18:16.095028] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1df8690) 00:22:11.686 [2024-11-20 07:18:16.095035] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.686 [2024-11-20 07:18:16.095049] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e5a100, cid 0, qid 0 00:22:11.686 [2024-11-20 07:18:16.095206] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:11.686 [2024-11-20 07:18:16.095212] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:11.686 [2024-11-20 07:18:16.095215] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:11.686 [2024-11-20 07:18:16.095219] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e5a100) on tqpair=0x1df8690 00:22:11.686 [2024-11-20 07:18:16.095223] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:22:11.686 [2024-11-20 07:18:16.095230] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:22:11.686 [2024-11-20 07:18:16.095236] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:11.686 [2024-11-20 07:18:16.095240] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:11.686 [2024-11-20 07:18:16.095243] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1df8690) 00:22:11.686 [2024-11-20 07:18:16.095249] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.686 [2024-11-20 07:18:16.095260] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e5a100, cid 0, qid 0 00:22:11.686 [2024-11-20 07:18:16.095352] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:11.686 [2024-11-20 07:18:16.095358] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:11.686 [2024-11-20 07:18:16.095361] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:11.686 [2024-11-20 07:18:16.095365] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e5a100) on tqpair=0x1df8690 00:22:11.686 [2024-11-20 07:18:16.095369] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:22:11.686 [2024-11-20 07:18:16.095376] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:22:11.686 [2024-11-20 07:18:16.095381] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:11.686 [2024-11-20 07:18:16.095385] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:11.686 [2024-11-20 07:18:16.095388] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1df8690) 00:22:11.686 [2024-11-20 07:18:16.095394] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.686 [2024-11-20 07:18:16.095404] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e5a100, cid 0, qid 0 00:22:11.686 [2024-11-20 07:18:16.095505] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:11.686 [2024-11-20 07:18:16.095511] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:11.686 [2024-11-20 07:18:16.095514] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:11.686 [2024-11-20 07:18:16.095517] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e5a100) on tqpair=0x1df8690 
00:22:11.686 [2024-11-20 07:18:16.095521] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:22:11.686 [2024-11-20 07:18:16.095529] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:11.686 [2024-11-20 07:18:16.095533] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:11.686 [2024-11-20 07:18:16.095539] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1df8690) 00:22:11.686 [2024-11-20 07:18:16.095544] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.686 [2024-11-20 07:18:16.095554] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e5a100, cid 0, qid 0 00:22:11.686 [2024-11-20 07:18:16.095618] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:11.686 [2024-11-20 07:18:16.095624] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:11.686 [2024-11-20 07:18:16.095627] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:11.686 [2024-11-20 07:18:16.095630] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e5a100) on tqpair=0x1df8690 00:22:11.686 [2024-11-20 07:18:16.095635] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:22:11.686 [2024-11-20 07:18:16.095639] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:22:11.686 [2024-11-20 07:18:16.095645] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:22:11.686 [2024-11-20 07:18:16.095752] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN 
= 1 00:22:11.686 [2024-11-20 07:18:16.095756] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:22:11.686 [2024-11-20 07:18:16.095763] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:11.686 [2024-11-20 07:18:16.095767] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:11.686 [2024-11-20 07:18:16.095769] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1df8690) 00:22:11.687 [2024-11-20 07:18:16.095775] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.687 [2024-11-20 07:18:16.095785] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e5a100, cid 0, qid 0 00:22:11.687 [2024-11-20 07:18:16.095850] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:11.687 [2024-11-20 07:18:16.095856] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:11.687 [2024-11-20 07:18:16.095859] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:11.687 [2024-11-20 07:18:16.095862] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e5a100) on tqpair=0x1df8690 00:22:11.687 [2024-11-20 07:18:16.095866] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:22:11.687 [2024-11-20 07:18:16.095874] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:11.687 [2024-11-20 07:18:16.095878] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:11.687 [2024-11-20 07:18:16.095882] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1df8690) 00:22:11.687 [2024-11-20 07:18:16.095887] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.687 [2024-11-20 07:18:16.095897] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e5a100, cid 0, qid 0 00:22:11.687 [2024-11-20 07:18:16.096002] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:11.687 [2024-11-20 07:18:16.096008] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:11.687 [2024-11-20 07:18:16.096012] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:11.687 [2024-11-20 07:18:16.096015] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e5a100) on tqpair=0x1df8690 00:22:11.687 [2024-11-20 07:18:16.096018] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:22:11.687 [2024-11-20 07:18:16.096023] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:22:11.687 [2024-11-20 07:18:16.096031] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:22:11.687 [2024-11-20 07:18:16.096038] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:22:11.687 [2024-11-20 07:18:16.096045] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:11.687 [2024-11-20 07:18:16.096048] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1df8690) 00:22:11.687 [2024-11-20 07:18:16.096054] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.687 [2024-11-20 07:18:16.096065] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e5a100, cid 0, qid 0 00:22:11.687 [2024-11-20 07:18:16.096160] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:11.687 [2024-11-20 07:18:16.096166] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:11.687 [2024-11-20 07:18:16.096169] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:11.687 [2024-11-20 07:18:16.096173] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1df8690): datao=0, datal=4096, cccid=0 00:22:11.687 [2024-11-20 07:18:16.096177] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e5a100) on tqpair(0x1df8690): expected_datao=0, payload_size=4096 00:22:11.687 [2024-11-20 07:18:16.096180] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:11.687 [2024-11-20 07:18:16.096187] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:11.687 [2024-11-20 07:18:16.096190] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:11.687 [2024-11-20 07:18:16.096203] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:11.687 [2024-11-20 07:18:16.096209] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:11.687 [2024-11-20 07:18:16.096212] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:11.687 [2024-11-20 07:18:16.096215] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e5a100) on tqpair=0x1df8690 00:22:11.687 [2024-11-20 07:18:16.096221] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:22:11.687 [2024-11-20 07:18:16.096226] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:22:11.687 [2024-11-20 07:18:16.096229] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:22:11.687 [2024-11-20 07:18:16.096237] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 
00:22:11.687 [2024-11-20 07:18:16.096241] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:22:11.687 [2024-11-20 07:18:16.096245] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:22:11.687 [2024-11-20 07:18:16.096253] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:22:11.687 [2024-11-20 07:18:16.096260] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:11.687 [2024-11-20 07:18:16.096263] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:11.687 [2024-11-20 07:18:16.096266] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1df8690) 00:22:11.687 [2024-11-20 07:18:16.096272] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:11.687 [2024-11-20 07:18:16.096282] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e5a100, cid 0, qid 0 00:22:11.687 [2024-11-20 07:18:16.096356] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:11.687 [2024-11-20 07:18:16.096362] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:11.687 [2024-11-20 07:18:16.096365] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:11.687 [2024-11-20 07:18:16.096370] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e5a100) on tqpair=0x1df8690 00:22:11.687 [2024-11-20 07:18:16.096375] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:11.687 [2024-11-20 07:18:16.096378] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:11.687 [2024-11-20 07:18:16.096382] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1df8690) 
00:22:11.687 [2024-11-20 07:18:16.096387] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:11.687 [2024-11-20 07:18:16.096392] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:11.687 [2024-11-20 07:18:16.096396] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:11.687 [2024-11-20 07:18:16.096399] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1df8690) 00:22:11.687 [2024-11-20 07:18:16.096404] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:11.687 [2024-11-20 07:18:16.096408] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:11.687 [2024-11-20 07:18:16.096412] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:11.687 [2024-11-20 07:18:16.096415] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1df8690) 00:22:11.687 [2024-11-20 07:18:16.096420] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:11.687 [2024-11-20 07:18:16.096425] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:11.687 [2024-11-20 07:18:16.096428] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:11.687 [2024-11-20 07:18:16.096431] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1df8690) 00:22:11.687 [2024-11-20 07:18:16.096436] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:11.687 [2024-11-20 07:18:16.096440] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:22:11.687 [2024-11-20 07:18:16.096448] 
nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:22:11.687 [2024-11-20 07:18:16.096454] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:11.687 [2024-11-20 07:18:16.096457] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1df8690) 00:22:11.687 [2024-11-20 07:18:16.096463] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.687 [2024-11-20 07:18:16.096474] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e5a100, cid 0, qid 0 00:22:11.687 [2024-11-20 07:18:16.096479] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e5a280, cid 1, qid 0 00:22:11.687 [2024-11-20 07:18:16.096483] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e5a400, cid 2, qid 0 00:22:11.687 [2024-11-20 07:18:16.096487] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e5a580, cid 3, qid 0 00:22:11.687 [2024-11-20 07:18:16.096491] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e5a700, cid 4, qid 0 00:22:11.687 [2024-11-20 07:18:16.096586] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:11.687 [2024-11-20 07:18:16.096592] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:11.687 [2024-11-20 07:18:16.096595] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:11.687 [2024-11-20 07:18:16.096598] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e5a700) on tqpair=0x1df8690 00:22:11.688 [2024-11-20 07:18:16.096604] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:22:11.688 [2024-11-20 07:18:16.096608] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:22:11.688 [2024-11-20 07:18:16.096618] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:22:11.688 [2024-11-20 07:18:16.096624] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:22:11.688 [2024-11-20 07:18:16.096630] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:11.688 [2024-11-20 07:18:16.096633] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:11.688 [2024-11-20 07:18:16.096636] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1df8690) 00:22:11.688 [2024-11-20 07:18:16.096641] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:11.688 [2024-11-20 07:18:16.096651] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e5a700, cid 4, qid 0 00:22:11.688 [2024-11-20 07:18:16.096757] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:11.688 [2024-11-20 07:18:16.096763] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:11.688 [2024-11-20 07:18:16.096766] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:11.688 [2024-11-20 07:18:16.096769] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e5a700) on tqpair=0x1df8690 00:22:11.688 [2024-11-20 07:18:16.096822] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:22:11.688 [2024-11-20 07:18:16.096831] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:22:11.688 [2024-11-20 
07:18:16.096838] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:11.688 [2024-11-20 07:18:16.096841] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1df8690) 00:22:11.688 [2024-11-20 07:18:16.096846] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.688 [2024-11-20 07:18:16.096856] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e5a700, cid 4, qid 0 00:22:11.688 [2024-11-20 07:18:16.096932] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:11.688 [2024-11-20 07:18:16.096938] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:11.688 [2024-11-20 07:18:16.096941] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:11.688 [2024-11-20 07:18:16.096945] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1df8690): datao=0, datal=4096, cccid=4 00:22:11.688 [2024-11-20 07:18:16.096954] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e5a700) on tqpair(0x1df8690): expected_datao=0, payload_size=4096 00:22:11.688 [2024-11-20 07:18:16.096958] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:11.688 [2024-11-20 07:18:16.096964] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:11.688 [2024-11-20 07:18:16.096967] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:11.688 [2024-11-20 07:18:16.097009] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:11.688 [2024-11-20 07:18:16.097014] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:11.688 [2024-11-20 07:18:16.097018] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:11.688 [2024-11-20 07:18:16.097021] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e5a700) on tqpair=0x1df8690 00:22:11.688 
[2024-11-20 07:18:16.097029] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:22:11.688 [2024-11-20 07:18:16.097041] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:22:11.688 [2024-11-20 07:18:16.097049] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:22:11.688 [2024-11-20 07:18:16.097055] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:11.688 [2024-11-20 07:18:16.097062] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1df8690) 00:22:11.688 [2024-11-20 07:18:16.097067] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.688 [2024-11-20 07:18:16.097077] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e5a700, cid 4, qid 0 00:22:11.688 [2024-11-20 07:18:16.097163] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:11.688 [2024-11-20 07:18:16.097169] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:11.688 [2024-11-20 07:18:16.097172] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:11.688 [2024-11-20 07:18:16.097175] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1df8690): datao=0, datal=4096, cccid=4 00:22:11.688 [2024-11-20 07:18:16.097178] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e5a700) on tqpair(0x1df8690): expected_datao=0, payload_size=4096 00:22:11.688 [2024-11-20 07:18:16.097182] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:11.688 [2024-11-20 07:18:16.097188] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:11.688 [2024-11-20 07:18:16.097191] 
nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:11.688 [2024-11-20 07:18:16.097211] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:11.688 [2024-11-20 07:18:16.097216] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:11.688 [2024-11-20 07:18:16.097219] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:11.688 [2024-11-20 07:18:16.097223] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e5a700) on tqpair=0x1df8690 00:22:11.688 [2024-11-20 07:18:16.097232] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:22:11.688 [2024-11-20 07:18:16.097241] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:22:11.688 [2024-11-20 07:18:16.097247] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:11.688 [2024-11-20 07:18:16.097250] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1df8690) 00:22:11.688 [2024-11-20 07:18:16.097256] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.688 [2024-11-20 07:18:16.097266] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e5a700, cid 4, qid 0 00:22:11.688 [2024-11-20 07:18:16.097341] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:11.688 [2024-11-20 07:18:16.097347] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:11.688 [2024-11-20 07:18:16.097350] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:11.688 [2024-11-20 07:18:16.097353] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1df8690): datao=0, datal=4096, cccid=4 00:22:11.688 
[2024-11-20 07:18:16.097357] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e5a700) on tqpair(0x1df8690): expected_datao=0, payload_size=4096 00:22:11.688 [2024-11-20 07:18:16.097361] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:11.688 [2024-11-20 07:18:16.097366] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:11.688 [2024-11-20 07:18:16.097370] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:11.688 [2024-11-20 07:18:16.097411] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:11.688 [2024-11-20 07:18:16.097416] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:11.688 [2024-11-20 07:18:16.097419] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:11.688 [2024-11-20 07:18:16.097422] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e5a700) on tqpair=0x1df8690 00:22:11.688 [2024-11-20 07:18:16.097428] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:22:11.688 [2024-11-20 07:18:16.097436] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:22:11.688 [2024-11-20 07:18:16.097445] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:22:11.688 [2024-11-20 07:18:16.097450] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:22:11.688 [2024-11-20 07:18:16.097455] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:22:11.688 [2024-11-20 07:18:16.097459] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting 
state to set host ID (timeout 30000 ms) 00:22:11.688 [2024-11-20 07:18:16.097464] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:22:11.688 [2024-11-20 07:18:16.097468] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:22:11.688 [2024-11-20 07:18:16.097473] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:22:11.688 [2024-11-20 07:18:16.097485] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:11.688 [2024-11-20 07:18:16.097489] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1df8690) 00:22:11.689 [2024-11-20 07:18:16.097495] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.689 [2024-11-20 07:18:16.097500] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:11.689 [2024-11-20 07:18:16.097504] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:11.689 [2024-11-20 07:18:16.097507] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1df8690) 00:22:11.689 [2024-11-20 07:18:16.097512] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:22:11.689 [2024-11-20 07:18:16.097524] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e5a700, cid 4, qid 0 00:22:11.689 [2024-11-20 07:18:16.097529] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e5a880, cid 5, qid 0 00:22:11.689 [2024-11-20 07:18:16.097643] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:11.689 [2024-11-20 07:18:16.097649] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:11.689 
[2024-11-20 07:18:16.097652] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:11.689 [2024-11-20 07:18:16.097655] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e5a700) on tqpair=0x1df8690 00:22:11.689 [2024-11-20 07:18:16.097660] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:11.689 [2024-11-20 07:18:16.097665] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:11.689 [2024-11-20 07:18:16.097668] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:11.689 [2024-11-20 07:18:16.097672] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e5a880) on tqpair=0x1df8690 00:22:11.689 [2024-11-20 07:18:16.097680] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:11.689 [2024-11-20 07:18:16.097683] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1df8690) 00:22:11.689 [2024-11-20 07:18:16.097689] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.689 [2024-11-20 07:18:16.097698] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e5a880, cid 5, qid 0 00:22:11.689 [2024-11-20 07:18:16.097795] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:11.689 [2024-11-20 07:18:16.097800] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:11.689 [2024-11-20 07:18:16.097804] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:11.689 [2024-11-20 07:18:16.097807] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e5a880) on tqpair=0x1df8690 00:22:11.689 [2024-11-20 07:18:16.097817] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:11.689 [2024-11-20 07:18:16.097820] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1df8690) 00:22:11.689 [2024-11-20 
07:18:16.097826] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.689 [2024-11-20 07:18:16.097835] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e5a880, cid 5, qid 0 00:22:11.689 [2024-11-20 07:18:16.097898] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:11.689 [2024-11-20 07:18:16.097904] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:11.689 [2024-11-20 07:18:16.097907] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:11.689 [2024-11-20 07:18:16.097910] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e5a880) on tqpair=0x1df8690 00:22:11.689 [2024-11-20 07:18:16.097918] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:11.689 [2024-11-20 07:18:16.097921] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1df8690) 00:22:11.689 [2024-11-20 07:18:16.097927] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.689 [2024-11-20 07:18:16.097936] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e5a880, cid 5, qid 0 00:22:11.689 [2024-11-20 07:18:16.098047] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:11.689 [2024-11-20 07:18:16.098053] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:11.689 [2024-11-20 07:18:16.098056] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:11.689 [2024-11-20 07:18:16.098059] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e5a880) on tqpair=0x1df8690 00:22:11.689 [2024-11-20 07:18:16.098072] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:11.689 [2024-11-20 07:18:16.098076] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: capsule_cmd cid=5 on tqpair(0x1df8690) 00:22:11.689 [2024-11-20 07:18:16.098081] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.689 [2024-11-20 07:18:16.098087] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:11.689 [2024-11-20 07:18:16.098091] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1df8690) 00:22:11.689 [2024-11-20 07:18:16.098096] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.689 [2024-11-20 07:18:16.098102] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:11.689 [2024-11-20 07:18:16.098105] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x1df8690) 00:22:11.689 [2024-11-20 07:18:16.098110] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.689 [2024-11-20 07:18:16.098117] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:11.689 [2024-11-20 07:18:16.098120] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1df8690) 00:22:11.689 [2024-11-20 07:18:16.098125] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.689 [2024-11-20 07:18:16.098136] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e5a880, cid 5, qid 0 00:22:11.689 [2024-11-20 07:18:16.098141] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e5a700, cid 4, qid 0 00:22:11.689 [2024-11-20 07:18:16.098145] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e5aa00, cid 6, qid 0 00:22:11.689 [2024-11-20 07:18:16.098149] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e5ab80, cid 7, qid 0 00:22:11.689 [2024-11-20 07:18:16.098287] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:11.689 [2024-11-20 07:18:16.098295] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:11.689 [2024-11-20 07:18:16.098298] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:11.689 [2024-11-20 07:18:16.098301] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1df8690): datao=0, datal=8192, cccid=5 00:22:11.689 [2024-11-20 07:18:16.098305] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e5a880) on tqpair(0x1df8690): expected_datao=0, payload_size=8192 00:22:11.689 [2024-11-20 07:18:16.098309] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:11.689 [2024-11-20 07:18:16.098359] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:11.689 [2024-11-20 07:18:16.098363] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:11.689 [2024-11-20 07:18:16.098368] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:11.689 [2024-11-20 07:18:16.098373] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:11.689 [2024-11-20 07:18:16.098375] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:11.689 [2024-11-20 07:18:16.098378] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1df8690): datao=0, datal=512, cccid=4 00:22:11.689 [2024-11-20 07:18:16.098382] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e5a700) on tqpair(0x1df8690): expected_datao=0, payload_size=512 00:22:11.689 [2024-11-20 07:18:16.098386] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:11.689 [2024-11-20 07:18:16.098392] 
nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:11.689 [2024-11-20 07:18:16.098395] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:11.689 [2024-11-20 07:18:16.098400] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:11.689 [2024-11-20 07:18:16.098404] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:11.689 [2024-11-20 07:18:16.098407] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:11.689 [2024-11-20 07:18:16.098410] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1df8690): datao=0, datal=512, cccid=6 00:22:11.690 [2024-11-20 07:18:16.098414] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e5aa00) on tqpair(0x1df8690): expected_datao=0, payload_size=512 00:22:11.690 [2024-11-20 07:18:16.098418] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:11.690 [2024-11-20 07:18:16.098423] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:11.690 [2024-11-20 07:18:16.098426] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:11.690 [2024-11-20 07:18:16.098431] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:11.690 [2024-11-20 07:18:16.098436] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:11.690 [2024-11-20 07:18:16.098439] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:11.690 [2024-11-20 07:18:16.098442] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1df8690): datao=0, datal=4096, cccid=7 00:22:11.690 [2024-11-20 07:18:16.098446] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e5ab80) on tqpair(0x1df8690): expected_datao=0, payload_size=4096 00:22:11.690 [2024-11-20 07:18:16.098450] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:11.690 [2024-11-20 07:18:16.098455] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 
00:22:11.690 [2024-11-20 07:18:16.098458] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:11.690 [2024-11-20 07:18:16.098465] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:11.690 [2024-11-20 07:18:16.098470] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:11.690 [2024-11-20 07:18:16.098473] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:11.690 [2024-11-20 07:18:16.098476] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e5a880) on tqpair=0x1df8690 00:22:11.690 [2024-11-20 07:18:16.098488] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:11.690 [2024-11-20 07:18:16.098493] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:11.690 [2024-11-20 07:18:16.098496] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:11.690 [2024-11-20 07:18:16.098501] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e5a700) on tqpair=0x1df8690 00:22:11.690 [2024-11-20 07:18:16.098509] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:11.690 [2024-11-20 07:18:16.098514] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:11.690 [2024-11-20 07:18:16.098517] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:11.690 [2024-11-20 07:18:16.098521] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e5aa00) on tqpair=0x1df8690 00:22:11.690 [2024-11-20 07:18:16.098526] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:11.690 [2024-11-20 07:18:16.098531] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:11.690 [2024-11-20 07:18:16.098534] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:11.690 [2024-11-20 07:18:16.098538] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e5ab80) on tqpair=0x1df8690 00:22:11.690 
===================================================== 00:22:11.690 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:11.690 ===================================================== 00:22:11.690 Controller Capabilities/Features 00:22:11.690 ================================ 00:22:11.690 Vendor ID: 8086 00:22:11.690 Subsystem Vendor ID: 8086 00:22:11.690 Serial Number: SPDK00000000000001 00:22:11.690 Model Number: SPDK bdev Controller 00:22:11.690 Firmware Version: 25.01 00:22:11.690 Recommended Arb Burst: 6 00:22:11.690 IEEE OUI Identifier: e4 d2 5c 00:22:11.690 Multi-path I/O 00:22:11.690 May have multiple subsystem ports: Yes 00:22:11.690 May have multiple controllers: Yes 00:22:11.690 Associated with SR-IOV VF: No 00:22:11.690 Max Data Transfer Size: 131072 00:22:11.690 Max Number of Namespaces: 32 00:22:11.690 Max Number of I/O Queues: 127 00:22:11.690 NVMe Specification Version (VS): 1.3 00:22:11.690 NVMe Specification Version (Identify): 1.3 00:22:11.690 Maximum Queue Entries: 128 00:22:11.690 Contiguous Queues Required: Yes 00:22:11.690 Arbitration Mechanisms Supported 00:22:11.690 Weighted Round Robin: Not Supported 00:22:11.690 Vendor Specific: Not Supported 00:22:11.690 Reset Timeout: 15000 ms 00:22:11.690 Doorbell Stride: 4 bytes 00:22:11.690 NVM Subsystem Reset: Not Supported 00:22:11.690 Command Sets Supported 00:22:11.690 NVM Command Set: Supported 00:22:11.690 Boot Partition: Not Supported 00:22:11.690 Memory Page Size Minimum: 4096 bytes 00:22:11.690 Memory Page Size Maximum: 4096 bytes 00:22:11.690 Persistent Memory Region: Not Supported 00:22:11.690 Optional Asynchronous Events Supported 00:22:11.690 Namespace Attribute Notices: Supported 00:22:11.690 Firmware Activation Notices: Not Supported 00:22:11.690 ANA Change Notices: Not Supported 00:22:11.690 PLE Aggregate Log Change Notices: Not Supported 00:22:11.690 LBA Status Info Alert Notices: Not Supported 00:22:11.690 EGE Aggregate Log Change Notices: Not Supported 
00:22:11.690 Normal NVM Subsystem Shutdown event: Not Supported 00:22:11.690 Zone Descriptor Change Notices: Not Supported 00:22:11.690 Discovery Log Change Notices: Not Supported 00:22:11.690 Controller Attributes 00:22:11.690 128-bit Host Identifier: Supported 00:22:11.690 Non-Operational Permissive Mode: Not Supported 00:22:11.690 NVM Sets: Not Supported 00:22:11.690 Read Recovery Levels: Not Supported 00:22:11.690 Endurance Groups: Not Supported 00:22:11.690 Predictable Latency Mode: Not Supported 00:22:11.690 Traffic Based Keep Alive: Not Supported 00:22:11.690 Namespace Granularity: Not Supported 00:22:11.690 SQ Associations: Not Supported 00:22:11.690 UUID List: Not Supported 00:22:11.690 Multi-Domain Subsystem: Not Supported 00:22:11.690 Fixed Capacity Management: Not Supported 00:22:11.690 Variable Capacity Management: Not Supported 00:22:11.690 Delete Endurance Group: Not Supported 00:22:11.690 Delete NVM Set: Not Supported 00:22:11.690 Extended LBA Formats Supported: Not Supported 00:22:11.690 Flexible Data Placement Supported: Not Supported 00:22:11.690 00:22:11.690 Controller Memory Buffer Support 00:22:11.690 ================================ 00:22:11.690 Supported: No 00:22:11.690 00:22:11.690 Persistent Memory Region Support 00:22:11.690 ================================ 00:22:11.690 Supported: No 00:22:11.690 00:22:11.690 Admin Command Set Attributes 00:22:11.690 ============================ 00:22:11.690 Security Send/Receive: Not Supported 00:22:11.690 Format NVM: Not Supported 00:22:11.690 Firmware Activate/Download: Not Supported 00:22:11.690 Namespace Management: Not Supported 00:22:11.690 Device Self-Test: Not Supported 00:22:11.690 Directives: Not Supported 00:22:11.690 NVMe-MI: Not Supported 00:22:11.690 Virtualization Management: Not Supported 00:22:11.690 Doorbell Buffer Config: Not Supported 00:22:11.690 Get LBA Status Capability: Not Supported 00:22:11.690 Command & Feature Lockdown Capability: Not Supported 00:22:11.690 Abort Command 
Limit: 4 00:22:11.690 Async Event Request Limit: 4 00:22:11.690 Number of Firmware Slots: N/A 00:22:11.690 Firmware Slot 1 Read-Only: N/A 00:22:11.690 Firmware Activation Without Reset: N/A 00:22:11.690 Multiple Update Detection Support: N/A 00:22:11.690 Firmware Update Granularity: No Information Provided 00:22:11.690 Per-Namespace SMART Log: No 00:22:11.690 Asymmetric Namespace Access Log Page: Not Supported 00:22:11.690 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:22:11.690 Command Effects Log Page: Supported 00:22:11.690 Get Log Page Extended Data: Supported 00:22:11.690 Telemetry Log Pages: Not Supported 00:22:11.690 Persistent Event Log Pages: Not Supported 00:22:11.690 Supported Log Pages Log Page: May Support 00:22:11.690 Commands Supported & Effects Log Page: Not Supported 00:22:11.690 Feature Identifiers & Effects Log Page: May Support 00:22:11.690 NVMe-MI Commands & Effects Log Page: May Support 00:22:11.690 Data Area 4 for Telemetry Log: Not Supported 00:22:11.690 Error Log Page Entries Supported: 128 00:22:11.690 Keep Alive: Supported 00:22:11.690 Keep Alive Granularity: 10000 ms 00:22:11.690 00:22:11.690 NVM Command Set Attributes 00:22:11.690 ========================== 00:22:11.690 Submission Queue Entry Size 00:22:11.690 Max: 64 00:22:11.690 Min: 64 00:22:11.691 Completion Queue Entry Size 00:22:11.691 Max: 16 00:22:11.691 Min: 16 00:22:11.691 Number of Namespaces: 32 00:22:11.691 Compare Command: Supported 00:22:11.691 Write Uncorrectable Command: Not Supported 00:22:11.691 Dataset Management Command: Supported 00:22:11.691 Write Zeroes Command: Supported 00:22:11.691 Set Features Save Field: Not Supported 00:22:11.691 Reservations: Supported 00:22:11.691 Timestamp: Not Supported 00:22:11.691 Copy: Supported 00:22:11.691 Volatile Write Cache: Present 00:22:11.691 Atomic Write Unit (Normal): 1 00:22:11.691 Atomic Write Unit (PFail): 1 00:22:11.691 Atomic Compare & Write Unit: 1 00:22:11.691 Fused Compare & Write: Supported 00:22:11.691 Scatter-Gather 
List 00:22:11.691 SGL Command Set: Supported 00:22:11.691 SGL Keyed: Supported 00:22:11.691 SGL Bit Bucket Descriptor: Not Supported 00:22:11.691 SGL Metadata Pointer: Not Supported 00:22:11.691 Oversized SGL: Not Supported 00:22:11.691 SGL Metadata Address: Not Supported 00:22:11.691 SGL Offset: Supported 00:22:11.691 Transport SGL Data Block: Not Supported 00:22:11.691 Replay Protected Memory Block: Not Supported 00:22:11.691 00:22:11.691 Firmware Slot Information 00:22:11.691 ========================= 00:22:11.691 Active slot: 1 00:22:11.691 Slot 1 Firmware Revision: 25.01 00:22:11.691 00:22:11.691 00:22:11.691 Commands Supported and Effects 00:22:11.691 ============================== 00:22:11.691 Admin Commands 00:22:11.691 -------------- 00:22:11.691 Get Log Page (02h): Supported 00:22:11.691 Identify (06h): Supported 00:22:11.691 Abort (08h): Supported 00:22:11.691 Set Features (09h): Supported 00:22:11.691 Get Features (0Ah): Supported 00:22:11.691 Asynchronous Event Request (0Ch): Supported 00:22:11.691 Keep Alive (18h): Supported 00:22:11.691 I/O Commands 00:22:11.691 ------------ 00:22:11.691 Flush (00h): Supported LBA-Change 00:22:11.691 Write (01h): Supported LBA-Change 00:22:11.691 Read (02h): Supported 00:22:11.691 Compare (05h): Supported 00:22:11.691 Write Zeroes (08h): Supported LBA-Change 00:22:11.691 Dataset Management (09h): Supported LBA-Change 00:22:11.691 Copy (19h): Supported LBA-Change 00:22:11.691 00:22:11.691 Error Log 00:22:11.691 ========= 00:22:11.691 00:22:11.691 Arbitration 00:22:11.691 =========== 00:22:11.691 Arbitration Burst: 1 00:22:11.691 00:22:11.691 Power Management 00:22:11.691 ================ 00:22:11.691 Number of Power States: 1 00:22:11.691 Current Power State: Power State #0 00:22:11.691 Power State #0: 00:22:11.691 Max Power: 0.00 W 00:22:11.691 Non-Operational State: Operational 00:22:11.691 Entry Latency: Not Reported 00:22:11.691 Exit Latency: Not Reported 00:22:11.691 Relative Read Throughput: 0 00:22:11.691 
Relative Read Latency: 0 00:22:11.691 Relative Write Throughput: 0 00:22:11.691 Relative Write Latency: 0 00:22:11.691 Idle Power: Not Reported 00:22:11.691 Active Power: Not Reported 00:22:11.691 Non-Operational Permissive Mode: Not Supported 00:22:11.691 00:22:11.691 Health Information 00:22:11.691 ================== 00:22:11.691 Critical Warnings: 00:22:11.691 Available Spare Space: OK 00:22:11.691 Temperature: OK 00:22:11.691 Device Reliability: OK 00:22:11.691 Read Only: No 00:22:11.691 Volatile Memory Backup: OK 00:22:11.691 Current Temperature: 0 Kelvin (-273 Celsius) 00:22:11.691 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:22:11.691 Available Spare: 0% 00:22:11.691 Available Spare Threshold: 0% 00:22:11.691 Life Percentage Used:[2024-11-20 07:18:16.098619] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:11.691 [2024-11-20 07:18:16.098623] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1df8690) 00:22:11.691 [2024-11-20 07:18:16.098629] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.691 [2024-11-20 07:18:16.098640] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e5ab80, cid 7, qid 0 00:22:11.691 [2024-11-20 07:18:16.098758] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:11.691 [2024-11-20 07:18:16.098764] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:11.691 [2024-11-20 07:18:16.098767] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:11.691 [2024-11-20 07:18:16.098771] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e5ab80) on tqpair=0x1df8690 00:22:11.691 [2024-11-20 07:18:16.098796] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:22:11.691 [2024-11-20 07:18:16.098804] nvme_tcp.c:1011:nvme_tcp_req_complete: 
*DEBUG*: complete tcp_req(0x1e5a100) on tqpair=0x1df8690 00:22:11.691 [2024-11-20 07:18:16.098809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.691 [2024-11-20 07:18:16.098814] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e5a280) on tqpair=0x1df8690 00:22:11.691 [2024-11-20 07:18:16.098818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.691 [2024-11-20 07:18:16.098822] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e5a400) on tqpair=0x1df8690 00:22:11.691 [2024-11-20 07:18:16.098826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.691 [2024-11-20 07:18:16.098830] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e5a580) on tqpair=0x1df8690 00:22:11.691 [2024-11-20 07:18:16.098834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.691 [2024-11-20 07:18:16.098841] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:11.691 [2024-11-20 07:18:16.098844] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:11.691 [2024-11-20 07:18:16.098847] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1df8690) 00:22:11.691 [2024-11-20 07:18:16.098853] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.691 [2024-11-20 07:18:16.098864] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e5a580, cid 3, qid 0 00:22:11.691 [2024-11-20 07:18:16.098929] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:11.691 [2024-11-20 07:18:16.098935] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: 
*DEBUG*: enter: pdu type =5 00:22:11.691 [2024-11-20 07:18:16.098938] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:11.691 [2024-11-20 07:18:16.098941] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e5a580) on tqpair=0x1df8690 00:22:11.691 [2024-11-20 07:18:16.102953] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:11.691 [2024-11-20 07:18:16.102958] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:11.691 [2024-11-20 07:18:16.102962] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1df8690) 00:22:11.691 [2024-11-20 07:18:16.102967] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.691 [2024-11-20 07:18:16.102982] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e5a580, cid 3, qid 0 00:22:11.691 [2024-11-20 07:18:16.103187] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:11.691 [2024-11-20 07:18:16.103193] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:11.691 [2024-11-20 07:18:16.103196] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:11.691 [2024-11-20 07:18:16.103200] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e5a580) on tqpair=0x1df8690 00:22:11.691 [2024-11-20 07:18:16.103204] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:22:11.691 [2024-11-20 07:18:16.103208] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:22:11.691 [2024-11-20 07:18:16.103216] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:11.691 [2024-11-20 07:18:16.103219] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:11.691 [2024-11-20 07:18:16.103223] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: capsule_cmd cid=3 on tqpair(0x1df8690) 00:22:11.691 [2024-11-20 07:18:16.103228] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.691 [2024-11-20 07:18:16.103238] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e5a580, cid 3, qid 0 00:22:11.691 [2024-11-20 07:18:16.103299] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:11.691 [2024-11-20 07:18:16.103305] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:11.692 [2024-11-20 07:18:16.103308] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:11.692 [2024-11-20 07:18:16.103312] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e5a580) on tqpair=0x1df8690 00:22:11.692 [2024-11-20 07:18:16.103320] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:11.692 [2024-11-20 07:18:16.103324] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:11.692 [2024-11-20 07:18:16.103327] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1df8690) 00:22:11.692 [2024-11-20 07:18:16.103332] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.692 [2024-11-20 07:18:16.103342] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e5a580, cid 3, qid 0 00:22:11.692 [2024-11-20 07:18:16.103436] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:11.692 [2024-11-20 07:18:16.103442] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:11.692 [2024-11-20 07:18:16.103445] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:11.692 [2024-11-20 07:18:16.103448] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e5a580) on tqpair=0x1df8690 00:22:11.692 [2024-11-20 07:18:16.103456] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:11.692 [2024-11-20 07:18:16.103460] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:11.692 [2024-11-20 07:18:16.103463] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1df8690) 00:22:11.692 [2024-11-20 07:18:16.103469] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.692 [2024-11-20 07:18:16.103478] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e5a580, cid 3, qid 0 00:22:11.692 [2024-11-20 07:18:16.103588] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:11.692 [2024-11-20 07:18:16.103594] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:11.692 [2024-11-20 07:18:16.103600] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:11.692 [2024-11-20 07:18:16.103604] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e5a580) on tqpair=0x1df8690 00:22:11.692 [2024-11-20 07:18:16.103612] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:11.692 [2024-11-20 07:18:16.103615] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:11.692 [2024-11-20 07:18:16.103619] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1df8690) 00:22:11.692 [2024-11-20 07:18:16.103624] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.692 [2024-11-20 07:18:16.103633] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e5a580, cid 3, qid 0 00:22:11.692 [2024-11-20 07:18:16.103739] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:11.692 [2024-11-20 07:18:16.103745] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:11.692 [2024-11-20 07:18:16.103747] 
nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:11.692 [2024-11-20 07:18:16.103751] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e5a580) on tqpair=0x1df8690 00:22:11.692 [2024-11-20 07:18:16.103759] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:11.692 [2024-11-20 07:18:16.103763] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:11.692 [2024-11-20 07:18:16.103766] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1df8690) 00:22:11.692 [2024-11-20 07:18:16.103771] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.692 [2024-11-20 07:18:16.103781] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e5a580, cid 3, qid 0 00:22:11.692 [2024-11-20 07:18:16.103857] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:11.692 [2024-11-20 07:18:16.103862] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:11.692 [2024-11-20 07:18:16.103865] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:11.692 [2024-11-20 07:18:16.103869] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e5a580) on tqpair=0x1df8690 00:22:11.692 [2024-11-20 07:18:16.103878] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:11.692 [2024-11-20 07:18:16.103881] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:11.692 [2024-11-20 07:18:16.103884] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1df8690) 00:22:11.692 [2024-11-20 07:18:16.103890] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.692 [2024-11-20 07:18:16.103900] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e5a580, cid 3, qid 0 00:22:11.692 [2024-11-20 
07:18:16.103992] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:11.692 [2024-11-20 07:18:16.103998] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:11.692 [2024-11-20 07:18:16.104001] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:11.692 [2024-11-20 07:18:16.104005] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e5a580) on tqpair=0x1df8690 00:22:11.692 [2024-11-20 07:18:16.104013] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:11.692 [2024-11-20 07:18:16.104016] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:11.692 [2024-11-20 07:18:16.104020] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1df8690) 00:22:11.692 [2024-11-20 07:18:16.104025] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.692 [2024-11-20 07:18:16.104035] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e5a580, cid 3, qid 0 00:22:11.692 [2024-11-20 07:18:16.104143] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:11.692 [2024-11-20 07:18:16.104149] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:11.692 [2024-11-20 07:18:16.104152] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:11.692 [2024-11-20 07:18:16.104157] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e5a580) on tqpair=0x1df8690 00:22:11.692 [2024-11-20 07:18:16.104165] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:11.692 [2024-11-20 07:18:16.104169] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:11.692 [2024-11-20 07:18:16.104172] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1df8690) 00:22:11.692 [2024-11-20 07:18:16.104178] nvme_qpair.c: 218:nvme_admin_qpair_print_command: 
*NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.692 [2024-11-20 07:18:16.104187] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e5a580, cid 3, qid 0 00:22:11.692 [2024-11-20 07:18:16.104293] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:11.692 [2024-11-20 07:18:16.104299] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:11.692 [2024-11-20 07:18:16.104302] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:11.692 [2024-11-20 07:18:16.104305] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e5a580) on tqpair=0x1df8690 00:22:11.692 [2024-11-20 07:18:16.104313] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:11.692 [2024-11-20 07:18:16.104317] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:11.692 [2024-11-20 07:18:16.104320] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1df8690) 00:22:11.692 [2024-11-20 07:18:16.104325] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.692 [2024-11-20 07:18:16.104335] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e5a580, cid 3, qid 0 00:22:11.692 [2024-11-20 07:18:16.104398] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:11.692 [2024-11-20 07:18:16.104404] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:11.692 [2024-11-20 07:18:16.104407] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:11.692 [2024-11-20 07:18:16.104410] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e5a580) on tqpair=0x1df8690 00:22:11.692 [2024-11-20 07:18:16.104418] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:11.692 [2024-11-20 07:18:16.104422] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
00:22:11.692 [2024-11-20 07:18:16.104425] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1df8690) 00:22:11.692 [2024-11-20 07:18:16.104430] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.692 [2024-11-20 07:18:16.104440] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e5a580, cid 3, qid 0 00:22:11.692 [2024-11-20 07:18:16.104545] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:11.692 [2024-11-20 07:18:16.104551] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:11.692 [2024-11-20 07:18:16.104554] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:11.692 [2024-11-20 07:18:16.104557] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e5a580) on tqpair=0x1df8690 00:22:11.692 [2024-11-20 07:18:16.104565] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:11.692 [2024-11-20 07:18:16.104569] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:11.692 [2024-11-20 07:18:16.104572] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1df8690) 00:22:11.692 [2024-11-20 07:18:16.104577] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.692 [2024-11-20 07:18:16.104586] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e5a580, cid 3, qid 0 00:22:11.693 [2024-11-20 07:18:16.104697] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:11.693 [2024-11-20 07:18:16.104702] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:11.693 [2024-11-20 07:18:16.104705] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:11.693 [2024-11-20 07:18:16.104708] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e5a580) 
on tqpair=0x1df8690 00:22:11.693 [2024-11-20 07:18:16.104718] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:11.693 [2024-11-20 07:18:16.104722] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:11.693 [2024-11-20 07:18:16.104725] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1df8690) 00:22:11.693 [2024-11-20 07:18:16.104730] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.693 [2024-11-20 07:18:16.104740] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e5a580, cid 3, qid 0 00:22:11.693 [2024-11-20 07:18:16.104848] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:11.693 [2024-11-20 07:18:16.104854] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:11.693 [2024-11-20 07:18:16.104857] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:11.693 [2024-11-20 07:18:16.104860] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e5a580) on tqpair=0x1df8690 00:22:11.693 [2024-11-20 07:18:16.104868] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:11.693 [2024-11-20 07:18:16.104872] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:11.693 [2024-11-20 07:18:16.104875] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1df8690) 00:22:11.693 [2024-11-20 07:18:16.104880] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.693 [2024-11-20 07:18:16.104890] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e5a580, cid 3, qid 0 00:22:11.693 [2024-11-20 07:18:16.104958] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:11.693 [2024-11-20 07:18:16.104964] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu 
type =5 00:22:11.693 [2024-11-20 07:18:16.104967] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:11.693 [2024-11-20 07:18:16.104970] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e5a580) on tqpair=0x1df8690 00:22:11.693 [2024-11-20 07:18:16.104979] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:11.693 [2024-11-20 07:18:16.104983] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:11.693 [2024-11-20 07:18:16.104986] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1df8690) 00:22:11.693 [2024-11-20 07:18:16.104991] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.693 [2024-11-20 07:18:16.105001] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e5a580, cid 3, qid 0 00:22:11.693 [2024-11-20 07:18:16.105100] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:11.693 [2024-11-20 07:18:16.105106] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:11.693 [2024-11-20 07:18:16.105109] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:11.693 [2024-11-20 07:18:16.105112] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e5a580) on tqpair=0x1df8690 00:22:11.693 [2024-11-20 07:18:16.105120] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:11.693 [2024-11-20 07:18:16.105124] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:11.693 [2024-11-20 07:18:16.105127] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1df8690) 00:22:11.693 [2024-11-20 07:18:16.105133] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.693 [2024-11-20 07:18:16.105142] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 
0x1e5a580, cid 3, qid 0 00:22:11.693 [2024-11-20 07:18:16.105251] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:11.693 [2024-11-20 07:18:16.105256] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:11.693 [2024-11-20 07:18:16.105259] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:11.693 [2024-11-20 07:18:16.105262] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e5a580) on tqpair=0x1df8690 00:22:11.693 [2024-11-20 07:18:16.105270] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:11.693 [2024-11-20 07:18:16.105276] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:11.693 [2024-11-20 07:18:16.105279] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1df8690) 00:22:11.693 [2024-11-20 07:18:16.105284] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.693 [2024-11-20 07:18:16.105294] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e5a580, cid 3, qid 0 00:22:11.693 [2024-11-20 07:18:16.105352] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:11.693 [2024-11-20 07:18:16.105358] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:11.693 [2024-11-20 07:18:16.105361] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:11.693 [2024-11-20 07:18:16.105364] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e5a580) on tqpair=0x1df8690 00:22:11.693 [2024-11-20 07:18:16.105372] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:11.693 [2024-11-20 07:18:16.105376] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:11.693 [2024-11-20 07:18:16.105379] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1df8690) 00:22:11.693 [2024-11-20 07:18:16.105384] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.693 [2024-11-20 07:18:16.105394] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e5a580, cid 3, qid 0 00:22:11.693 [2024-11-20 07:18:16.105452] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:11.693 [2024-11-20 07:18:16.105458] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:11.693 [2024-11-20 07:18:16.105461] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:11.693 [2024-11-20 07:18:16.105464] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e5a580) on tqpair=0x1df8690 00:22:11.693 [2024-11-20 07:18:16.105473] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:11.693 [2024-11-20 07:18:16.105476] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:11.693 [2024-11-20 07:18:16.105479] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1df8690) 00:22:11.693 [2024-11-20 07:18:16.105485] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.693 [2024-11-20 07:18:16.105494] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e5a580, cid 3, qid 0 00:22:11.693 [2024-11-20 07:18:16.105605] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:11.693 [2024-11-20 07:18:16.105611] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:11.693 [2024-11-20 07:18:16.105614] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:11.693 [2024-11-20 07:18:16.105617] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e5a580) on tqpair=0x1df8690 00:22:11.693 [2024-11-20 07:18:16.105625] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:11.693 [2024-11-20 07:18:16.105629] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:11.693 [2024-11-20 07:18:16.105632] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1df8690) 00:22:11.693 [2024-11-20 07:18:16.105637] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.693 [2024-11-20 07:18:16.105647] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e5a580, cid 3, qid 0 00:22:11.693 [2024-11-20 07:18:16.105754] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:11.693 [2024-11-20 07:18:16.105760] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:11.693 [2024-11-20 07:18:16.105763] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:11.693 [2024-11-20 07:18:16.105766] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e5a580) on tqpair=0x1df8690 00:22:11.693 [2024-11-20 07:18:16.105774] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:11.693 [2024-11-20 07:18:16.105778] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:11.693 [2024-11-20 07:18:16.105781] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1df8690) 00:22:11.693 [2024-11-20 07:18:16.105788] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.693 [2024-11-20 07:18:16.105797] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e5a580, cid 3, qid 0 00:22:11.693 [2024-11-20 07:18:16.105906] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:11.693 [2024-11-20 07:18:16.105912] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:11.693 [2024-11-20 07:18:16.105915] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:11.693 [2024-11-20 07:18:16.105918] 
nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e5a580) on tqpair=0x1df8690 00:22:11.693 [2024-11-20 07:18:16.105926] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:11.693 [2024-11-20 07:18:16.105930] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:11.693 [2024-11-20 07:18:16.105933] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1df8690) 00:22:11.693 [2024-11-20 07:18:16.105938] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.694 [2024-11-20 07:18:16.105951] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e5a580, cid 3, qid 0 00:22:11.694 [2024-11-20 07:18:16.106015] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:11.694 [2024-11-20 07:18:16.106020] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:11.694 [2024-11-20 07:18:16.106024] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:11.694 [2024-11-20 07:18:16.106027] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e5a580) on tqpair=0x1df8690 00:22:11.694 [2024-11-20 07:18:16.106036] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:11.694 [2024-11-20 07:18:16.106039] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:11.694 [2024-11-20 07:18:16.106042] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1df8690) 00:22:11.694 [2024-11-20 07:18:16.106048] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.694 [2024-11-20 07:18:16.106058] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e5a580, cid 3, qid 0 00:22:11.694 [2024-11-20 07:18:16.106159] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:11.694 [2024-11-20 
07:18:16.106164] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:11.694 [2024-11-20 07:18:16.106167] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:11.694 [2024-11-20 07:18:16.106171] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e5a580) on tqpair=0x1df8690 00:22:11.694 [2024-11-20 07:18:16.106179] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:11.694 [2024-11-20 07:18:16.106183] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:11.694 [2024-11-20 07:18:16.106186] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1df8690) 00:22:11.694 [2024-11-20 07:18:16.106191] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.694 [2024-11-20 07:18:16.106200] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e5a580, cid 3, qid 0 00:22:11.694 [2024-11-20 07:18:16.106311] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:11.694 [2024-11-20 07:18:16.106316] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:11.694 [2024-11-20 07:18:16.106319] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:11.694 [2024-11-20 07:18:16.106322] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e5a580) on tqpair=0x1df8690 00:22:11.694 [2024-11-20 07:18:16.106331] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:11.694 [2024-11-20 07:18:16.106334] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:11.694 [2024-11-20 07:18:16.106337] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1df8690) 00:22:11.694 [2024-11-20 07:18:16.106344] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.694 [2024-11-20 
07:18:16.106354] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e5a580, cid 3, qid 0 00:22:11.694 [2024-11-20 07:18:16.106464] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:11.694 [2024-11-20 07:18:16.106470] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:11.694 [2024-11-20 07:18:16.106473] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:11.694 [2024-11-20 07:18:16.106476] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e5a580) on tqpair=0x1df8690 00:22:11.694 [2024-11-20 07:18:16.106484] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:11.694 [2024-11-20 07:18:16.106487] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:11.694 [2024-11-20 07:18:16.106491] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1df8690) 00:22:11.694 [2024-11-20 07:18:16.106496] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.694 [2024-11-20 07:18:16.106506] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e5a580, cid 3, qid 0 00:22:11.694 [2024-11-20 07:18:16.106568] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:11.694 [2024-11-20 07:18:16.106573] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:11.694 [2024-11-20 07:18:16.106576] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:11.694 [2024-11-20 07:18:16.106580] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e5a580) on tqpair=0x1df8690 00:22:11.694 [2024-11-20 07:18:16.106588] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:11.694 [2024-11-20 07:18:16.106591] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:11.694 [2024-11-20 07:18:16.106594] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=3 on tqpair(0x1df8690) 00:22:11.694 [2024-11-20 07:18:16.106600] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.694 [2024-11-20 07:18:16.106609] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e5a580, cid 3, qid 0 00:22:11.694 [2024-11-20 07:18:16.106714] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:11.694 [2024-11-20 07:18:16.106720] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:11.694 [2024-11-20 07:18:16.106722] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:11.694 [2024-11-20 07:18:16.106726] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e5a580) on tqpair=0x1df8690 00:22:11.694 [2024-11-20 07:18:16.106734] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:11.694 [2024-11-20 07:18:16.106737] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:11.694 [2024-11-20 07:18:16.106740] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1df8690) 00:22:11.694 [2024-11-20 07:18:16.106746] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.694 [2024-11-20 07:18:16.106755] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e5a580, cid 3, qid 0 00:22:11.694 [2024-11-20 07:18:16.106864] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:11.694 [2024-11-20 07:18:16.106870] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:11.694 [2024-11-20 07:18:16.106873] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:11.694 [2024-11-20 07:18:16.106876] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e5a580) on tqpair=0x1df8690 00:22:11.694 [2024-11-20 07:18:16.106884] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:11.694 [2024-11-20 07:18:16.106888] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:11.694 [2024-11-20 07:18:16.106891] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1df8690) 00:22:11.694 [2024-11-20 07:18:16.106897] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.694 [2024-11-20 07:18:16.106908] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e5a580, cid 3, qid 0 00:22:11.694 [2024-11-20 07:18:16.110955] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:11.694 [2024-11-20 07:18:16.110962] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:11.694 [2024-11-20 07:18:16.110965] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:11.694 [2024-11-20 07:18:16.110969] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e5a580) on tqpair=0x1df8690 00:22:11.694 [2024-11-20 07:18:16.110977] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:11.694 [2024-11-20 07:18:16.110981] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:11.694 [2024-11-20 07:18:16.110984] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1df8690) 00:22:11.694 [2024-11-20 07:18:16.110990] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.694 [2024-11-20 07:18:16.111000] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e5a580, cid 3, qid 0 00:22:11.694 [2024-11-20 07:18:16.111148] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:11.694 [2024-11-20 07:18:16.111154] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:11.694 [2024-11-20 07:18:16.111157] 
nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:11.694 [2024-11-20 07:18:16.111160] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e5a580) on tqpair=0x1df8690 00:22:11.694 [2024-11-20 07:18:16.111166] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 7 milliseconds 00:22:11.694 0% 00:22:11.694 Data Units Read: 0 00:22:11.695 Data Units Written: 0 00:22:11.695 Host Read Commands: 0 00:22:11.695 Host Write Commands: 0 00:22:11.695 Controller Busy Time: 0 minutes 00:22:11.695 Power Cycles: 0 00:22:11.695 Power On Hours: 0 hours 00:22:11.695 Unsafe Shutdowns: 0 00:22:11.695 Unrecoverable Media Errors: 0 00:22:11.695 Lifetime Error Log Entries: 0 00:22:11.695 Warning Temperature Time: 0 minutes 00:22:11.695 Critical Temperature Time: 0 minutes 00:22:11.695 00:22:11.695 Number of Queues 00:22:11.695 ================ 00:22:11.695 Number of I/O Submission Queues: 127 00:22:11.695 Number of I/O Completion Queues: 127 00:22:11.695 00:22:11.695 Active Namespaces 00:22:11.695 ================= 00:22:11.695 Namespace ID:1 00:22:11.695 Error Recovery Timeout: Unlimited 00:22:11.695 Command Set Identifier: NVM (00h) 00:22:11.695 Deallocate: Supported 00:22:11.695 Deallocated/Unwritten Error: Not Supported 00:22:11.695 Deallocated Read Value: Unknown 00:22:11.695 Deallocate in Write Zeroes: Not Supported 00:22:11.695 Deallocated Guard Field: 0xFFFF 00:22:11.695 Flush: Supported 00:22:11.695 Reservation: Supported 00:22:11.695 Namespace Sharing Capabilities: Multiple Controllers 00:22:11.695 Size (in LBAs): 131072 (0GiB) 00:22:11.695 Capacity (in LBAs): 131072 (0GiB) 00:22:11.695 Utilization (in LBAs): 131072 (0GiB) 00:22:11.695 NGUID: ABCDEF0123456789ABCDEF0123456789 00:22:11.695 EUI64: ABCDEF0123456789 00:22:11.695 UUID: cf11e2c8-92d0-46bd-afdb-9e6e0a6bd30a 00:22:11.695 Thin Provisioning: Not Supported 00:22:11.695 Per-NS Atomic Units: Yes 00:22:11.695 Atomic Boundary Size 
(Normal): 0 00:22:11.695 Atomic Boundary Size (PFail): 0 00:22:11.695 Atomic Boundary Offset: 0 00:22:11.695 Maximum Single Source Range Length: 65535 00:22:11.695 Maximum Copy Length: 65535 00:22:11.695 Maximum Source Range Count: 1 00:22:11.695 NGUID/EUI64 Never Reused: No 00:22:11.695 Namespace Write Protected: No 00:22:11.695 Number of LBA Formats: 1 00:22:11.695 Current LBA Format: LBA Format #00 00:22:11.695 LBA Format #00: Data Size: 512 Metadata Size: 0 00:22:11.695 00:22:11.695 07:18:16 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:22:11.695 07:18:16 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:11.695 07:18:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:11.695 07:18:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:11.695 07:18:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:11.695 07:18:16 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:22:11.695 07:18:16 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:22:11.695 07:18:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:11.695 07:18:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:22:11.695 07:18:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:11.695 07:18:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:22:11.695 07:18:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:11.695 07:18:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:11.695 rmmod nvme_tcp 00:22:11.695 rmmod nvme_fabrics 00:22:11.695 rmmod nvme_keyring 00:22:11.695 07:18:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r 
nvme-fabrics 00:22:11.695 07:18:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:22:11.695 07:18:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:22:11.695 07:18:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 1272843 ']' 00:22:11.695 07:18:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 1272843 00:22:11.695 07:18:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@952 -- # '[' -z 1272843 ']' 00:22:11.695 07:18:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # kill -0 1272843 00:22:11.695 07:18:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@957 -- # uname 00:22:11.695 07:18:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:11.695 07:18:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1272843 00:22:11.954 07:18:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:22:11.954 07:18:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:22:11.954 07:18:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1272843' 00:22:11.954 killing process with pid 1272843 00:22:11.954 07:18:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@971 -- # kill 1272843 00:22:11.954 07:18:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@976 -- # wait 1272843 00:22:11.955 07:18:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:11.955 07:18:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:11.955 07:18:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:11.955 07:18:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:22:11.955 07:18:16 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:22:11.955 07:18:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:11.955 07:18:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:22:11.955 07:18:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:11.955 07:18:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:11.955 07:18:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:11.955 07:18:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:11.955 07:18:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:14.493 07:18:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:14.493 00:22:14.493 real 0m9.254s 00:22:14.493 user 0m5.329s 00:22:14.493 sys 0m4.825s 00:22:14.493 07:18:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1128 -- # xtrace_disable 00:22:14.493 07:18:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:14.493 ************************************ 00:22:14.493 END TEST nvmf_identify 00:22:14.493 ************************************ 00:22:14.493 07:18:18 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:22:14.493 07:18:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:22:14.493 07:18:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:22:14.493 07:18:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:14.493 ************************************ 00:22:14.493 START TEST nvmf_perf 00:22:14.493 ************************************ 
00:22:14.493 07:18:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:22:14.493 * Looking for test storage... 00:22:14.493 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:14.493 07:18:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:22:14.493 07:18:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1691 -- # lcov --version 00:22:14.493 07:18:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:22:14.493 07:18:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:22:14.493 07:18:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:14.493 07:18:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:14.493 07:18:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:14.493 07:18:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:22:14.493 07:18:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:22:14.493 07:18:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:22:14.493 07:18:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:22:14.493 07:18:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:22:14.493 07:18:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:22:14.493 07:18:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:22:14.493 07:18:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:14.493 07:18:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:22:14.493 07:18:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:22:14.493 07:18:18 
nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:14.493 07:18:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:14.493 07:18:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:22:14.493 07:18:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:22:14.493 07:18:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:14.493 07:18:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:22:14.493 07:18:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:22:14.493 07:18:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:22:14.493 07:18:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:22:14.493 07:18:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:14.493 07:18:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:22:14.493 07:18:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:22:14.493 07:18:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:14.493 07:18:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:14.493 07:18:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:22:14.493 07:18:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:14.493 07:18:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:22:14.493 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:14.493 --rc genhtml_branch_coverage=1 00:22:14.493 --rc genhtml_function_coverage=1 00:22:14.493 --rc genhtml_legend=1 00:22:14.493 --rc geninfo_all_blocks=1 00:22:14.493 --rc geninfo_unexecuted_blocks=1 00:22:14.493 00:22:14.494 
' 00:22:14.494 07:18:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:22:14.494 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:14.494 --rc genhtml_branch_coverage=1 00:22:14.494 --rc genhtml_function_coverage=1 00:22:14.494 --rc genhtml_legend=1 00:22:14.494 --rc geninfo_all_blocks=1 00:22:14.494 --rc geninfo_unexecuted_blocks=1 00:22:14.494 00:22:14.494 ' 00:22:14.494 07:18:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:22:14.494 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:14.494 --rc genhtml_branch_coverage=1 00:22:14.494 --rc genhtml_function_coverage=1 00:22:14.494 --rc genhtml_legend=1 00:22:14.494 --rc geninfo_all_blocks=1 00:22:14.494 --rc geninfo_unexecuted_blocks=1 00:22:14.494 00:22:14.494 ' 00:22:14.494 07:18:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:22:14.494 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:14.494 --rc genhtml_branch_coverage=1 00:22:14.494 --rc genhtml_function_coverage=1 00:22:14.494 --rc genhtml_legend=1 00:22:14.494 --rc geninfo_all_blocks=1 00:22:14.494 --rc geninfo_unexecuted_blocks=1 00:22:14.494 00:22:14.494 ' 00:22:14.494 07:18:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:14.494 07:18:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:22:14.494 07:18:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:14.494 07:18:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:14.494 07:18:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:14.494 07:18:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:14.494 07:18:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:14.494 07:18:18 
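The scripts/common.sh trace above (`decimal 1` / `decimal 2`, `ver1[v]` vs `ver2[v]`, then `return 0`) walks a component-wise comparison of dotted version strings. A minimal standalone sketch of that logic — the helper name `version_lt` is hypothetical, and unlike the real script this sketch does no digit validation:

```shell
#!/usr/bin/env bash
# Component-wise dotted-version compare, as traced in scripts/common.sh:
# split both versions on '.', walk the longer of the two component lists,
# and decide on the first differing component (missing components are 0).
version_lt() {
  local IFS=.
  local -a ver1=($1) ver2=($2)
  local v len=${#ver1[@]}
  (( ${#ver2[@]} > len )) && len=${#ver2[@]}
  for (( v = 0; v < len; v++ )); do
    (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
    (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
  done
  return 1  # all components equal
}

version_lt 1.14 2.0 && echo "1.14 < 2.0"
```

In the run above this is what lets the harness pick lcov-version-appropriate `--rc lcov_branch_coverage` style options before exporting `LCOV_OPTS`.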
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:14.494 07:18:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:14.494 07:18:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:14.494 07:18:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:14.494 07:18:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:14.494 07:18:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:14.494 07:18:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:14.494 07:18:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:14.494 07:18:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:14.494 07:18:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:14.494 07:18:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:14.494 07:18:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:14.494 07:18:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:22:14.494 07:18:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:14.494 07:18:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:14.494 07:18:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:14.494 07:18:18 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:14.494 07:18:18 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:14.494 07:18:18 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:14.494 07:18:18 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export 
PATH 00:22:14.494 07:18:18 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:14.494 07:18:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:22:14.494 07:18:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:14.494 07:18:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:14.494 07:18:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:14.494 07:18:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:14.494 07:18:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:14.494 07:18:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:14.494 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:14.494 07:18:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:14.494 07:18:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:14.494 07:18:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:14.494 07:18:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:22:14.494 07:18:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:22:14.494 07:18:18 
nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:14.494 07:18:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:22:14.494 07:18:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:14.494 07:18:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:14.494 07:18:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:14.494 07:18:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:14.494 07:18:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:14.494 07:18:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:14.494 07:18:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:14.494 07:18:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:14.494 07:18:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:14.495 07:18:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:14.495 07:18:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable 00:22:14.495 07:18:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:21.065 07:18:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:21.065 07:18:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:22:21.065 07:18:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:21.065 07:18:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:21.065 07:18:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:21.065 07:18:24 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:21.065 07:18:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:21.065 07:18:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:22:21.065 07:18:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:21.066 07:18:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:22:21.066 07:18:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:22:21.066 07:18:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:22:21.066 07:18:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:22:21.066 07:18:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:22:21.066 07:18:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:22:21.066 07:18:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:21.066 07:18:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:21.066 07:18:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:21.066 07:18:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:21.066 07:18:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:21.066 07:18:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:21.066 07:18:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:21.066 07:18:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:21.066 07:18:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:21.066 
07:18:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:21.066 07:18:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:21.066 07:18:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:21.066 07:18:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:21.066 07:18:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:21.066 07:18:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:21.066 07:18:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:21.066 07:18:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:21.066 07:18:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:21.066 07:18:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:21.066 07:18:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:21.066 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:21.066 07:18:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:21.066 07:18:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:21.066 07:18:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:21.066 07:18:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:21.066 07:18:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:21.066 07:18:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:21.066 07:18:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:21.066 Found 0000:86:00.1 (0x8086 - 
0x159b) 00:22:21.066 07:18:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:21.066 07:18:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:21.066 07:18:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:21.066 07:18:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:21.066 07:18:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:21.066 07:18:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:21.066 07:18:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:21.066 07:18:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:21.066 07:18:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:21.066 07:18:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:21.066 07:18:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:21.066 07:18:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:21.066 07:18:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:21.066 07:18:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:21.066 07:18:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:21.066 07:18:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:21.066 Found net devices under 0000:86:00.0: cvl_0_0 00:22:21.066 07:18:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:21.066 07:18:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:21.066 07:18:24 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:21.066 07:18:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:21.066 07:18:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:21.066 07:18:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:21.066 07:18:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:21.066 07:18:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:21.066 07:18:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:21.066 Found net devices under 0000:86:00.1: cvl_0_1 00:22:21.066 07:18:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:21.066 07:18:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:21.066 07:18:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # is_hw=yes 00:22:21.066 07:18:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:21.066 07:18:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:21.066 07:18:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:21.066 07:18:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:21.066 07:18:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:21.066 07:18:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:21.066 07:18:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:21.066 07:18:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:21.066 07:18:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # 
NVMF_TARGET_INTERFACE=cvl_0_0 00:22:21.066 07:18:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:21.066 07:18:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:21.066 07:18:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:21.066 07:18:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:21.066 07:18:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:21.066 07:18:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:21.066 07:18:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:21.066 07:18:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:21.066 07:18:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:21.066 07:18:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:21.066 07:18:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:21.066 07:18:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:21.066 07:18:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:21.066 07:18:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:21.066 07:18:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:21.066 07:18:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT' 00:22:21.066 07:18:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:21.066 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:21.066 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.356 ms 00:22:21.066 00:22:21.066 --- 10.0.0.2 ping statistics --- 00:22:21.066 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:21.066 rtt min/avg/max/mdev = 0.356/0.356/0.356/0.000 ms 00:22:21.066 07:18:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:21.066 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:21.066 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.218 ms 00:22:21.066 00:22:21.067 --- 10.0.0.1 ping statistics --- 00:22:21.067 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:21.067 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:22:21.067 07:18:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:21.067 07:18:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # return 0 00:22:21.067 07:18:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:21.067 07:18:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:21.067 07:18:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:21.067 07:18:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:21.067 07:18:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:21.067 07:18:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:21.067 07:18:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:21.067 07:18:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:22:21.067 07:18:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter 
start_nvmf_tgt 00:22:21.067 07:18:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:21.067 07:18:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:21.067 07:18:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=1276404 00:22:21.067 07:18:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 1276404 00:22:21.067 07:18:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:21.067 07:18:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@833 -- # '[' -z 1276404 ']' 00:22:21.067 07:18:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:21.067 07:18:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:21.067 07:18:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:21.067 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:21.067 07:18:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:21.067 07:18:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:21.067 [2024-11-20 07:18:24.747336] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 
00:22:21.067 [2024-11-20 07:18:24.747382] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:21.067 [2024-11-20 07:18:24.819179] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:21.067 [2024-11-20 07:18:24.872216] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:21.067 [2024-11-20 07:18:24.872261] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:21.067 [2024-11-20 07:18:24.872271] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:21.067 [2024-11-20 07:18:24.872278] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:21.067 [2024-11-20 07:18:24.872284] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:21.067 [2024-11-20 07:18:24.874353] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:21.067 [2024-11-20 07:18:24.874465] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:21.067 [2024-11-20 07:18:24.874573] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:21.067 [2024-11-20 07:18:24.874575] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:21.067 07:18:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:21.067 07:18:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@866 -- # return 0 00:22:21.067 07:18:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:21.067 07:18:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:21.067 07:18:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:21.326 07:18:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:21.326 07:18:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:22:21.326 07:18:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:22:24.616 07:18:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:22:24.616 07:18:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:22:24.616 07:18:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:5e:00.0 00:22:24.616 07:18:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:22:24.616 07:18:29 
nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:22:24.616 07:18:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:5e:00.0 ']' 00:22:24.616 07:18:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:22:24.616 07:18:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:22:24.616 07:18:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:24.874 [2024-11-20 07:18:29.284697] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:24.874 07:18:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:25.133 07:18:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:22:25.133 07:18:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:25.392 07:18:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:22:25.392 07:18:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:22:25.650 07:18:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:25.650 [2024-11-20 07:18:30.119929] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:25.650 07:18:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 
4420 00:22:25.909 07:18:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:5e:00.0 ']' 00:22:25.909 07:18:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:22:25.909 07:18:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:22:25.909 07:18:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:22:27.286 Initializing NVMe Controllers 00:22:27.286 Attached to NVMe Controller at 0000:5e:00.0 [8086:0a54] 00:22:27.286 Associating PCIE (0000:5e:00.0) NSID 1 with lcore 0 00:22:27.286 Initialization complete. Launching workers. 00:22:27.286 ======================================================== 00:22:27.286 Latency(us) 00:22:27.286 Device Information : IOPS MiB/s Average min max 00:22:27.286 PCIE (0000:5e:00.0) NSID 1 from core 0: 97450.82 380.67 328.04 30.48 4468.21 00:22:27.286 ======================================================== 00:22:27.286 Total : 97450.82 380.67 328.04 30.48 4468.21 00:22:27.286 00:22:27.286 07:18:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:28.665 Initializing NVMe Controllers 00:22:28.665 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:28.665 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:28.665 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:28.665 Initialization complete. Launching workers. 
00:22:28.665 ======================================================== 00:22:28.665 Latency(us) 00:22:28.665 Device Information : IOPS MiB/s Average min max 00:22:28.665 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 94.00 0.37 10945.50 109.76 45689.48 00:22:28.665 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 46.00 0.18 21820.32 5991.31 47907.51 00:22:28.665 ======================================================== 00:22:28.665 Total : 140.00 0.55 14518.66 109.76 47907.51 00:22:28.665 00:22:28.665 07:18:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:29.603 Initializing NVMe Controllers 00:22:29.604 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:29.604 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:29.604 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:29.604 Initialization complete. Launching workers. 
00:22:29.604 ======================================================== 00:22:29.604 Latency(us) 00:22:29.604 Device Information : IOPS MiB/s Average min max 00:22:29.604 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11025.88 43.07 2902.60 541.49 6238.86 00:22:29.604 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3807.50 14.87 8419.46 4823.46 16011.69 00:22:29.604 ======================================================== 00:22:29.604 Total : 14833.38 57.94 4318.69 541.49 16011.69 00:22:29.604 00:22:29.604 07:18:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:22:29.604 07:18:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:22:29.604 07:18:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:32.138 Initializing NVMe Controllers 00:22:32.138 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:32.138 Controller IO queue size 128, less than required. 00:22:32.138 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:32.138 Controller IO queue size 128, less than required. 00:22:32.138 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:32.138 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:32.138 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:32.138 Initialization complete. Launching workers. 
00:22:32.138 ======================================================== 00:22:32.138 Latency(us) 00:22:32.138 Device Information : IOPS MiB/s Average min max 00:22:32.138 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1749.10 437.28 74433.73 45582.96 113031.30 00:22:32.138 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 603.50 150.87 219810.42 79969.02 333072.02 00:22:32.138 ======================================================== 00:22:32.138 Total : 2352.60 588.15 111726.41 45582.96 333072.02 00:22:32.138 00:22:32.138 07:18:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:22:32.703 No valid NVMe controllers or AIO or URING devices found 00:22:32.703 Initializing NVMe Controllers 00:22:32.703 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:32.703 Controller IO queue size 128, less than required. 00:22:32.703 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:32.703 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:22:32.703 Controller IO queue size 128, less than required. 00:22:32.703 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:32.703 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. 
Removing this ns from test 00:22:32.703 WARNING: Some requested NVMe devices were skipped 00:22:32.703 07:18:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:22:35.237 Initializing NVMe Controllers 00:22:35.238 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:35.238 Controller IO queue size 128, less than required. 00:22:35.238 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:35.238 Controller IO queue size 128, less than required. 00:22:35.238 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:35.238 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:35.238 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:35.238 Initialization complete. Launching workers. 
00:22:35.238 00:22:35.238 ==================== 00:22:35.238 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:22:35.238 TCP transport: 00:22:35.238 polls: 14928 00:22:35.238 idle_polls: 11688 00:22:35.238 sock_completions: 3240 00:22:35.238 nvme_completions: 6161 00:22:35.238 submitted_requests: 9314 00:22:35.238 queued_requests: 1 00:22:35.238 00:22:35.238 ==================== 00:22:35.238 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:22:35.238 TCP transport: 00:22:35.238 polls: 15537 00:22:35.238 idle_polls: 11867 00:22:35.238 sock_completions: 3670 00:22:35.238 nvme_completions: 6395 00:22:35.238 submitted_requests: 9524 00:22:35.238 queued_requests: 1 00:22:35.238 ======================================================== 00:22:35.238 Latency(us) 00:22:35.238 Device Information : IOPS MiB/s Average min max 00:22:35.238 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1539.14 384.79 85060.40 62311.08 144587.45 00:22:35.238 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1597.61 399.40 80872.57 46966.16 133653.44 00:22:35.238 ======================================================== 00:22:35.238 Total : 3136.75 784.19 82927.46 46966.16 144587.45 00:22:35.238 00:22:35.238 07:18:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:22:35.238 07:18:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:35.238 07:18:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:22:35.238 07:18:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:22:35.238 07:18:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:22:35.238 07:18:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:35.238 07:18:39 nvmf_tcp.nvmf_host.nvmf_perf 
-- nvmf/common.sh@121 -- # sync 00:22:35.238 07:18:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:35.238 07:18:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:22:35.238 07:18:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:35.238 07:18:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:35.238 rmmod nvme_tcp 00:22:35.238 rmmod nvme_fabrics 00:22:35.238 rmmod nvme_keyring 00:22:35.238 07:18:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:35.238 07:18:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:22:35.238 07:18:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:22:35.238 07:18:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 1276404 ']' 00:22:35.238 07:18:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 1276404 00:22:35.238 07:18:39 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@952 -- # '[' -z 1276404 ']' 00:22:35.238 07:18:39 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # kill -0 1276404 00:22:35.238 07:18:39 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@957 -- # uname 00:22:35.238 07:18:39 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:35.238 07:18:39 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1276404 00:22:35.497 07:18:39 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:22:35.497 07:18:39 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:22:35.497 07:18:39 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1276404' 00:22:35.497 killing process with pid 1276404 00:22:35.497 07:18:39 nvmf_tcp.nvmf_host.nvmf_perf -- 
common/autotest_common.sh@971 -- # kill 1276404 00:22:35.497 07:18:39 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@976 -- # wait 1276404 00:22:36.875 07:18:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:36.875 07:18:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:36.875 07:18:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:36.875 07:18:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:22:36.875 07:18:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:22:36.875 07:18:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:36.875 07:18:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:22:36.875 07:18:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:36.875 07:18:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:36.875 07:18:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:36.875 07:18:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:36.875 07:18:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:39.466 07:18:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:39.466 00:22:39.466 real 0m24.801s 00:22:39.466 user 1m5.606s 00:22:39.466 sys 0m8.241s 00:22:39.466 07:18:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:22:39.466 07:18:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:39.466 ************************************ 00:22:39.466 END TEST nvmf_perf 00:22:39.466 ************************************ 00:22:39.466 07:18:43 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:22:39.466 07:18:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:22:39.466 07:18:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:22:39.466 07:18:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:39.466 ************************************ 00:22:39.466 START TEST nvmf_fio_host 00:22:39.466 ************************************ 00:22:39.466 07:18:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:22:39.466 * Looking for test storage... 00:22:39.466 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:39.466 07:18:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:22:39.466 07:18:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1691 -- # lcov --version 00:22:39.466 07:18:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:22:39.466 07:18:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:22:39.466 07:18:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:39.466 07:18:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:39.466 07:18:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:39.466 07:18:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:22:39.466 07:18:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:22:39.466 07:18:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:22:39.466 07:18:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:22:39.466 07:18:43 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:22:39.466 07:18:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:22:39.466 07:18:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:22:39.466 07:18:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:39.466 07:18:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:22:39.466 07:18:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:22:39.466 07:18:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:39.466 07:18:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:39.466 07:18:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:22:39.466 07:18:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:22:39.466 07:18:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:39.466 07:18:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:22:39.466 07:18:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:22:39.466 07:18:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:22:39.466 07:18:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:22:39.466 07:18:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:39.466 07:18:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:22:39.466 07:18:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:22:39.466 07:18:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:39.466 07:18:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:39.466 07:18:43 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:22:39.466 07:18:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:39.466 07:18:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:22:39.466 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:39.466 --rc genhtml_branch_coverage=1 00:22:39.466 --rc genhtml_function_coverage=1 00:22:39.466 --rc genhtml_legend=1 00:22:39.466 --rc geninfo_all_blocks=1 00:22:39.466 --rc geninfo_unexecuted_blocks=1 00:22:39.466 00:22:39.466 ' 00:22:39.466 07:18:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:22:39.466 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:39.466 --rc genhtml_branch_coverage=1 00:22:39.466 --rc genhtml_function_coverage=1 00:22:39.466 --rc genhtml_legend=1 00:22:39.466 --rc geninfo_all_blocks=1 00:22:39.466 --rc geninfo_unexecuted_blocks=1 00:22:39.466 00:22:39.466 ' 00:22:39.466 07:18:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:22:39.466 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:39.466 --rc genhtml_branch_coverage=1 00:22:39.466 --rc genhtml_function_coverage=1 00:22:39.466 --rc genhtml_legend=1 00:22:39.466 --rc geninfo_all_blocks=1 00:22:39.466 --rc geninfo_unexecuted_blocks=1 00:22:39.466 00:22:39.466 ' 00:22:39.466 07:18:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:22:39.466 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:39.466 --rc genhtml_branch_coverage=1 00:22:39.466 --rc genhtml_function_coverage=1 00:22:39.466 --rc genhtml_legend=1 00:22:39.466 --rc geninfo_all_blocks=1 00:22:39.466 --rc geninfo_unexecuted_blocks=1 00:22:39.466 00:22:39.466 ' 00:22:39.466 07:18:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:39.467 07:18:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:22:39.467 07:18:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:39.467 07:18:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:39.467 07:18:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:39.467 07:18:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:39.467 07:18:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:39.467 07:18:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:39.467 07:18:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:22:39.467 07:18:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:39.467 07:18:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:39.467 07:18:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:22:39.467 07:18:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:39.467 07:18:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:39.467 07:18:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:39.467 07:18:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:39.467 07:18:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:39.467 07:18:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:39.467 07:18:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:39.467 07:18:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:39.467 07:18:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:39.467 07:18:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:39.467 07:18:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:39.467 07:18:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:39.467 07:18:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:39.467 07:18:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:39.467 07:18:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:39.467 07:18:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:39.467 07:18:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:39.467 07:18:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:22:39.467 07:18:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:39.467 07:18:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:39.467 07:18:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:39.467 07:18:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:39.467 07:18:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:39.467 07:18:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:39.467 07:18:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:22:39.467 07:18:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:39.467 07:18:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:22:39.467 07:18:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:39.467 07:18:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:39.467 07:18:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:39.468 07:18:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:39.468 07:18:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:39.468 07:18:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:39.468 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:39.468 07:18:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:39.468 07:18:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:39.468 07:18:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:39.468 07:18:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:39.468 07:18:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:22:39.468 07:18:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:39.468 07:18:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:39.468 07:18:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:39.468 07:18:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:39.468 07:18:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:39.468 07:18:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:39.468 07:18:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:39.468 07:18:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:39.468 07:18:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:39.468 07:18:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:39.468 07:18:43 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:22:39.468 07:18:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:44.800 07:18:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:44.800 07:18:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:22:44.800 07:18:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:44.800 07:18:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:44.800 07:18:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:44.800 07:18:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:44.800 07:18:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:44.800 07:18:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:22:44.800 07:18:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:44.800 07:18:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:22:44.800 07:18:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:22:44.800 07:18:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:22:44.800 07:18:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:22:44.800 07:18:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:22:44.800 07:18:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:22:44.800 07:18:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:44.800 07:18:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:44.800 07:18:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:44.800 07:18:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:44.800 07:18:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:44.800 07:18:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:44.800 07:18:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:44.800 07:18:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:44.800 07:18:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:44.800 07:18:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:44.800 07:18:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:44.800 07:18:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:44.800 07:18:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:44.800 07:18:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:44.800 07:18:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:44.800 07:18:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:44.800 07:18:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:44.800 07:18:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:44.801 07:18:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:44.801 07:18:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 
0000:86:00.0 (0x8086 - 0x159b)' 00:22:44.801 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:44.801 07:18:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:44.801 07:18:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:44.801 07:18:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:44.801 07:18:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:44.801 07:18:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:44.801 07:18:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:44.801 07:18:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:44.801 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:44.801 07:18:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:44.801 07:18:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:44.801 07:18:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:44.801 07:18:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:44.801 07:18:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:44.801 07:18:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:44.801 07:18:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:44.801 07:18:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:44.801 07:18:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:44.801 07:18:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:44.801 07:18:49 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:44.801 07:18:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:44.801 07:18:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:44.801 07:18:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:44.801 07:18:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:44.801 07:18:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:44.801 Found net devices under 0000:86:00.0: cvl_0_0 00:22:44.801 07:18:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:44.801 07:18:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:44.801 07:18:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:44.801 07:18:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:44.801 07:18:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:44.801 07:18:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:44.801 07:18:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:44.801 07:18:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:44.801 07:18:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:44.801 Found net devices under 0000:86:00.1: cvl_0_1 00:22:44.801 07:18:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:44.801 07:18:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 
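The trace above shows `nvmf/common.sh` walking `pci_devs`, matching each device's vendor/device ID against the e810/x722/mlx arrays, and collecting the net devices found under `/sys/bus/pci/devices/$pci/net/` (here `cvl_0_0` and `cvl_0_1` under `0000:86:00.0`/`.1`, ID `0x8086:0x159b`). A minimal bash sketch of that ID-to-family lookup — a paraphrase for illustration, not the actual SPDK helper; the `nic_family`/`classify_nic` names are invented here:

```shell
#!/usr/bin/env bash
# Map known (vendor:device) PCI ID pairs to a NIC family, as the log does for
# 0x8086:0x159b (Intel E810, bound to the "ice" driver in this run).
# The table is a small illustrative subset of the IDs the trace registers.
declare -A nic_family=(
  ["0x8086:0x159b"]="e810"   # Intel E810 - the adapters found in this run
  ["0x8086:0x37d2"]="x722"   # Intel X722
  ["0x15b3:0x1017"]="mlx5"   # Mellanox ConnectX-5
)

classify_nic() {
  local key="$1:$2"
  echo "${nic_family[$key]:-unknown}"
}

classify_nic 0x8086 0x159b   # -> e810
```

With the family resolved to `e810`, the script then takes the `[[ e810 == e810 ]]` branch seen above and uses those two ports as `pci_devs`.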
00:22:44.801 07:18:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # is_hw=yes 00:22:44.801 07:18:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:44.801 07:18:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:44.801 07:18:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:44.801 07:18:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:44.801 07:18:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:44.801 07:18:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:44.801 07:18:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:44.801 07:18:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:44.801 07:18:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:44.801 07:18:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:44.801 07:18:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:44.801 07:18:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:44.801 07:18:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:44.801 07:18:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:44.801 07:18:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:44.801 07:18:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:44.801 07:18:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:44.801 07:18:49 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:45.060 07:18:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:45.060 07:18:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:45.060 07:18:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:45.060 07:18:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:45.060 07:18:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:45.060 07:18:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:45.060 07:18:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:45.060 07:18:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:45.060 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:45.060 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.167 ms 00:22:45.060 00:22:45.060 --- 10.0.0.2 ping statistics --- 00:22:45.060 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:45.060 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms 00:22:45.060 07:18:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:45.060 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:45.060 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.059 ms 00:22:45.060 00:22:45.060 --- 10.0.0.1 ping statistics --- 00:22:45.060 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:45.060 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:22:45.060 07:18:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:45.060 07:18:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # return 0 00:22:45.060 07:18:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:45.060 07:18:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:45.060 07:18:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:45.060 07:18:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:45.060 07:18:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:45.060 07:18:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:45.060 07:18:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:45.060 07:18:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:22:45.060 07:18:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:22:45.060 07:18:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:45.060 07:18:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:45.060 07:18:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=1282606 00:22:45.060 07:18:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:45.060 07:18:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # 
trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:45.060 07:18:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 1282606 00:22:45.060 07:18:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@833 -- # '[' -z 1282606 ']' 00:22:45.060 07:18:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:45.060 07:18:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:45.060 07:18:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:45.060 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:45.060 07:18:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:45.060 07:18:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:45.319 [2024-11-20 07:18:49.627878] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 00:22:45.320 [2024-11-20 07:18:49.627926] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:45.320 [2024-11-20 07:18:49.703233] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:45.320 [2024-11-20 07:18:49.759906] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:45.320 [2024-11-20 07:18:49.759959] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
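The `nvmf_tcp_init` steps traced above build a loopback test topology: the target port `cvl_0_0` is moved into a network namespace with 10.0.0.2/24, the initiator port `cvl_0_1` keeps 10.0.0.1/24 on the host side, and an iptables rule opens TCP port 4420. A dry-run sketch of that sequence, paraphrased from the log (real execution needs root; `run` here just collects the commands instead of executing them):

```shell
#!/usr/bin/env bash
# Dry-run reconstruction of the netns setup visible in the trace.
# run() records each command so the sequence can be inspected without root.
cmds=()
run() { cmds+=("$*"); }

TGT_IF=cvl_0_0 INI_IF=cvl_0_1 NS=cvl_0_0_ns_spdk

run ip netns add "$NS"                                    # namespace for the target
run ip link set "$TGT_IF" netns "$NS"                     # move target port into it
run ip addr add 10.0.0.1/24 dev "$INI_IF"                 # initiator side (host)
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"  # target side
run ip link set "$INI_IF" up
run ip netns exec "$NS" ip link set "$TGT_IF" up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT

printf '%s\n' "${cmds[@]}"
```

After this, the bidirectional pings at the end of the trace (10.0.0.2 from the host, 10.0.0.1 from inside the namespace) confirm the link, and `nvmf_tgt` is launched with `ip netns exec cvl_0_0_ns_spdk`.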
00:22:45.320 [2024-11-20 07:18:49.759973] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:45.320 [2024-11-20 07:18:49.759984] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:45.320 [2024-11-20 07:18:49.759992] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:45.320 [2024-11-20 07:18:49.762095] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:45.320 [2024-11-20 07:18:49.762204] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:45.320 [2024-11-20 07:18:49.762315] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:45.320 [2024-11-20 07:18:49.762316] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:46.256 07:18:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:46.256 07:18:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@866 -- # return 0 00:22:46.256 07:18:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:22:46.256 [2024-11-20 07:18:50.643089] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:46.256 07:18:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:22:46.256 07:18:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:46.256 07:18:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:46.256 07:18:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:22:46.514 Malloc1 00:22:46.514 07:18:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:46.773 07:18:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:47.032 07:18:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:47.032 [2024-11-20 07:18:51.530619] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:47.032 07:18:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:22:47.290 07:18:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:22:47.290 07:18:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:22:47.290 07:18:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1362 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:22:47.290 07:18:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:22:47.290 07:18:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:47.290 07:18:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local sanitizers 00:22:47.290 07:18:51 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:47.290 07:18:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # shift 00:22:47.290 07:18:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # local asan_lib= 00:22:47.290 07:18:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:22:47.290 07:18:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:47.291 07:18:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep libasan 00:22:47.291 07:18:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:22:47.291 07:18:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib= 00:22:47.291 07:18:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:22:47.291 07:18:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:22:47.291 07:18:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:47.291 07:18:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:22:47.291 07:18:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:22:47.291 07:18:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib= 00:22:47.291 07:18:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:22:47.291 07:18:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:22:47.291 07:18:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:22:47.549 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:22:47.549 fio-3.35 00:22:47.549 Starting 1 thread 00:22:50.082 [2024-11-20 07:18:54.357663] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec5f00 is same with the state(6) to be set 00:22:50.082 [2024-11-20 07:18:54.357712] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec5f00 is same with the state(6) to be set 00:22:50.082 [2024-11-20 07:18:54.357725] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec5f00 is same with the state(6) to be set 00:22:50.082 [2024-11-20 07:18:54.357732] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec5f00 is same with the state(6) to be set 00:22:50.082 [2024-11-20 07:18:54.357738] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec5f00 is same with the state(6) to be set 00:22:50.082 00:22:50.083 test: (groupid=0, jobs=1): err= 0: pid=1283125: Wed Nov 20 07:18:54 2024 00:22:50.083 read: IOPS=11.5k, BW=45.1MiB/s (47.3MB/s)(90.4MiB/2005msec) 00:22:50.083 slat (nsec): min=1581, max=242288, avg=1741.55, stdev=2283.04 00:22:50.083 clat (usec): min=3130, max=10433, avg=6139.85, stdev=446.15 00:22:50.083 lat (usec): min=3161, max=10434, avg=6141.59, stdev=446.02 00:22:50.083 clat percentiles (usec): 00:22:50.083 | 1.00th=[ 5080], 5.00th=[ 5407], 10.00th=[ 5604], 20.00th=[ 5800], 00:22:50.083 | 30.00th=[ 5932], 40.00th=[ 6063], 50.00th=[ 6128], 60.00th=[ 6259], 00:22:50.083 | 70.00th=[ 6390], 80.00th=[ 6521], 90.00th=[ 6652], 
95.00th=[ 6849], 00:22:50.083 | 99.00th=[ 7111], 99.50th=[ 7177], 99.90th=[ 8356], 99.95th=[ 9765], 00:22:50.083 | 99.99th=[10421] 00:22:50.083 bw ( KiB/s): min=45040, max=46720, per=99.96%, avg=46156.00, stdev=790.09, samples=4 00:22:50.083 iops : min=11260, max=11680, avg=11539.00, stdev=197.52, samples=4 00:22:50.083 write: IOPS=11.5k, BW=44.8MiB/s (47.0MB/s)(89.8MiB/2005msec); 0 zone resets 00:22:50.083 slat (nsec): min=1631, max=224539, avg=1811.79, stdev=1641.38 00:22:50.083 clat (usec): min=2436, max=9633, avg=4931.97, stdev=374.26 00:22:50.083 lat (usec): min=2451, max=9635, avg=4933.78, stdev=374.22 00:22:50.083 clat percentiles (usec): 00:22:50.083 | 1.00th=[ 4080], 5.00th=[ 4359], 10.00th=[ 4490], 20.00th=[ 4621], 00:22:50.083 | 30.00th=[ 4752], 40.00th=[ 4817], 50.00th=[ 4948], 60.00th=[ 5014], 00:22:50.083 | 70.00th=[ 5080], 80.00th=[ 5211], 90.00th=[ 5407], 95.00th=[ 5473], 00:22:50.083 | 99.00th=[ 5735], 99.50th=[ 5866], 99.90th=[ 8160], 99.95th=[ 8979], 00:22:50.083 | 99.99th=[ 9634] 00:22:50.083 bw ( KiB/s): min=45440, max=46400, per=99.99%, avg=45858.00, stdev=489.35, samples=4 00:22:50.083 iops : min=11360, max=11600, avg=11464.50, stdev=122.34, samples=4 00:22:50.083 lat (msec) : 4=0.29%, 10=99.69%, 20=0.02% 00:22:50.083 cpu : usr=74.80%, sys=24.20%, ctx=74, majf=0, minf=3 00:22:50.083 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:22:50.083 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:50.083 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:50.083 issued rwts: total=23145,22989,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:50.083 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:50.083 00:22:50.083 Run status group 0 (all jobs): 00:22:50.083 READ: bw=45.1MiB/s (47.3MB/s), 45.1MiB/s-45.1MiB/s (47.3MB/s-47.3MB/s), io=90.4MiB (94.8MB), run=2005-2005msec 00:22:50.083 WRITE: bw=44.8MiB/s (47.0MB/s), 44.8MiB/s-44.8MiB/s (47.0MB/s-47.0MB/s), 
io=89.8MiB (94.2MB), run=2005-2005msec 00:22:50.083 07:18:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:22:50.083 07:18:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1362 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:22:50.083 07:18:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:22:50.083 07:18:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:50.083 07:18:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local sanitizers 00:22:50.083 07:18:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:50.083 07:18:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # shift 00:22:50.083 07:18:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # local asan_lib= 00:22:50.083 07:18:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:22:50.083 07:18:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:50.083 07:18:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep libasan 00:22:50.083 07:18:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:22:50.083 07:18:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib= 00:22:50.083 
07:18:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:22:50.083 07:18:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:22:50.083 07:18:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:50.083 07:18:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:22:50.083 07:18:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:22:50.083 07:18:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib= 00:22:50.083 07:18:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:22:50.083 07:18:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:22:50.083 07:18:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:22:50.341 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:22:50.341 fio-3.35 00:22:50.341 Starting 1 thread 00:22:52.874 00:22:52.874 test: (groupid=0, jobs=1): err= 0: pid=1283684: Wed Nov 20 07:18:57 2024 00:22:52.874 read: IOPS=10.8k, BW=168MiB/s (176MB/s)(338MiB/2007msec) 00:22:52.874 slat (usec): min=2, max=102, avg= 2.83, stdev= 1.36 00:22:52.874 clat (usec): min=881, max=14380, avg=6743.70, stdev=1562.08 00:22:52.874 lat (usec): min=884, max=14394, avg=6746.53, stdev=1562.25 00:22:52.874 clat percentiles (usec): 00:22:52.874 | 1.00th=[ 3621], 5.00th=[ 4359], 10.00th=[ 4817], 20.00th=[ 5407], 00:22:52.874 | 30.00th=[ 5800], 40.00th=[ 6194], 
50.00th=[ 6652], 60.00th=[ 7177], 00:22:52.874 | 70.00th=[ 7570], 80.00th=[ 7898], 90.00th=[ 8717], 95.00th=[ 9503], 00:22:52.874 | 99.00th=[10814], 99.50th=[11338], 99.90th=[13304], 99.95th=[13829], 00:22:52.874 | 99.99th=[14353] 00:22:52.874 bw ( KiB/s): min=81856, max=98176, per=50.91%, avg=87696.00, stdev=7564.40, samples=4 00:22:52.874 iops : min= 5116, max= 6136, avg=5481.00, stdev=472.77, samples=4 00:22:52.874 write: IOPS=6356, BW=99.3MiB/s (104MB/s)(180MiB/1808msec); 0 zone resets 00:22:52.874 slat (usec): min=29, max=381, avg=31.71, stdev= 7.61 00:22:52.874 clat (usec): min=2467, max=15581, avg=8832.73, stdev=1506.18 00:22:52.874 lat (usec): min=2497, max=15692, avg=8864.43, stdev=1508.16 00:22:52.874 clat percentiles (usec): 00:22:52.874 | 1.00th=[ 5866], 5.00th=[ 6652], 10.00th=[ 7111], 20.00th=[ 7635], 00:22:52.874 | 30.00th=[ 7963], 40.00th=[ 8291], 50.00th=[ 8717], 60.00th=[ 9110], 00:22:52.874 | 70.00th=[ 9503], 80.00th=[10028], 90.00th=[10945], 95.00th=[11469], 00:22:52.874 | 99.00th=[12780], 99.50th=[13566], 99.90th=[15270], 99.95th=[15533], 00:22:52.874 | 99.99th=[15533] 00:22:52.874 bw ( KiB/s): min=84224, max=102528, per=90.05%, avg=91576.00, stdev=7984.21, samples=4 00:22:52.874 iops : min= 5264, max= 6408, avg=5723.50, stdev=499.01, samples=4 00:22:52.874 lat (usec) : 1000=0.01% 00:22:52.874 lat (msec) : 2=0.01%, 4=1.61%, 10=89.44%, 20=8.93% 00:22:52.874 cpu : usr=85.95%, sys=13.35%, ctx=35, majf=0, minf=3 00:22:52.874 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:22:52.874 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:52.874 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:52.874 issued rwts: total=21608,11492,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:52.874 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:52.874 00:22:52.874 Run status group 0 (all jobs): 00:22:52.874 READ: bw=168MiB/s (176MB/s), 168MiB/s-168MiB/s (176MB/s-176MB/s), 
io=338MiB (354MB), run=2007-2007msec 00:22:52.874 WRITE: bw=99.3MiB/s (104MB/s), 99.3MiB/s-99.3MiB/s (104MB/s-104MB/s), io=180MiB (188MB), run=1808-1808msec 00:22:52.874 07:18:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:52.874 07:18:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:22:52.874 07:18:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:22:52.874 07:18:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:22:52.874 07:18:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:22:52.874 07:18:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:52.874 07:18:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:22:52.874 07:18:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:52.874 07:18:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:22:52.874 07:18:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:52.874 07:18:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:52.874 rmmod nvme_tcp 00:22:52.874 rmmod nvme_fabrics 00:22:52.874 rmmod nvme_keyring 00:22:52.874 07:18:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:52.874 07:18:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:22:52.874 07:18:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:22:52.874 07:18:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 1282606 ']' 00:22:52.874 07:18:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 1282606 00:22:52.874 07:18:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@952 
-- # '[' -z 1282606 ']' 00:22:52.874 07:18:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # kill -0 1282606 00:22:52.874 07:18:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@957 -- # uname 00:22:52.874 07:18:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:52.874 07:18:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1282606 00:22:52.874 07:18:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:22:52.874 07:18:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:22:52.874 07:18:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1282606' 00:22:52.874 killing process with pid 1282606 00:22:52.874 07:18:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@971 -- # kill 1282606 00:22:52.874 07:18:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@976 -- # wait 1282606 00:22:53.134 07:18:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:53.134 07:18:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:53.134 07:18:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:53.134 07:18:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:22:53.134 07:18:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:22:53.134 07:18:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:22:53.134 07:18:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:53.134 07:18:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:53.134 07:18:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:22:53.134 07:18:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:53.134 07:18:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:53.134 07:18:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:55.672 07:18:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:55.672 00:22:55.672 real 0m16.220s 00:22:55.672 user 0m47.806s 00:22:55.672 sys 0m6.412s 00:22:55.672 07:18:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1128 -- # xtrace_disable 00:22:55.672 07:18:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:55.672 ************************************ 00:22:55.672 END TEST nvmf_fio_host 00:22:55.672 ************************************ 00:22:55.672 07:18:59 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:22:55.672 07:18:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:22:55.672 07:18:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:22:55.672 07:18:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:55.672 ************************************ 00:22:55.672 START TEST nvmf_failover 00:22:55.672 ************************************ 00:22:55.672 07:18:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:22:55.672 * Looking for test storage... 
00:22:55.672 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:55.672 07:18:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:22:55.672 07:18:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1691 -- # lcov --version 00:22:55.672 07:18:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:22:55.672 07:18:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:22:55.672 07:18:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:55.672 07:18:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:55.672 07:18:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:55.672 07:18:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:22:55.672 07:18:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:22:55.672 07:18:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:22:55.672 07:18:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:22:55.672 07:18:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:22:55.672 07:18:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:22:55.672 07:18:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:22:55.672 07:18:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:55.672 07:18:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:22:55.672 07:18:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:22:55.672 07:18:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:55.672 07:18:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( 
v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:55.672 07:18:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:22:55.672 07:18:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:22:55.672 07:18:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:55.672 07:18:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:22:55.672 07:18:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:22:55.672 07:18:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:22:55.672 07:18:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:22:55.672 07:18:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:55.672 07:18:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:22:55.672 07:18:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:22:55.672 07:18:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:55.672 07:18:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:55.672 07:18:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:22:55.672 07:18:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:55.672 07:18:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:22:55.672 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:55.672 --rc genhtml_branch_coverage=1 00:22:55.672 --rc genhtml_function_coverage=1 00:22:55.672 --rc genhtml_legend=1 00:22:55.672 --rc geninfo_all_blocks=1 00:22:55.672 --rc geninfo_unexecuted_blocks=1 00:22:55.672 00:22:55.672 ' 00:22:55.672 07:18:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1704 -- 
# LCOV_OPTS=' 00:22:55.672 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:55.672 --rc genhtml_branch_coverage=1 00:22:55.672 --rc genhtml_function_coverage=1 00:22:55.672 --rc genhtml_legend=1 00:22:55.672 --rc geninfo_all_blocks=1 00:22:55.672 --rc geninfo_unexecuted_blocks=1 00:22:55.672 00:22:55.672 ' 00:22:55.672 07:18:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:22:55.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:55.673 --rc genhtml_branch_coverage=1 00:22:55.673 --rc genhtml_function_coverage=1 00:22:55.673 --rc genhtml_legend=1 00:22:55.673 --rc geninfo_all_blocks=1 00:22:55.673 --rc geninfo_unexecuted_blocks=1 00:22:55.673 00:22:55.673 ' 00:22:55.673 07:18:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:22:55.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:55.673 --rc genhtml_branch_coverage=1 00:22:55.673 --rc genhtml_function_coverage=1 00:22:55.673 --rc genhtml_legend=1 00:22:55.673 --rc geninfo_all_blocks=1 00:22:55.673 --rc geninfo_unexecuted_blocks=1 00:22:55.673 00:22:55.673 ' 00:22:55.673 07:18:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:55.673 07:18:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:22:55.673 07:18:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:55.673 07:18:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:55.673 07:18:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:55.673 07:18:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:55.673 07:18:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:55.673 07:18:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # 
NVMF_IP_LEAST_ADDR=8 00:22:55.673 07:18:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:55.673 07:18:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:55.673 07:18:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:55.673 07:18:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:55.673 07:18:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:55.673 07:18:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:55.673 07:18:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:55.673 07:18:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:55.673 07:18:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:55.673 07:18:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:55.673 07:18:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:55.673 07:18:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:22:55.673 07:18:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:55.673 07:18:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:55.673 07:18:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:55.673 07:18:59 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:55.673 07:18:59 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:55.673 07:18:59 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:55.673 07:18:59 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 
-- # export PATH 00:22:55.673 07:18:59 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:55.673 07:18:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:22:55.673 07:18:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:55.673 07:18:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:55.673 07:18:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:55.673 07:18:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:55.673 07:18:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:55.673 07:18:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:55.673 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:55.673 07:18:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:55.673 07:18:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:55.673 07:18:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:55.673 07:18:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:55.673 07:18:59 nvmf_tcp.nvmf_host.nvmf_failover -- 
host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:55.673 07:18:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:55.673 07:18:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:55.673 07:18:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:22:55.673 07:18:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:55.673 07:18:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:55.673 07:18:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:55.673 07:18:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:55.673 07:18:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:55.673 07:18:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:55.673 07:18:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:55.673 07:18:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:55.673 07:18:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:55.673 07:18:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:55.673 07:18:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:22:55.673 07:18:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:02.245 07:19:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:02.245 07:19:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:23:02.245 07:19:05 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@315 -- # local -a pci_devs 00:23:02.245 07:19:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:02.245 07:19:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:02.245 07:19:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:02.245 07:19:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:02.245 07:19:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:23:02.245 07:19:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:02.245 07:19:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:23:02.245 07:19:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:23:02.245 07:19:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:23:02.245 07:19:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:23:02.245 07:19:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:23:02.245 07:19:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:23:02.245 07:19:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:02.245 07:19:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:02.245 07:19:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:02.245 07:19:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:02.245 07:19:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:02.245 07:19:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:02.245 07:19:05 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:02.245 07:19:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:02.245 07:19:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:02.245 07:19:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:02.245 07:19:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:02.245 07:19:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:02.245 07:19:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:02.245 07:19:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:02.245 07:19:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:02.245 07:19:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:02.245 07:19:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:02.245 07:19:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:02.245 07:19:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:02.245 07:19:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:02.245 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:02.245 07:19:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:02.245 07:19:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:02.245 07:19:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:02.245 07:19:05 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:02.245 07:19:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:02.245 07:19:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:02.245 07:19:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:02.245 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:02.245 07:19:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:02.245 07:19:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:02.245 07:19:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:02.245 07:19:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:02.245 07:19:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:02.245 07:19:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:02.245 07:19:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:02.245 07:19:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:02.245 07:19:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:02.245 07:19:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:02.245 07:19:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:02.245 07:19:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:02.245 07:19:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:02.245 07:19:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:02.245 07:19:05 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:02.245 07:19:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:02.245 Found net devices under 0000:86:00.0: cvl_0_0 00:23:02.245 07:19:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:02.245 07:19:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:02.245 07:19:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:02.245 07:19:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:02.245 07:19:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:02.245 07:19:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:02.245 07:19:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:02.245 07:19:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:02.245 07:19:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:02.245 Found net devices under 0000:86:00.1: cvl_0_1 00:23:02.245 07:19:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:02.245 07:19:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:02.245 07:19:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # is_hw=yes 00:23:02.245 07:19:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:02.245 07:19:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:02.245 07:19:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:02.245 07:19:05 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:02.245 07:19:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:02.245 07:19:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:02.245 07:19:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:02.245 07:19:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:02.245 07:19:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:02.245 07:19:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:02.245 07:19:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:02.245 07:19:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:02.245 07:19:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:02.245 07:19:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:02.245 07:19:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:02.245 07:19:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:02.245 07:19:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:02.245 07:19:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:02.245 07:19:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:02.245 07:19:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:02.245 07:19:05 nvmf_tcp.nvmf_host.nvmf_failover 
-- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:02.246 07:19:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:02.246 07:19:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:02.246 07:19:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:02.246 07:19:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:02.246 07:19:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:02.246 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:02.246 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.462 ms 00:23:02.246 00:23:02.246 --- 10.0.0.2 ping statistics --- 00:23:02.246 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:02.246 rtt min/avg/max/mdev = 0.462/0.462/0.462/0.000 ms 00:23:02.246 07:19:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:02.246 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:02.246 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.200 ms 00:23:02.246 00:23:02.246 --- 10.0.0.1 ping statistics --- 00:23:02.246 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:02.246 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:23:02.246 07:19:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:02.246 07:19:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # return 0 00:23:02.246 07:19:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:02.246 07:19:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:02.246 07:19:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:02.246 07:19:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:02.246 07:19:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:02.246 07:19:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:02.246 07:19:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:02.246 07:19:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:23:02.246 07:19:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:02.246 07:19:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:02.246 07:19:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:02.246 07:19:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=1287658 00:23:02.246 07:19:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:23:02.246 07:19:05 nvmf_tcp.nvmf_host.nvmf_failover 
-- nvmf/common.sh@510 -- # waitforlisten 1287658 00:23:02.246 07:19:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 -- # '[' -z 1287658 ']' 00:23:02.246 07:19:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:02.246 07:19:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:02.246 07:19:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:02.246 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:02.246 07:19:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:02.246 07:19:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:02.246 [2024-11-20 07:19:05.932275] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 00:23:02.246 [2024-11-20 07:19:05.932324] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:02.246 [2024-11-20 07:19:06.014237] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:02.246 [2024-11-20 07:19:06.055488] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:02.246 [2024-11-20 07:19:06.055527] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:02.246 [2024-11-20 07:19:06.055535] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:02.246 [2024-11-20 07:19:06.055542] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:23:02.246 [2024-11-20 07:19:06.055547] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:02.246 [2024-11-20 07:19:06.056991] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:02.246 [2024-11-20 07:19:06.057097] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:02.246 [2024-11-20 07:19:06.057098] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:02.246 07:19:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:02.246 07:19:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@866 -- # return 0 00:23:02.246 07:19:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:02.246 07:19:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:02.246 07:19:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:02.246 07:19:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:02.246 07:19:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:23:02.246 [2024-11-20 07:19:06.363755] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:02.246 07:19:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:23:02.246 Malloc0 00:23:02.246 07:19:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:02.504 07:19:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:02.504 07:19:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:02.762 [2024-11-20 07:19:07.196357] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:02.762 07:19:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:03.020 [2024-11-20 07:19:07.396871] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:03.020 07:19:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:23:03.279 [2024-11-20 07:19:07.597533] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:23:03.279 07:19:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=1287917 00:23:03.279 07:19:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:23:03.279 07:19:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:03.279 07:19:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 1287917 /var/tmp/bdevperf.sock 00:23:03.279 07:19:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 
-- # '[' -z 1287917 ']' 00:23:03.279 07:19:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:03.279 07:19:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:03.279 07:19:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:03.279 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:03.279 07:19:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:03.279 07:19:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:03.537 07:19:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:03.537 07:19:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@866 -- # return 0 00:23:03.537 07:19:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:23:03.796 NVMe0n1 00:23:04.054 07:19:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:23:04.312 00:23:04.312 07:19:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=1288146 00:23:04.312 07:19:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:04.312 07:19:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 
00:23:05.246 07:19:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:05.504 [2024-11-20 07:19:09.849859] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12202d0 is same with the state(6) to be set 00:23:05.504 07:19:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:23:08.785 07:19:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:23:08.785 00:23:08.785 07:19:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2
-s 4421 00:23:09.043 07:19:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:23:12.323 07:19:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:12.323 [2024-11-20 07:19:16.698964] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:12.323 07:19:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:23:13.257 07:19:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:23:13.516 [2024-11-20 07:19:17.918021] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x136d340 is same with the state(6) to be set 00:23:13.516 [2024-11-20 07:19:17.918099]
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x136d340 is same with the state(6) to be set 00:23:13.518 07:19:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 1288146 00:23:20.104 { 00:23:20.104 "results": [ 00:23:20.104 { 00:23:20.104 "job": "NVMe0n1", 00:23:20.104 "core_mask": "0x1", 00:23:20.104 "workload": "verify", 00:23:20.104 "status": "finished", 00:23:20.104 "verify_range": { 00:23:20.104 "start": 0, 00:23:20.104 "length": 16384 00:23:20.104 }, 00:23:20.104 "queue_depth": 128, 00:23:20.104 "io_size": 4096, 00:23:20.104 "runtime": 15.00913, 00:23:20.104 "iops": 11073.126823473445, 00:23:20.104 "mibps": 43.254401654193146, 00:23:20.104 "io_failed": 6181, 00:23:20.104 "io_timeout": 0, 00:23:20.104 "avg_latency_us": 11122.594132917937, 00:23:20.104 "min_latency_us": 439.8747826086956, 00:23:20.104 "max_latency_us": 16184.542608695652 00:23:20.104 } 00:23:20.104 ], 00:23:20.104 "core_count": 1 00:23:20.104 } 00:23:20.104 07:19:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 1287917 00:23:20.104 07:19:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # '[' -z 1287917 ']' 00:23:20.104 07:19:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # kill -0 1287917 00:23:20.104 07:19:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # uname 00:23:20.104 07:19:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:20.104 07:19:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1287917 00:23:20.104 07:19:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:23:20.104 07:19:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:23:20.104 07:19:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@970 -- # echo 'killing process with pid 
1287917' 00:23:20.104 killing process with pid 1287917 00:23:20.104 07:19:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@971 -- # kill 1287917 00:23:20.104 07:19:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@976 -- # wait 1287917 00:23:20.104 07:19:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:20.104 [2024-11-20 07:19:07.674318] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 00:23:20.104 [2024-11-20 07:19:07.674371] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1287917 ] 00:23:20.104 [2024-11-20 07:19:07.751834] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:20.104 [2024-11-20 07:19:07.793823] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:20.104 Running I/O for 15 seconds... 
00:23:20.104 11068.00 IOPS, 43.23 MiB/s [2024-11-20T06:19:24.660Z] [2024-11-20 07:19:09.850179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:98696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.104 [2024-11-20 07:19:09.850216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.104 [2024-11-20 07:19:09.850233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:98704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.104 [2024-11-20 07:19:09.850241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.104 [2024-11-20 07:19:09.850251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:98712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.104 [2024-11-20 07:19:09.850258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.104 [2024-11-20 07:19:09.850267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:98720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.104 [2024-11-20 07:19:09.850274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.104 [2024-11-20 07:19:09.850282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:98728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.104 [2024-11-20 07:19:09.850288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.104 [2024-11-20 07:19:09.850297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:98736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.104 
[2024-11-20 07:19:09.850304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.104 [2024-11-20 07:19:09.850312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:98744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.104 [2024-11-20 07:19:09.850319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.104 [2024-11-20 07:19:09.850328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:98752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.104 [2024-11-20 07:19:09.850334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.104 [2024-11-20 07:19:09.850343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:97920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.104 [2024-11-20 07:19:09.850350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.104 [2024-11-20 07:19:09.850359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:97928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.104 [2024-11-20 07:19:09.850366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.104 [2024-11-20 07:19:09.850374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:97936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.104 [2024-11-20 07:19:09.850381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.104 [2024-11-20 07:19:09.850395] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:97944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.104 [2024-11-20 07:19:09.850402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.104 [2024-11-20 07:19:09.850410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:97952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.104 [2024-11-20 07:19:09.850416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.104 [2024-11-20 07:19:09.850424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:97960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.104 [2024-11-20 07:19:09.850431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.104 [2024-11-20 07:19:09.850439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:97968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.104 [2024-11-20 07:19:09.850445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.104 [2024-11-20 07:19:09.850454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:97976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.104 [2024-11-20 07:19:09.850460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.104 [2024-11-20 07:19:09.850468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:97984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.104 [2024-11-20 07:19:09.850476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.104 [2024-11-20 07:19:09.850484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:97992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.104 [2024-11-20 07:19:09.850491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.104 [2024-11-20 07:19:09.850499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:98760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.105 [2024-11-20 07:19:09.850505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.105 [2024-11-20 07:19:09.850513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:98768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.105 [2024-11-20 07:19:09.850520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.105 [2024-11-20 07:19:09.850528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:98776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.105 [2024-11-20 07:19:09.850534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.105 [2024-11-20 07:19:09.850542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:98784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.105 [2024-11-20 07:19:09.850550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.105 [2024-11-20 07:19:09.850558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:98792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.105 
[2024-11-20 07:19:09.850565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.105 [2024-11-20 07:19:09.850574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:98800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.105 [2024-11-20 07:19:09.850582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.105 [2024-11-20 07:19:09.850590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:98808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.105 [2024-11-20 07:19:09.850597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.105 [2024-11-20 07:19:09.850605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:98816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.105 [2024-11-20 07:19:09.850612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.105 [2024-11-20 07:19:09.850620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:98824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.105 [2024-11-20 07:19:09.850626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.105 [2024-11-20 07:19:09.850634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:98832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.105 [2024-11-20 07:19:09.850640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.105 [2024-11-20 07:19:09.850649] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:98840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.105 [2024-11-20 07:19:09.850656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.105 [2024-11-20 07:19:09.850664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:98848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.105 [2024-11-20 07:19:09.850671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.105 [2024-11-20 07:19:09.850679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:98856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.105 [2024-11-20 07:19:09.850686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.105 [2024-11-20 07:19:09.850694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:98864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.105 [2024-11-20 07:19:09.850700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.105 [2024-11-20 07:19:09.850708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:98000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.105 [2024-11-20 07:19:09.850715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.105 [2024-11-20 07:19:09.850723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:98008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.105 [2024-11-20 07:19:09.850730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:23:20.105 [2024-11-20 07:19:09.850738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:98016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.105 [2024-11-20 07:19:09.850745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.105 [2024-11-20 07:19:09.850753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:98024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.105 [2024-11-20 07:19:09.850759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.105 [2024-11-20 07:19:09.850769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:98032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.105 [2024-11-20 07:19:09.850776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.105 [2024-11-20 07:19:09.850784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:98040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.105 [2024-11-20 07:19:09.850791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.105 [2024-11-20 07:19:09.850799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:98048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.105 [2024-11-20 07:19:09.850805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.105 [2024-11-20 07:19:09.850813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:98056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.105 [2024-11-20 07:19:09.850820] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.105 [2024-11-20 07:19:09.850835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:98064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.105 [2024-11-20 07:19:09.850842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.105 [2024-11-20 07:19:09.850852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:98072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.105 [2024-11-20 07:19:09.850859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.105 [2024-11-20 07:19:09.850869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:98080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.105 [2024-11-20 07:19:09.850878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.105 [2024-11-20 07:19:09.850887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:98088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.105 [2024-11-20 07:19:09.850894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.105 [2024-11-20 07:19:09.850902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:98096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.105 [2024-11-20 07:19:09.850908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.105 [2024-11-20 07:19:09.850917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:77 nsid:1 lba:98104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.105 [2024-11-20 07:19:09.850926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.105 [2024-11-20 07:19:09.850935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:98112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.105 [2024-11-20 07:19:09.850942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.105 [2024-11-20 07:19:09.850956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:98872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.105 [2024-11-20 07:19:09.850964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.105 [2024-11-20 07:19:09.850973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:98120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.105 [2024-11-20 07:19:09.850987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.105 [2024-11-20 07:19:09.850996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:98128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.105 [2024-11-20 07:19:09.851004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.105 [2024-11-20 07:19:09.851012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:98136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.106 [2024-11-20 07:19:09.851019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:20.106 [2024-11-20 07:19:09.851028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.106 [2024-11-20 07:19:09.851035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.106 [2024-11-20 07:19:09.851043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:98152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.106 [2024-11-20 07:19:09.851050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.106 [2024-11-20 07:19:09.851059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:98160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.106 [2024-11-20 07:19:09.851066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.106 [2024-11-20 07:19:09.851075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:98168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.106 [2024-11-20 07:19:09.851082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.106 [2024-11-20 07:19:09.851090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:98176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.106 [2024-11-20 07:19:09.851096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.106 [2024-11-20 07:19:09.851106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:98184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.106 [2024-11-20 07:19:09.851113] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.106 [2024-11-20 07:19:09.851121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:98192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.106 [2024-11-20 07:19:09.851127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.106 [2024-11-20 07:19:09.851135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:98200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.106 [2024-11-20 07:19:09.851141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.106 [2024-11-20 07:19:09.851149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:98208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.106 [2024-11-20 07:19:09.851156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.106 [2024-11-20 07:19:09.851165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:98216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.106 [2024-11-20 07:19:09.851171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.106 [2024-11-20 07:19:09.851180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:98224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.106 [2024-11-20 07:19:09.851188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.106 [2024-11-20 07:19:09.851196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 
lba:98232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.106 [2024-11-20 07:19:09.851202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.106 [2024-11-20 07:19:09.851210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:98240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.106 [2024-11-20 07:19:09.851217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.106 [2024-11-20 07:19:09.851226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:98248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.106 [2024-11-20 07:19:09.851233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.106 [2024-11-20 07:19:09.851242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:98256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.106 [2024-11-20 07:19:09.851248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.106 [2024-11-20 07:19:09.851256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:98264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.106 [2024-11-20 07:19:09.851262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.106 [2024-11-20 07:19:09.851270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:98272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.106 [2024-11-20 07:19:09.851278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.106 
[2024-11-20 07:19:09.851286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:98280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.106 [2024-11-20 07:19:09.851294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.106 [2024-11-20 07:19:09.851302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:98288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.106 [2024-11-20 07:19:09.851308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.106 [2024-11-20 07:19:09.851316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:98296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.106 [2024-11-20 07:19:09.851322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.106 [2024-11-20 07:19:09.851330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:98304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.106 [2024-11-20 07:19:09.851337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.106 [2024-11-20 07:19:09.851345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:98312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.106 [2024-11-20 07:19:09.851352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.106 [2024-11-20 07:19:09.851360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:98320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.106 [2024-11-20 07:19:09.851366] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.106 [2024-11-20 07:19:09.851376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:98328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.106 [2024-11-20 07:19:09.851382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.106 [2024-11-20 07:19:09.851391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:98336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.106 [2024-11-20 07:19:09.851397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.106 [2024-11-20 07:19:09.851405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:98344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.106 [2024-11-20 07:19:09.851412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.106 [2024-11-20 07:19:09.851420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:98352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.106 [2024-11-20 07:19:09.851427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.107 [2024-11-20 07:19:09.851435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:98360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.107 [2024-11-20 07:19:09.851442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.107 [2024-11-20 07:19:09.851451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 
lba:98368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.107 [2024-11-20 07:19:09.851457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.107 [2024-11-20 07:19:09.851466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:98376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.107 [2024-11-20 07:19:09.851472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.107 [2024-11-20 07:19:09.851480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:98384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.107 [2024-11-20 07:19:09.851487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.107 [2024-11-20 07:19:09.851494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:98392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.107 [2024-11-20 07:19:09.851502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.107 [2024-11-20 07:19:09.851510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:98400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.107 [2024-11-20 07:19:09.851518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.107 [2024-11-20 07:19:09.851526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:98408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.107 [2024-11-20 07:19:09.851533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.107 
[2024-11-20 07:19:09.851541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:98416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.107 [2024-11-20 07:19:09.851548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.107 [2024-11-20 07:19:09.851555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:98424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.107 [2024-11-20 07:19:09.851564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.107 [2024-11-20 07:19:09.851572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:98432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.107 [2024-11-20 07:19:09.851579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.107 [2024-11-20 07:19:09.851587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:98440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.107 [2024-11-20 07:19:09.851593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.107 [2024-11-20 07:19:09.851601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:98448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.107 [2024-11-20 07:19:09.851608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.107 [2024-11-20 07:19:09.851616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:98456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.107 [2024-11-20 07:19:09.851623] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.107 [2024-11-20 07:19:09.851631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:98464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.107 [2024-11-20 07:19:09.851637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.107 [2024-11-20 07:19:09.851646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:98472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.107 [2024-11-20 07:19:09.851652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.107 [2024-11-20 07:19:09.851660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:98480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.107 [2024-11-20 07:19:09.851666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.107 [2024-11-20 07:19:09.851675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:98488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.107 [2024-11-20 07:19:09.851682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.107 [2024-11-20 07:19:09.851691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:98496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.107 [2024-11-20 07:19:09.851698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.107 [2024-11-20 07:19:09.851706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 
lba:98504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.107 [2024-11-20 07:19:09.851712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.107 [2024-11-20 07:19:09.851720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:98512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.107 [2024-11-20 07:19:09.851727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.107 [2024-11-20 07:19:09.851735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:98520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.107 [2024-11-20 07:19:09.851742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.107 [2024-11-20 07:19:09.851751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:98528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.107 [2024-11-20 07:19:09.851758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.107 [2024-11-20 07:19:09.851765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:98536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.107 [2024-11-20 07:19:09.851772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.107 [2024-11-20 07:19:09.851780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:98544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.107 [2024-11-20 07:19:09.851787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.107 
[2024-11-20 07:19:09.851795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:98552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.107 [2024-11-20 07:19:09.851803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.107 [2024-11-20 07:19:09.851811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:98560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.107 [2024-11-20 07:19:09.851817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.107 [2024-11-20 07:19:09.851825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:98568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.107 [2024-11-20 07:19:09.851832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.107 [2024-11-20 07:19:09.851841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:98576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.107 [2024-11-20 07:19:09.851848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.107 [2024-11-20 07:19:09.851856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:98584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.107 [2024-11-20 07:19:09.851862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.107 [2024-11-20 07:19:09.851870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:98592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.107 [2024-11-20 07:19:09.851877] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.108 [2024-11-20 07:19:09.851884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:98600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.108 [2024-11-20 07:19:09.851892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.108 [2024-11-20 07:19:09.851900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:98608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.108 [2024-11-20 07:19:09.851906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.108 [2024-11-20 07:19:09.851921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:98616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.108 [2024-11-20 07:19:09.851928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.108 [2024-11-20 07:19:09.851936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:98624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.108 [2024-11-20 07:19:09.851944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.108 [2024-11-20 07:19:09.851958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:98632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.108 [2024-11-20 07:19:09.851966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.108 [2024-11-20 07:19:09.851974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 
lba:98640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.108 [2024-11-20 07:19:09.851981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.108
[... repeated NOTICE pairs elided: READ commands (lba:98648-98688, len:8) and WRITE commands (lba:98880-98928, len:8) on sqid:1, each completed ABORTED - SQ DELETION (00/08) ...]
[2024-11-20 07:19:09.852186] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2130710 is same with the state(6) to be set 00:23:20.108
[2024-11-20 07:19:09.852194] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:20.108
[2024-11-20 07:19:09.852200] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:20.108
[2024-11-20 07:19:09.852206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98936 len:8 PRP1 0x0 PRP2 0x0 00:23:20.108
[2024-11-20 07:19:09.852212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.108
[2024-11-20 07:19:09.852261] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:23:20.108
[2024-11-20 07:19:09.852286] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:20.108
[2024-11-20 07:19:09.852294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.108
[... three more ASYNC EVENT REQUEST (0c) entries (qid:0 cid:1-3) elided, each completed ABORTED - SQ DELETION (00/08) ...]
[2024-11-20 07:19:09.852342] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:23:20.108
[2024-11-20 07:19:09.855217] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:23:20.108
[2024-11-20 07:19:09.855245] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x210c340 (9): Bad file descriptor 00:23:20.108
[2024-11-20 07:19:09.922409] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful.
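The abort storm above is hard to eyeball. A minimal helper sketch (not part of SPDK; `summarize` and the regexes are hypothetical names, and it assumes each log entry occupies its own line) that pairs each `nvme_io_qpair_print_command` NOTICE with the completion NOTICE that follows it and counts aborted commands per opcode and status:

```python
import re
from collections import Counter

# Match the command NOTICE lines emitted by nvme_io_qpair_print_command,
# e.g. "... *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:98880 len:8 ..."
CMD_RE = re.compile(
    r"nvme_io_qpair_print_command: \*NOTICE\*: (READ|WRITE) "
    r"sqid:(\d+) cid:(\d+) nsid:(\d+) lba:(\d+) len:(\d+)"
)
# Match the completion NOTICE lines emitted by spdk_nvme_print_completion,
# e.g. "... *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 ..."
CPL_RE = re.compile(
    r"spdk_nvme_print_completion: \*NOTICE\*: ([A-Z -]+?) \((\d+)/(\d+)\)"
)

def summarize(log_text: str) -> Counter:
    """Count (opcode, completion status) pairs seen in an SPDK qpair log."""
    counts = Counter()
    pending = None  # last command seen, awaiting its completion line
    for line in log_text.splitlines():
        m = CMD_RE.search(line)
        if m:
            pending = m.group(1)  # "READ" or "WRITE"
            continue
        c = CPL_RE.search(line)
        if c and pending:
            counts[(pending, c.group(1).strip())] += 1
            pending = None
    return counts

# Two entries copied from the log above, one per line:
sample = """
[2024-11-20 07:19:09.852077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:98880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
[2024-11-20 07:19:09.852084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
"""
print(summarize(sample))
```

On the full console output this collapses thousands of lines into a handful of (opcode, status) buckets, which makes it easy to confirm that every queued I/O was aborted with SQ DELETION during the failover rather than failing with a media or transport error.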
00:23:20.108 10725.50 IOPS, 41.90 MiB/s [2024-11-20T06:19:24.664Z] 10881.67 IOPS, 42.51 MiB/s [2024-11-20T06:19:24.664Z] 10944.75 IOPS, 42.75 MiB/s [2024-11-20T06:19:24.664Z]
[2024-11-20 07:19:13.483901] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:20.108
[2024-11-20 07:19:13.483945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.108
[... three more ASYNC EVENT REQUEST (0c) entries (qid:0 cid:2,1,0) elided, each completed ABORTED - SQ DELETION (00/08) ...]
[2024-11-20 07:19:13.484009] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210c340 is same with the state(6) to be set 00:23:20.109
[2024-11-20 07:19:13.484732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:48544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.109
[2024-11-20 07:19:13.484752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.109
[... repeated NOTICE pairs elided: WRITE commands (lba:48944-49440, len:8) and interleaved READ commands (lba:48552-48760, len:8) on sqid:1, each completed ABORTED - SQ DELETION (00/08) ...]
[2024-11-20 07:19:13.486150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:48768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.112 [2024-11-20 07:19:13.486156] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.112 [2024-11-20 07:19:13.486164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:48776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.112 [2024-11-20 07:19:13.486172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.112 [2024-11-20 07:19:13.486180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:48784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.112 [2024-11-20 07:19:13.486188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.112 [2024-11-20 07:19:13.486196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:48792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.112 [2024-11-20 07:19:13.486203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.112 [2024-11-20 07:19:13.486210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:48800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.112 [2024-11-20 07:19:13.486220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.112 [2024-11-20 07:19:13.486228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:48808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.112 [2024-11-20 07:19:13.486235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.112 [2024-11-20 07:19:13.486243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 
lba:48816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.112 [2024-11-20 07:19:13.486250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.112 [2024-11-20 07:19:13.486259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:48824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.112 [2024-11-20 07:19:13.486266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.112 [2024-11-20 07:19:13.486274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:48832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.112 [2024-11-20 07:19:13.486280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.112 [2024-11-20 07:19:13.486289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:48840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.112 [2024-11-20 07:19:13.486296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.112 [2024-11-20 07:19:13.486305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:48848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.112 [2024-11-20 07:19:13.486311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.112 [2024-11-20 07:19:13.486319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:48856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.112 [2024-11-20 07:19:13.486326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.112 
[2024-11-20 07:19:13.486334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:48864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.112 [2024-11-20 07:19:13.486340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.112 [2024-11-20 07:19:13.486349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:48872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.112 [2024-11-20 07:19:13.486356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.112 [2024-11-20 07:19:13.486364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:48880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.112 [2024-11-20 07:19:13.486370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.112 [2024-11-20 07:19:13.486378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:48888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.112 [2024-11-20 07:19:13.486385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.112 [2024-11-20 07:19:13.486392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:48896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.112 [2024-11-20 07:19:13.486399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.112 [2024-11-20 07:19:13.486407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:48904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.112 [2024-11-20 07:19:13.486414] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.112 [2024-11-20 07:19:13.486423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:48912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.112 [2024-11-20 07:19:13.486429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.112 [2024-11-20 07:19:13.486437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:48920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.112 [2024-11-20 07:19:13.486445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.112 [2024-11-20 07:19:13.486453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:48928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.112 [2024-11-20 07:19:13.486462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.112 [2024-11-20 07:19:13.486471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:48936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.112 [2024-11-20 07:19:13.486477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.112 [2024-11-20 07:19:13.486485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:49448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.112 [2024-11-20 07:19:13.486492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.112 [2024-11-20 07:19:13.486499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 
lba:49456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.112 [2024-11-20 07:19:13.486506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.112 [2024-11-20 07:19:13.486515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:49464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.112 [2024-11-20 07:19:13.486521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.112 [2024-11-20 07:19:13.486529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:49472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.112 [2024-11-20 07:19:13.486535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.112 [2024-11-20 07:19:13.486543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:49480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.112 [2024-11-20 07:19:13.486549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.112 [2024-11-20 07:19:13.486557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:49488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.112 [2024-11-20 07:19:13.486564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.112 [2024-11-20 07:19:13.486572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:49496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.113 [2024-11-20 07:19:13.486579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.113 [2024-11-20 
07:19:13.486587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:49504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.113 [2024-11-20 07:19:13.486593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.113 [2024-11-20 07:19:13.486601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:49512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.113 [2024-11-20 07:19:13.486607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.113 [2024-11-20 07:19:13.486616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:49520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.113 [2024-11-20 07:19:13.486624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.113 [2024-11-20 07:19:13.486636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:49528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.113 [2024-11-20 07:19:13.486643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.113 [2024-11-20 07:19:13.486651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:49536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.113 [2024-11-20 07:19:13.486658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.113 [2024-11-20 07:19:13.486665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:49544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.113 [2024-11-20 07:19:13.486672] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.113 [2024-11-20 07:19:13.486681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:49552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.113 [2024-11-20 07:19:13.486687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.113 [2024-11-20 07:19:13.486709] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:20.113 [2024-11-20 07:19:13.486716] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:20.113 [2024-11-20 07:19:13.486722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49560 len:8 PRP1 0x0 PRP2 0x0 00:23:20.113 [2024-11-20 07:19:13.486731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.113 [2024-11-20 07:19:13.486774] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:23:20.113 [2024-11-20 07:19:13.486783] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:23:20.113 [2024-11-20 07:19:13.489630] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:23:20.113 [2024-11-20 07:19:13.489660] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x210c340 (9): Bad file descriptor 00:23:20.113 [2024-11-20 07:19:13.515226] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful. 
00:23:20.113 10907.80 IOPS, 42.61 MiB/s [2024-11-20T06:19:24.669Z] 10933.33 IOPS, 42.71 MiB/s [2024-11-20T06:19:24.669Z] 10976.86 IOPS, 42.88 MiB/s [2024-11-20T06:19:24.669Z] 10979.25 IOPS, 42.89 MiB/s [2024-11-20T06:19:24.669Z] 11005.67 IOPS, 42.99 MiB/s [2024-11-20T06:19:24.669Z] [2024-11-20 07:19:17.919146] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:20.113 [2024-11-20 07:19:17.919180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.113 [2024-11-20 07:19:17.919189] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:20.113 [2024-11-20 07:19:17.919197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.113 [2024-11-20 07:19:17.919204] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:20.113 [2024-11-20 07:19:17.919211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.113 [2024-11-20 07:19:17.919218] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:20.113 [2024-11-20 07:19:17.919225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.113 [2024-11-20 07:19:17.919233] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210c340 is same with the state(6) to be set 00:23:20.113 [2024-11-20 07:19:17.920220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:56712 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:23:20.113 [2024-11-20 07:19:17.920238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.113 [2024-11-20 07:19:17.920251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:56720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.113 [2024-11-20 07:19:17.920259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.113 [2024-11-20 07:19:17.920267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:56728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.113 [2024-11-20 07:19:17.920274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.113 [2024-11-20 07:19:17.920282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:56736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.113 [2024-11-20 07:19:17.920289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.113 [2024-11-20 07:19:17.920298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:56744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.113 [2024-11-20 07:19:17.920305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.113 [2024-11-20 07:19:17.920313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:56752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.113 [2024-11-20 07:19:17.920320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.113 [2024-11-20 07:19:17.920328] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:56760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.113 [2024-11-20 07:19:17.920334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.113 [2024-11-20 07:19:17.920342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:56768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.113 [2024-11-20 07:19:17.920350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.113 [2024-11-20 07:19:17.920358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:56776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.113 [2024-11-20 07:19:17.920365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.113 [2024-11-20 07:19:17.920374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:56784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.113 [2024-11-20 07:19:17.920380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.113 [2024-11-20 07:19:17.920389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:56792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.113 [2024-11-20 07:19:17.920395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.113 [2024-11-20 07:19:17.920404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:56800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.113 [2024-11-20 07:19:17.920411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.113 [2024-11-20 07:19:17.920422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:56808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.113 [2024-11-20 07:19:17.920432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.113 [2024-11-20 07:19:17.920440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:56816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.113 [2024-11-20 07:19:17.920446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.113 [2024-11-20 07:19:17.920455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:56824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.113 [2024-11-20 07:19:17.920462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.113 [2024-11-20 07:19:17.920470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:56832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.114 [2024-11-20 07:19:17.920477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.114 [2024-11-20 07:19:17.920485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:56840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.114 [2024-11-20 07:19:17.920491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.114 [2024-11-20 07:19:17.920499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:56848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.114 
[2024-11-20 07:19:17.920506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.114 [2024-11-20 07:19:17.920514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:56856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.114 [2024-11-20 07:19:17.920520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.114 [2024-11-20 07:19:17.920529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:56864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.114 [2024-11-20 07:19:17.920535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.114 [2024-11-20 07:19:17.920545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:56872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.114 [2024-11-20 07:19:17.920551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.114 [2024-11-20 07:19:17.920559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:56880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.114 [2024-11-20 07:19:17.920566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.114 [2024-11-20 07:19:17.920574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:56888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.114 [2024-11-20 07:19:17.920581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.114 [2024-11-20 07:19:17.920589] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:78 nsid:1 lba:56896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.114 [2024-11-20 07:19:17.920595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.114 [2024-11-20 07:19:17.920603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:56904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.114 [2024-11-20 07:19:17.920611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.114 [2024-11-20 07:19:17.920622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:56912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.114 [2024-11-20 07:19:17.920628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.114 [2024-11-20 07:19:17.920637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:56920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.114 [2024-11-20 07:19:17.920644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.114 [2024-11-20 07:19:17.920652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:56928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.114 [2024-11-20 07:19:17.920658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.114 [2024-11-20 07:19:17.920667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:56936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.114 [2024-11-20 07:19:17.920673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:23:20.114 [2024-11-20 07:19:17.920682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:56944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.114 [2024-11-20 07:19:17.920689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.114 [2024-11-20 07:19:17.920697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:56952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.114 [2024-11-20 07:19:17.920703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.114 [2024-11-20 07:19:17.920711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:56960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.114 [2024-11-20 07:19:17.920718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.114 [2024-11-20 07:19:17.920726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:56968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.114 [2024-11-20 07:19:17.920733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.114 [2024-11-20 07:19:17.920741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:56976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.114 [2024-11-20 07:19:17.920747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.114 [2024-11-20 07:19:17.920755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:56984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.114 [2024-11-20 
07:19:17.920762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.114 [2024-11-20 07:19:17.920770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:56992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.114 [2024-11-20 07:19:17.920777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.114 [2024-11-20 07:19:17.920785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:57000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.114 [2024-11-20 07:19:17.920792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.114 [2024-11-20 07:19:17.920799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:57008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.114 [2024-11-20 07:19:17.920807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.114 [2024-11-20 07:19:17.920815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:57016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.114 [2024-11-20 07:19:17.920822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.114 [2024-11-20 07:19:17.920831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:57024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.114 [2024-11-20 07:19:17.920838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.114 [2024-11-20 07:19:17.920846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:20.117 [2024-11-20 07:19:17.922176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:57720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:20.117 [2024-11-20 07:19:17.922183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:20.117 [2024-11-20 07:19:17.922201] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:23:20.117 [2024-11-20 07:19:17.922208] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:23:20.117 [2024-11-20 07:19:17.922215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:57728 len:8 PRP1 0x0 PRP2 0x0
00:23:20.117 [2024-11-20 07:19:17.922221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:20.117 [2024-11-20 07:19:17.922265] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.2:4422 to 10.0.0.2:4420
00:23:20.117 [2024-11-20 07:19:17.922275] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state.
00:23:20.117 [2024-11-20 07:19:17.925134] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller
00:23:20.117 [2024-11-20 07:19:17.925164] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x210c340 (9): Bad file descriptor
00:23:20.117 [2024-11-20 07:19:17.952973] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful.
00:23:20.117 10987.40 IOPS, 42.92 MiB/s [2024-11-20T06:19:24.673Z]
11035.09 IOPS, 43.11 MiB/s [2024-11-20T06:19:24.673Z]
11045.58 IOPS, 43.15 MiB/s [2024-11-20T06:19:24.673Z]
11058.77 IOPS, 43.20 MiB/s [2024-11-20T06:19:24.673Z]
11061.50 IOPS, 43.21 MiB/s
00:23:20.117 Latency(us)
00:23:20.117 [2024-11-20T06:19:24.673Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:20.117 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:23:20.117 Verification LBA range: start 0x0 length 0x4000
00:23:20.117 NVMe0n1 : 15.01 11073.13 43.25 411.82 0.00 11122.59 439.87 16184.54
00:23:20.117 [2024-11-20T06:19:24.673Z] ===================================================================================================================
00:23:20.117 [2024-11-20T06:19:24.673Z] Total : 11073.13 43.25 411.82 0.00 11122.59 439.87 16184.54
00:23:20.117 Received shutdown signal, test time was about 15.000000 seconds
00:23:20.117
00:23:20.117 Latency(us)
00:23:20.117 [2024-11-20T06:19:24.673Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:20.117 [2024-11-20T06:19:24.673Z] ===================================================================================================================
00:23:20.117 [2024-11-20T06:19:24.673Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:23:20.117 07:19:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:23:20.117 07:19:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3
00:23:20.117 07:19:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:23:20.117 07:19:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=1290603
00:23:20.117 07:19:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
07:19:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 1290603 /var/tmp/bdevperf.sock 00:23:20.117 07:19:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 -- # '[' -z 1290603 ']' 00:23:20.117 07:19:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:20.117 07:19:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:20.118 07:19:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:20.118 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:20.118 07:19:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:20.118 07:19:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:20.118 07:19:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:20.118 07:19:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@866 -- # return 0 00:23:20.118 07:19:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:20.118 [2024-11-20 07:19:24.474888] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:20.118 07:19:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:23:20.376 [2024-11-20 07:19:24.671413] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:23:20.376 07:19:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:23:20.634 NVMe0n1 00:23:20.634 07:19:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:23:21.201 00:23:21.201 07:19:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:23:21.459 00:23:21.459 07:19:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:21.459 07:19:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:23:21.717 07:19:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:21.717 07:19:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:23:24.997 07:19:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:24.997 07:19:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:23:24.997 07:19:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 
00:23:24.997 07:19:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=1291386 00:23:24.997 07:19:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 1291386 00:23:26.373 { 00:23:26.373 "results": [ 00:23:26.373 { 00:23:26.373 "job": "NVMe0n1", 00:23:26.373 "core_mask": "0x1", 00:23:26.373 "workload": "verify", 00:23:26.373 "status": "finished", 00:23:26.373 "verify_range": { 00:23:26.373 "start": 0, 00:23:26.373 "length": 16384 00:23:26.373 }, 00:23:26.373 "queue_depth": 128, 00:23:26.373 "io_size": 4096, 00:23:26.373 "runtime": 1.007852, 00:23:26.373 "iops": 11096.867397197208, 00:23:26.373 "mibps": 43.34713827030159, 00:23:26.373 "io_failed": 0, 00:23:26.373 "io_timeout": 0, 00:23:26.373 "avg_latency_us": 11471.066850780619, 00:23:26.373 "min_latency_us": 1966.08, 00:23:26.373 "max_latency_us": 10200.820869565217 00:23:26.373 } 00:23:26.373 ], 00:23:26.373 "core_count": 1 00:23:26.373 } 00:23:26.373 07:19:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:26.373 [2024-11-20 07:19:24.086858] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 
00:23:26.373 [2024-11-20 07:19:24.086911] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1290603 ] 00:23:26.373 [2024-11-20 07:19:24.166574] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:26.374 [2024-11-20 07:19:24.204440] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:26.374 [2024-11-20 07:19:26.205681] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:23:26.374 [2024-11-20 07:19:26.205727] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:26.374 [2024-11-20 07:19:26.205738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.374 [2024-11-20 07:19:26.205747] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:26.374 [2024-11-20 07:19:26.205754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.374 [2024-11-20 07:19:26.205762] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:26.374 [2024-11-20 07:19:26.205769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.374 [2024-11-20 07:19:26.205776] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:26.374 [2024-11-20 07:19:26.205783] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.374 [2024-11-20 07:19:26.205790] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 00:23:26.374 [2024-11-20 07:19:26.205814] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:23:26.374 [2024-11-20 07:19:26.205828] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2340 (9): Bad file descriptor 00:23:26.374 [2024-11-20 07:19:26.310109] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:23:26.374 Running I/O for 1 seconds... 00:23:26.374 11013.00 IOPS, 43.02 MiB/s 00:23:26.374 Latency(us) 00:23:26.374 [2024-11-20T06:19:30.930Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:26.374 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:23:26.374 Verification LBA range: start 0x0 length 0x4000 00:23:26.374 NVMe0n1 : 1.01 11096.87 43.35 0.00 0.00 11471.07 1966.08 10200.82 00:23:26.374 [2024-11-20T06:19:30.930Z] =================================================================================================================== 00:23:26.374 [2024-11-20T06:19:30.930Z] Total : 11096.87 43.35 0.00 0.00 11471.07 1966.08 10200.82 00:23:26.374 07:19:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:26.374 07:19:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:23:26.374 07:19:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:26.632 07:19:30 
nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:26.632 07:19:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:23:26.632 07:19:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:26.890 07:19:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:23:30.172 07:19:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:30.172 07:19:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:23:30.172 07:19:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 1290603 00:23:30.172 07:19:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # '[' -z 1290603 ']' 00:23:30.172 07:19:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # kill -0 1290603 00:23:30.172 07:19:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # uname 00:23:30.173 07:19:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:30.173 07:19:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1290603 00:23:30.173 07:19:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:23:30.173 07:19:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:23:30.173 07:19:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1290603' 00:23:30.173 killing 
process with pid 1290603 00:23:30.173 07:19:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@971 -- # kill 1290603 00:23:30.173 07:19:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@976 -- # wait 1290603 00:23:30.431 07:19:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:23:30.431 07:19:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:30.689 07:19:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:23:30.689 07:19:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:30.689 07:19:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:23:30.689 07:19:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:30.689 07:19:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:23:30.689 07:19:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:30.689 07:19:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:23:30.689 07:19:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:30.689 07:19:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:30.689 rmmod nvme_tcp 00:23:30.689 rmmod nvme_fabrics 00:23:30.689 rmmod nvme_keyring 00:23:30.689 07:19:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:30.689 07:19:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:23:30.689 07:19:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:23:30.689 07:19:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 1287658 ']' 00:23:30.689 07:19:35 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 1287658 00:23:30.689 07:19:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # '[' -z 1287658 ']' 00:23:30.690 07:19:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # kill -0 1287658 00:23:30.690 07:19:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # uname 00:23:30.690 07:19:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:30.690 07:19:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1287658 00:23:30.690 07:19:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:23:30.690 07:19:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:23:30.690 07:19:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1287658' 00:23:30.690 killing process with pid 1287658 00:23:30.690 07:19:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@971 -- # kill 1287658 00:23:30.690 07:19:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@976 -- # wait 1287658 00:23:30.950 07:19:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:30.950 07:19:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:30.950 07:19:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:30.950 07:19:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:23:30.950 07:19:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:23:30.950 07:19:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:30.950 07:19:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:23:30.950 07:19:35 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:30.950 07:19:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:30.950 07:19:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:30.950 07:19:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:30.950 07:19:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:32.856 07:19:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:32.856 00:23:32.856 real 0m37.630s 00:23:32.856 user 1m59.278s 00:23:32.856 sys 0m7.995s 00:23:32.856 07:19:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1128 -- # xtrace_disable 00:23:32.856 07:19:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:32.856 ************************************ 00:23:32.856 END TEST nvmf_failover 00:23:32.856 ************************************ 00:23:33.116 07:19:37 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:23:33.116 07:19:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:23:33.116 07:19:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:23:33.116 07:19:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.116 ************************************ 00:23:33.116 START TEST nvmf_host_discovery 00:23:33.116 ************************************ 00:23:33.116 07:19:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:23:33.116 * Looking for test storage... 
00:23:33.116 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:33.116 07:19:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:23:33.116 07:19:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1691 -- # lcov --version 00:23:33.116 07:19:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:23:33.116 07:19:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:23:33.116 07:19:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:33.116 07:19:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:33.116 07:19:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:33.116 07:19:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:23:33.116 07:19:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:23:33.116 07:19:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:23:33.116 07:19:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:23:33.116 07:19:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:23:33.116 07:19:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:23:33.116 07:19:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:23:33.116 07:19:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:33.116 07:19:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:23:33.116 07:19:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:23:33.116 07:19:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:23:33.116 07:19:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:33.116 07:19:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:23:33.116 07:19:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:23:33.116 07:19:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:33.116 07:19:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:23:33.116 07:19:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:23:33.116 07:19:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:23:33.116 07:19:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:23:33.116 07:19:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:33.116 07:19:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:23:33.116 07:19:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:23:33.116 07:19:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:33.116 07:19:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:33.116 07:19:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:23:33.116 07:19:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:33.116 07:19:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:23:33.116 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:33.116 --rc genhtml_branch_coverage=1 00:23:33.116 --rc genhtml_function_coverage=1 00:23:33.116 --rc 
genhtml_legend=1 00:23:33.116 --rc geninfo_all_blocks=1 00:23:33.116 --rc geninfo_unexecuted_blocks=1 00:23:33.116 00:23:33.116 ' 00:23:33.116 07:19:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:23:33.116 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:33.116 --rc genhtml_branch_coverage=1 00:23:33.116 --rc genhtml_function_coverage=1 00:23:33.116 --rc genhtml_legend=1 00:23:33.116 --rc geninfo_all_blocks=1 00:23:33.116 --rc geninfo_unexecuted_blocks=1 00:23:33.116 00:23:33.116 ' 00:23:33.116 07:19:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:23:33.116 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:33.116 --rc genhtml_branch_coverage=1 00:23:33.116 --rc genhtml_function_coverage=1 00:23:33.116 --rc genhtml_legend=1 00:23:33.116 --rc geninfo_all_blocks=1 00:23:33.116 --rc geninfo_unexecuted_blocks=1 00:23:33.116 00:23:33.116 ' 00:23:33.116 07:19:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:23:33.116 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:33.116 --rc genhtml_branch_coverage=1 00:23:33.116 --rc genhtml_function_coverage=1 00:23:33.116 --rc genhtml_legend=1 00:23:33.116 --rc geninfo_all_blocks=1 00:23:33.116 --rc geninfo_unexecuted_blocks=1 00:23:33.116 00:23:33.116 ' 00:23:33.116 07:19:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:33.116 07:19:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:23:33.116 07:19:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:33.117 07:19:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:33.117 07:19:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:33.117 07:19:37 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:33.117 07:19:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:33.117 07:19:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:33.117 07:19:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:33.117 07:19:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:33.117 07:19:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:33.117 07:19:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:33.117 07:19:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:23:33.117 07:19:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:23:33.117 07:19:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:33.117 07:19:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:33.117 07:19:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:33.117 07:19:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:33.117 07:19:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:33.117 07:19:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:23:33.117 07:19:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:33.117 07:19:37 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:33.117 07:19:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:33.117 07:19:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:33.117 07:19:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:33.117 07:19:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:33.117 07:19:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:23:33.117 07:19:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:33.117 07:19:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:23:33.117 07:19:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:33.117 07:19:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:33.117 07:19:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:33.117 07:19:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:33.117 07:19:37 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:33.117 07:19:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:33.117 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:33.117 07:19:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:33.117 07:19:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:33.117 07:19:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:33.117 07:19:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:23:33.117 07:19:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:23:33.117 07:19:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:23:33.117 07:19:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:23:33.117 07:19:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:23:33.117 07:19:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:23:33.117 07:19:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:23:33.117 07:19:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:33.117 07:19:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:33.117 07:19:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:33.117 07:19:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:33.117 07:19:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 
00:23:33.117 07:19:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:33.117 07:19:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:33.117 07:19:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:33.377 07:19:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:33.377 07:19:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:33.377 07:19:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:23:33.377 07:19:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:39.949 07:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:39.949 07:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:23:39.949 07:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:39.949 07:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:39.949 07:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:39.949 07:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:39.949 07:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:39.949 07:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:23:39.949 07:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:39.949 07:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:23:39.949 07:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:23:39.949 
07:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:23:39.949 07:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:23:39.949 07:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:23:39.949 07:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:23:39.949 07:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:39.949 07:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:39.949 07:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:39.950 07:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:39.950 07:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:39.950 07:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:39.950 07:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:39.950 07:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:39.950 07:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:39.950 07:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:39.950 07:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:39.950 07:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:39.950 07:19:43 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:39.950 07:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:39.950 07:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:39.950 07:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:39.950 07:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:39.950 07:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:39.950 07:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:39.950 07:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:39.950 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:39.950 07:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:39.950 07:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:39.950 07:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:39.950 07:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:39.950 07:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:39.950 07:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:39.950 07:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:39.950 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:39.950 07:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:39.950 07:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 
00:23:39.950 07:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:39.950 07:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:39.950 07:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:39.950 07:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:39.950 07:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:39.950 07:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:39.950 07:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:39.950 07:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:39.950 07:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:39.950 07:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:39.950 07:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:39.950 07:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:39.950 07:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:39.950 07:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:39.950 Found net devices under 0000:86:00.0: cvl_0_0 00:23:39.950 07:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:39.950 07:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:39.950 07:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:39.950 07:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:39.950 07:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:39.950 07:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:39.950 07:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:39.950 07:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:39.950 07:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:39.950 Found net devices under 0000:86:00.1: cvl_0_1 00:23:39.950 07:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:39.950 07:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:39.950 07:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:23:39.950 07:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:39.950 07:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:39.950 07:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:39.950 07:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:39.950 07:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:39.950 07:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:39.950 07:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:39.950 07:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:39.950 07:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:39.950 07:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:39.950 07:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:39.950 07:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:39.950 07:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:39.950 07:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:39.950 07:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:39.950 07:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:39.950 07:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:39.950 07:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:39.950 07:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:39.950 07:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:39.950 07:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:39.950 07:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:39.950 07:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:39.950 07:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- 
# ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:39.950 07:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:39.950 07:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:39.950 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:39.950 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.277 ms 00:23:39.950 00:23:39.950 --- 10.0.0.2 ping statistics --- 00:23:39.950 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:39.950 rtt min/avg/max/mdev = 0.277/0.277/0.277/0.000 ms 00:23:39.950 07:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:39.950 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:39.950 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.118 ms 00:23:39.950 00:23:39.950 --- 10.0.0.1 ping statistics --- 00:23:39.950 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:39.950 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:23:39.950 07:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:39.950 07:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # return 0 00:23:39.951 07:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:39.951 07:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:39.951 07:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:39.951 07:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:39.951 07:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:39.951 
07:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:39.951 07:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:39.951 07:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:23:39.951 07:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:39.951 07:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:39.951 07:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:39.951 07:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=1295828 00:23:39.951 07:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:39.951 07:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 1295828 00:23:39.951 07:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@833 -- # '[' -z 1295828 ']' 00:23:39.951 07:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:39.951 07:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:39.951 07:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:39.951 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:23:39.951 07:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:39.951 07:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:39.951 [2024-11-20 07:19:43.661166] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 00:23:39.951 [2024-11-20 07:19:43.661210] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:39.951 [2024-11-20 07:19:43.740734] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:39.951 [2024-11-20 07:19:43.782356] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:39.951 [2024-11-20 07:19:43.782392] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:39.951 [2024-11-20 07:19:43.782399] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:39.951 [2024-11-20 07:19:43.782405] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:39.951 [2024-11-20 07:19:43.782411] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:39.951 [2024-11-20 07:19:43.783012] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:39.951 07:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:39.951 07:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@866 -- # return 0 00:23:39.951 07:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:39.951 07:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:39.951 07:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:39.951 07:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:39.951 07:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:39.951 07:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:39.951 07:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:39.951 [2024-11-20 07:19:43.928398] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:39.951 07:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:39.951 07:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:23:39.951 07:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:39.951 07:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:39.951 [2024-11-20 07:19:43.940583] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:23:39.951 07:19:43 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:39.951 07:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:23:39.951 07:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:39.951 07:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:39.951 null0 00:23:39.951 07:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:39.951 07:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:23:39.951 07:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:39.951 07:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:39.951 null1 00:23:39.951 07:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:39.951 07:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:23:39.951 07:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:39.951 07:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:39.951 07:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:39.951 07:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=1295935 00:23:39.951 07:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:23:39.951 07:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 1295935 /tmp/host.sock 00:23:39.951 07:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@833 -- # '[' -z 1295935 ']' 00:23:39.951 07:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@837 -- # local rpc_addr=/tmp/host.sock 00:23:39.951 07:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:39.951 07:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:23:39.951 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:23:39.951 07:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:39.951 07:19:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:39.951 [2024-11-20 07:19:44.017990] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 00:23:39.951 [2024-11-20 07:19:44.018035] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1295935 ] 00:23:39.951 [2024-11-20 07:19:44.090675] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:39.951 [2024-11-20 07:19:44.133889] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:39.951 07:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:39.951 07:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@866 -- # return 0 00:23:39.951 07:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:39.951 07:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:23:39.951 
07:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:39.951 07:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:39.951 07:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:39.951 07:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:23:39.951 07:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:39.951 07:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:39.951 07:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:39.951 07:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:23:39.951 07:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:23:39.952 07:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:39.952 07:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:39.952 07:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:39.952 07:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:39.952 07:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:39.952 07:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:39.952 07:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:39.952 07:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:23:39.952 07:19:44 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:23:39.952 07:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:39.952 07:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:39.952 07:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:39.952 07:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:39.952 07:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:39.952 07:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:39.952 07:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:39.952 07:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:23:39.952 07:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:23:39.952 07:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:39.952 07:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:39.952 07:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:39.952 07:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:23:39.952 07:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:39.952 07:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:39.952 07:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:39.952 07:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 
00:23:39.952 07:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:39.952 07:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:39.952 07:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:39.952 07:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:23:39.952 07:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:23:39.952 07:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:39.952 07:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:39.952 07:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:39.952 07:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:39.952 07:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:39.952 07:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:39.952 07:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:39.952 07:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:23:39.952 07:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:23:39.952 07:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:39.952 07:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:39.952 07:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:39.952 07:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:23:39.952 
07:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:39.952 07:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:39.952 07:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:39.952 07:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:39.952 07:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:39.952 07:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:39.952 07:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:40.212 07:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:23:40.212 07:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:23:40.212 07:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:40.212 07:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:40.212 07:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:40.212 07:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:40.212 07:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:40.212 07:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:40.212 07:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:40.212 07:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:23:40.212 07:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 
10.0.0.2 -s 4420 00:23:40.212 07:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:40.212 07:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:40.212 [2024-11-20 07:19:44.554157] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:40.212 07:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:40.212 07:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:23:40.212 07:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:40.212 07:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:40.212 07:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:40.212 07:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:40.212 07:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:40.212 07:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:40.212 07:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:40.212 07:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:23:40.212 07:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:23:40.212 07:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:40.212 07:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:40.212 07:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:40.212 07:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@55 -- # sort 00:23:40.212 07:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:40.212 07:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:40.212 07:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:40.212 07:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:23:40.212 07:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:23:40.212 07:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:23:40.212 07:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:40.212 07:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:40.212 07:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:23:40.212 07:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:23:40.212 07:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:40.212 07:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:23:40.212 07:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:23:40.212 07:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:23:40.212 07:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:40.212 07:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:40.212 07:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:40.212 07:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:23:40.212 07:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:23:40.212 07:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:23:40.212 07:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:23:40.212 07:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:23:40.212 07:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:40.212 07:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:40.212 07:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:40.212 07:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:40.212 07:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:40.212 07:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:23:40.212 07:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:23:40.212 07:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 
00:23:40.212 07:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:23:40.212 07:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:40.212 07:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:40.213 07:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:40.213 07:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:40.213 07:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:40.213 07:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:40.213 07:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:40.213 07:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ '' == \n\v\m\e\0 ]] 00:23:40.213 07:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # sleep 1 00:23:40.799 [2024-11-20 07:19:45.306450] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:40.799 [2024-11-20 07:19:45.306469] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:40.799 [2024-11-20 07:19:45.306481] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:41.123 [2024-11-20 07:19:45.392743] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:23:41.123 [2024-11-20 07:19:45.447311] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:23:41.123 [2024-11-20 07:19:45.448039] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 
1] Connecting qpair 0x7c6df0:1 started. 00:23:41.123 [2024-11-20 07:19:45.449438] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:23:41.123 [2024-11-20 07:19:45.449454] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:41.123 [2024-11-20 07:19:45.454992] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x7c6df0 was disconnected and freed. delete nvme_qpair. 00:23:41.382 07:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:23:41.382 07:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:41.382 07:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:23:41.382 07:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:41.382 07:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:41.382 07:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:41.382 07:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:41.382 07:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:41.382 07:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:41.382 07:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:41.382 07:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:41.382 07:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:23:41.382 07:19:45 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:23:41.382 07:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:23:41.382 07:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:23:41.382 07:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:23:41.382 07:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:23:41.382 07:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:23:41.382 07:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:41.382 07:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:41.382 07:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:41.382 07:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:41.382 07:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:41.382 07:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:41.382 07:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:41.382 07:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:23:41.382 07:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:23:41.382 07:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:23:41.382 07:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:23:41.382 07:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:23:41.382 07:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:23:41.382 07:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:23:41.382 07:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_paths nvme0 00:23:41.382 07:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:41.382 07:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:41.382 07:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:23:41.382 07:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:23:41.382 07:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:41.382 07:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:41.382 07:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:41.382 07:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ 4420 == \4\4\2\0 ]] 00:23:41.382 07:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:23:41.382 07:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:23:41.382 07:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:23:41.382 07:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 
'get_notification_count && ((notification_count == expected_count))' 00:23:41.382 07:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:41.382 07:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:23:41.382 07:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:23:41.382 07:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:41.382 07:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:23:41.382 07:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:23:41.382 07:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:23:41.382 07:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:41.382 07:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:41.382 07:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:41.641 07:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:23:41.641 07:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:23:41.641 07:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:23:41.641 07:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:23:41.641 07:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:23:41.641 07:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:41.641 07:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:41.641 07:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:41.641 07:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:41.641 07:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:41.641 07:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:23:41.641 07:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:23:41.641 07:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:23:41.641 
07:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:23:41.641 07:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:41.641 07:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:41.641 07:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:41.641 07:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:41.641 07:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:41.641 07:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:41.900 [2024-11-20 07:19:46.199695] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x795620:1 started. 00:23:41.900 [2024-11-20 07:19:46.206959] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x795620 was disconnected and freed. delete nvme_qpair. 
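The `waitforcondition` calls driving this section (autotest_common.sh@916-922) follow a fixed shape visible in the trace: `local max=10`, a `(( max-- ))` guard, an `eval` of the condition string, and `sleep 1` between attempts. A minimal re-creation of that polling helper, reconstructed from those xtrace lines (the demo condition and the `names` variable are hypothetical stand-ins):

```shell
# Reconstructed from the xtrace above: poll a shell condition up to
# 10 times, one second apart; succeed as soon as it holds.
waitforcondition() {
    local cond=$1
    local max=10
    while ((max--)); do
        if eval "$cond"; then
            return 0
        fi
        sleep 1
    done
    return 1
}

# Hypothetical usage: wait until a controller name shows up.
names="nvme0"   # stand-in for "$(get_subsystem_names)"
waitforcondition '[[ "$names" == "nvme0" ]]' && echo "condition met"
```

On the first poll the condition already holds, so the helper returns immediately; in the trace the same loop is what turns the `[[ '' == \n\v\m\e\0 ]]` miss at 07:19:44 into the `[[ nvme0 == \n\v\m\e\0 ]]` hit one `sleep 1` later.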
00:23:41.900 07:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:41.900 07:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:41.901 07:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:23:41.901 07:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:23:41.901 07:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:23:41.901 07:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:41.901 07:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:41.901 07:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:23:41.901 07:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:23:41.901 07:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:41.901 07:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:23:41.901 07:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:23:41.901 07:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:23:41.901 07:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:41.901 07:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:41.901 07:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:41.901 07:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:23:41.901 07:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:23:41.901 07:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:23:41.901 07:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:23:41.901 07:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:23:41.901 07:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:41.901 07:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:41.901 [2024-11-20 07:19:46.278777] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:41.901 [2024-11-20 07:19:46.279166] bdev_nvme.c:7366:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:23:41.901 [2024-11-20 07:19:46.279184] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:41.901 07:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:41.901 07:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:41.901 07:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ 
"$(get_subsystem_names)" == "nvme0" ]]' 00:23:41.901 07:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:23:41.901 07:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:23:41.901 07:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:41.901 07:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:23:41.901 07:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:41.901 07:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:41.901 07:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:41.901 07:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:41.901 07:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:41.901 07:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:41.901 07:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:41.901 07:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:41.901 07:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:23:41.901 07:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:41.901 07:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:41.901 07:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:23:41.901 07:19:46 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:23:41.901 07:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:23:41.901 07:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:23:41.901 07:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:41.901 07:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:41.901 07:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:41.901 07:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:41.901 07:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:41.901 07:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:41.901 [2024-11-20 07:19:46.366584] bdev_nvme.c:7308:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:23:41.901 07:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:41.901 07:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:41.901 07:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:23:41.901 07:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:23:41.901 07:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:23:41.901 07:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@917 -- # local max=10 00:23:41.901 07:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:23:41.901 07:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:23:41.901 07:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_paths nvme0 00:23:41.901 07:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:41.901 07:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:41.901 07:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:41.901 07:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:23:41.901 07:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:41.901 07:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:23:41.901 07:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:41.901 07:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:23:41.901 07:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # sleep 1 00:23:42.160 [2024-11-20 07:19:46.594764] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4421 00:23:42.160 [2024-11-20 07:19:46.594797] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:23:42.160 [2024-11-20 07:19:46.594805] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 
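The `get_subsystem_paths` helper (host/discovery.sh@63) builds the `"4420 4421"` string that discovery.sh@122 waits for by piping each controller path's `trsvcid` through `sort -n | xargs`. That normalization step can be shown in isolation, with the port values taken from the trace in place of the real `bdev_nvme_get_controllers` output:

```shell
# The get_subsystem_paths pipeline, minus the RPC call: sort -n puts
# the service IDs in numeric order and xargs joins them into one
# space-separated line, matching the "$NVMF_PORT $NVMF_SECOND_PORT"
# comparison string.
printf '4421\n4420\n' | sort -n | xargs
# prints: 4420 4421
```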
00:23:42.160 [2024-11-20 07:19:46.594810] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:43.096 07:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:23:43.096 07:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:23:43.096 07:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_paths nvme0 00:23:43.096 07:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:43.096 07:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:43.096 07:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:43.096 07:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:23:43.096 07:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:43.096 07:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:23:43.096 07:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:43.096 07:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:23:43.096 07:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:23:43.096 07:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:23:43.096 07:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:23:43.096 07:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 
'get_notification_count && ((notification_count == expected_count))' 00:23:43.096 07:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:43.096 07:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:23:43.096 07:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:23:43.096 07:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:43.096 07:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:23:43.096 07:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:23:43.096 07:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:23:43.096 07:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:43.096 07:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:43.096 07:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:43.096 07:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:23:43.096 07:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:23:43.096 07:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:23:43.096 07:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:23:43.096 07:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:43.096 07:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:43.096 07:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:43.096 [2024-11-20 07:19:47.534410] bdev_nvme.c:7366:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:23:43.096 [2024-11-20 07:19:47.534432] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:43.097 07:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:43.097 07:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:43.097 07:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:43.097 07:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local 
max=10 00:23:43.097 07:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:23:43.097 07:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:43.097 07:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:23:43.097 [2024-11-20 07:19:47.544109] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:43.097 [2024-11-20 07:19:47.544126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:43.097 [2024-11-20 07:19:47.544135] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:43.097 [2024-11-20 07:19:47.544142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:43.097 [2024-11-20 07:19:47.544150] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:43.097 [2024-11-20 07:19:47.544157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:43.097 [2024-11-20 07:19:47.544165] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:43.097 [2024-11-20 07:19:47.544172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:43.097 [2024-11-20 07:19:47.544179] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x797390 is same with the state(6) to be set 00:23:43.097 07:19:47 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:43.097 07:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:43.097 07:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:43.097 07:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:43.097 07:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:43.097 07:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:43.097 [2024-11-20 07:19:47.554122] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x797390 (9): Bad file descriptor 00:23:43.097 07:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:43.097 [2024-11-20 07:19:47.564159] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:23:43.097 [2024-11-20 07:19:47.564170] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:23:43.097 [2024-11-20 07:19:47.564174] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:23:43.097 [2024-11-20 07:19:47.564179] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:43.097 [2024-11-20 07:19:47.564197] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
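The odd-looking right-hand sides in comparisons such as `[[ nvme0 == \n\v\m\e\0 ]]` are not part of the test scripts; they are how bash's xtrace renders a quoted `==` operand, escaping every character so the pattern stays literal when re-read. A small reproduction of that rendering (the `name` variable is a hypothetical stand-in for `$(get_subsystem_names)`):

```shell
# With xtrace on, bash prints the quoted RHS of [[ == ]] with each
# character backslash-escaped; the underlying test is a plain string
# match. The escaped form appears only in the -x trace on stderr.
set -x
name="nvme0"
[[ "$name" == "nvme0" ]] && echo match
set +x
```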
00:23:43.097 [2024-11-20 07:19:47.564363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:43.097 [2024-11-20 07:19:47.564378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x797390 with addr=10.0.0.2, port=4420 00:23:43.097 [2024-11-20 07:19:47.564388] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x797390 is same with the state(6) to be set 00:23:43.097 [2024-11-20 07:19:47.564399] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x797390 (9): Bad file descriptor 00:23:43.097 [2024-11-20 07:19:47.564409] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:43.097 [2024-11-20 07:19:47.564416] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:43.097 [2024-11-20 07:19:47.564425] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:23:43.097 [2024-11-20 07:19:47.564432] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:23:43.097 [2024-11-20 07:19:47.564437] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:23:43.097 [2024-11-20 07:19:47.564442] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:23:43.097 [2024-11-20 07:19:47.574228] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:23:43.097 [2024-11-20 07:19:47.574239] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
00:23:43.097 [2024-11-20 07:19:47.574243] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:23:43.097 [2024-11-20 07:19:47.574247] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:43.097 [2024-11-20 07:19:47.574260] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:23:43.097 [2024-11-20 07:19:47.574404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:43.097 [2024-11-20 07:19:47.574417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x797390 with addr=10.0.0.2, port=4420 00:23:43.097 [2024-11-20 07:19:47.574424] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x797390 is same with the state(6) to be set 00:23:43.097 [2024-11-20 07:19:47.574435] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x797390 (9): Bad file descriptor 00:23:43.097 [2024-11-20 07:19:47.574445] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:43.097 [2024-11-20 07:19:47.574451] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:43.097 [2024-11-20 07:19:47.574459] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:23:43.097 [2024-11-20 07:19:47.574468] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:23:43.097 [2024-11-20 07:19:47.574473] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:23:43.097 [2024-11-20 07:19:47.574477] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:23:43.097 [2024-11-20 07:19:47.584291] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:23:43.097 [2024-11-20 07:19:47.584304] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:23:43.097 [2024-11-20 07:19:47.584309] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:23:43.097 [2024-11-20 07:19:47.584313] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:43.097 [2024-11-20 07:19:47.584328] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:23:43.097 [2024-11-20 07:19:47.584576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:43.097 [2024-11-20 07:19:47.584589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x797390 with addr=10.0.0.2, port=4420 00:23:43.097 [2024-11-20 07:19:47.584597] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x797390 is same with the state(6) to be set 00:23:43.097 [2024-11-20 07:19:47.584608] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x797390 (9): Bad file descriptor 00:23:43.097 [2024-11-20 07:19:47.584631] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:43.097 [2024-11-20 07:19:47.584640] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:43.097 [2024-11-20 07:19:47.584648] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:23:43.097 [2024-11-20 07:19:47.584654] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:23:43.097 [2024-11-20 07:19:47.584658] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:23:43.097 [2024-11-20 07:19:47.584662] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:23:43.097 07:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:43.097 07:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:23:43.097 07:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:43.097 07:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:43.097 07:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:23:43.097 07:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:23:43.098 07:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:23:43.098 07:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:23:43.098 [2024-11-20 07:19:47.594358] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:23:43.098 [2024-11-20 07:19:47.594370] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:23:43.098 [2024-11-20 07:19:47.594375] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 
00:23:43.098 [2024-11-20 07:19:47.594379] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:43.098 [2024-11-20 07:19:47.594395] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:23:43.098 [2024-11-20 07:19:47.594464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:43.098 [2024-11-20 07:19:47.594476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x797390 with addr=10.0.0.2, port=4420 00:23:43.098 [2024-11-20 07:19:47.594484] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x797390 is same with the state(6) to be set 00:23:43.098 [2024-11-20 07:19:47.594494] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x797390 (9): Bad file descriptor 00:23:43.098 [2024-11-20 07:19:47.594504] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:43.098 [2024-11-20 07:19:47.594527] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:43.098 [2024-11-20 07:19:47.594536] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:23:43.098 [2024-11-20 07:19:47.594542] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:23:43.098 [2024-11-20 07:19:47.594546] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:23:43.098 [2024-11-20 07:19:47.594550] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:23:43.098 07:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:43.098 07:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:43.098 07:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:43.098 07:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:43.098 07:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:43.098 07:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:43.098 [2024-11-20 07:19:47.604426] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:23:43.098 [2024-11-20 07:19:47.604440] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:23:43.098 [2024-11-20 07:19:47.604444] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:23:43.098 [2024-11-20 07:19:47.604449] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:43.098 [2024-11-20 07:19:47.604464] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:23:43.098 [2024-11-20 07:19:47.604648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:43.098 [2024-11-20 07:19:47.604662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x797390 with addr=10.0.0.2, port=4420 00:23:43.098 [2024-11-20 07:19:47.604671] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x797390 is same with the state(6) to be set 00:23:43.098 [2024-11-20 07:19:47.604681] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x797390 (9): Bad file descriptor 00:23:43.098 [2024-11-20 07:19:47.604707] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:43.098 [2024-11-20 07:19:47.604716] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:43.098 [2024-11-20 07:19:47.604724] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:23:43.098 [2024-11-20 07:19:47.604730] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:23:43.098 [2024-11-20 07:19:47.604735] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:23:43.098 [2024-11-20 07:19:47.604746] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:23:43.098 [2024-11-20 07:19:47.614495] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:23:43.098 [2024-11-20 07:19:47.614506] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
00:23:43.098 [2024-11-20 07:19:47.614510] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:23:43.098 [2024-11-20 07:19:47.614514] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:43.098 [2024-11-20 07:19:47.614527] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:23:43.098 [2024-11-20 07:19:47.614772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:43.098 [2024-11-20 07:19:47.614784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x797390 with addr=10.0.0.2, port=4420 00:23:43.098 [2024-11-20 07:19:47.614792] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x797390 is same with the state(6) to be set 00:23:43.098 [2024-11-20 07:19:47.614803] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x797390 (9): Bad file descriptor 00:23:43.098 [2024-11-20 07:19:47.614812] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:43.098 [2024-11-20 07:19:47.614819] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:43.098 [2024-11-20 07:19:47.614826] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:23:43.098 [2024-11-20 07:19:47.614832] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:23:43.098 [2024-11-20 07:19:47.614837] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:23:43.098 [2024-11-20 07:19:47.614841] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:23:43.098 [2024-11-20 07:19:47.620369] bdev_nvme.c:7171:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:23:43.098 [2024-11-20 07:19:47.620385] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:43.098 07:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:43.098 07:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:43.098 07:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:23:43.098 07:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:23:43.098 07:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:23:43.098 07:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:23:43.098 07:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:23:43.098 07:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:23:43.098 07:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_paths nvme0 00:23:43.098 07:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:43.098 07:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:43.098 07:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:23:43.098 07:19:47 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:23:43.098 07:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:43.098 07:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:43.357 07:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:43.357 07:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ 4421 == \4\4\2\1 ]] 00:23:43.357 07:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:23:43.357 07:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:23:43.357 07:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:23:43.357 07:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:43.357 07:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:43.357 07:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:23:43.357 07:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:23:43.357 07:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:43.357 07:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:23:43.357 07:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:23:43.357 07:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:23:43.357 07:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:43.357 07:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:43.357 07:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:43.357 07:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:23:43.357 07:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:23:43.357 07:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:23:43.357 07:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:23:43.357 07:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:23:43.357 07:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:43.357 07:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:43.357 07:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:43.357 07:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:23:43.357 07:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:23:43.357 07:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:23:43.357 07:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:23:43.357 07:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:23:43.357 07:19:47 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:23:43.357 07:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:43.357 07:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:43.357 07:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:43.357 07:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:43.357 07:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:43.357 07:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:43.357 07:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:43.357 07:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ '' == '' ]] 00:23:43.357 07:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:23:43.357 07:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:23:43.357 07:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:23:43.357 07:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:23:43.357 07:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:23:43.357 07:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:23:43.357 07:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:23:43.357 07:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:43.357 
07:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:43.357 07:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:43.357 07:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:43.357 07:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:43.357 07:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:43.357 07:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:43.357 07:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ '' == '' ]] 00:23:43.357 07:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:23:43.357 07:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:23:43.357 07:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:23:43.357 07:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:43.357 07:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:43.357 07:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:23:43.357 07:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:23:43.357 07:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:43.357 07:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:23:43.357 07:19:47 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:23:43.358 07:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:23:43.358 07:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:43.358 07:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:43.358 07:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:43.358 07:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:23:43.358 07:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:23:43.358 07:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:23:43.358 07:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:23:43.358 07:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:43.358 07:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:43.358 07:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:44.731 [2024-11-20 07:19:48.905070] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:44.731 [2024-11-20 07:19:48.905085] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:44.731 [2024-11-20 07:19:48.905095] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:44.731 [2024-11-20 07:19:49.033484] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] 
NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:23:44.731 [2024-11-20 07:19:49.099097] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.2:4421 00:23:44.731 [2024-11-20 07:19:49.099687] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x7945a0:1 started. 00:23:44.731 [2024-11-20 07:19:49.101282] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:23:44.731 [2024-11-20 07:19:49.101307] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:44.731 07:19:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:44.731 07:19:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:44.731 07:19:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:23:44.731 07:19:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:44.731 07:19:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:23:44.731 [2024-11-20 07:19:49.104530] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x7945a0 was disconnected and freed. delete nvme_qpair. 
00:23:44.731 07:19:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:44.731 07:19:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:23:44.731 07:19:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:44.731 07:19:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:44.731 07:19:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:44.731 07:19:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:44.731 request: 00:23:44.731 { 00:23:44.731 "name": "nvme", 00:23:44.731 "trtype": "tcp", 00:23:44.731 "traddr": "10.0.0.2", 00:23:44.731 "adrfam": "ipv4", 00:23:44.731 "trsvcid": "8009", 00:23:44.731 "hostnqn": "nqn.2021-12.io.spdk:test", 00:23:44.731 "wait_for_attach": true, 00:23:44.731 "method": "bdev_nvme_start_discovery", 00:23:44.731 "req_id": 1 00:23:44.731 } 00:23:44.731 Got JSON-RPC error response 00:23:44.731 response: 00:23:44.731 { 00:23:44.731 "code": -17, 00:23:44.731 "message": "File exists" 00:23:44.731 } 00:23:44.731 07:19:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:23:44.731 07:19:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:23:44.731 07:19:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:44.731 07:19:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:44.731 07:19:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:44.731 07:19:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # 
get_discovery_ctrlrs 00:23:44.731 07:19:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:44.731 07:19:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:23:44.731 07:19:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:44.731 07:19:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:23:44.731 07:19:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:44.731 07:19:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:23:44.731 07:19:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:44.732 07:19:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:23:44.732 07:19:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:23:44.732 07:19:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:44.732 07:19:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:44.732 07:19:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:44.732 07:19:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:44.732 07:19:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:44.732 07:19:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:44.732 07:19:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:44.732 07:19:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:44.732 07:19:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:44.732 07:19:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:23:44.732 07:19:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:44.732 07:19:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:23:44.732 07:19:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:44.732 07:19:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:23:44.732 07:19:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:44.732 07:19:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:44.732 07:19:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:44.732 07:19:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:44.732 request: 00:23:44.732 { 00:23:44.732 "name": "nvme_second", 00:23:44.732 "trtype": "tcp", 00:23:44.732 "traddr": "10.0.0.2", 00:23:44.732 "adrfam": "ipv4", 00:23:44.732 "trsvcid": "8009", 00:23:44.732 "hostnqn": "nqn.2021-12.io.spdk:test", 00:23:44.732 "wait_for_attach": true, 00:23:44.732 "method": "bdev_nvme_start_discovery", 00:23:44.732 "req_id": 1 00:23:44.732 } 00:23:44.732 Got JSON-RPC error response 00:23:44.732 response: 00:23:44.732 { 00:23:44.732 "code": -17, 00:23:44.732 "message": "File exists" 00:23:44.732 } 
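The error codes in the JSON-RPC responses above (and in the timeout failure later in this run) follow the convention, which appears to hold for the bdev_nvme_* methods in this log, of returning negated POSIX errno values: -17 is EEXIST ("File exists", the discovery service was already started for that name) and -110 is ETIMEDOUT ("Connection timed out"). A minimal sketch of decoding such a code, assuming a Linux libc for the exact message strings:

```python
import os

def describe_rpc_error(code: int) -> str:
    # Negate the JSON-RPC error code to recover the POSIX errno,
    # then look up its human-readable description.
    return os.strerror(-code)

print(describe_rpc_error(-17))   # "File exists" (EEXIST)
print(describe_rpc_error(-110))  # "Connection timed out" (ETIMEDOUT)
```

This is only an interpretation aid for reading the log; the test itself checks these failures via its NOT/es bookkeeping rather than by decoding the errno.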
00:23:44.732 07:19:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:23:44.732 07:19:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:23:44.732 07:19:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:44.732 07:19:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:44.732 07:19:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:44.732 07:19:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:23:44.732 07:19:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:44.732 07:19:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:23:44.732 07:19:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:44.732 07:19:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:23:44.732 07:19:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:44.732 07:19:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:23:44.732 07:19:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:44.990 07:19:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:23:44.990 07:19:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:23:44.990 07:19:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:44.990 07:19:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:44.990 07:19:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:23:44.990 07:19:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:44.990 07:19:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:44.990 07:19:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:44.990 07:19:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:44.990 07:19:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:44.990 07:19:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:23:44.990 07:19:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:23:44.990 07:19:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:23:44.990 07:19:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:23:44.990 07:19:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:44.990 07:19:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:23:44.990 07:19:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:44.990 07:19:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:23:44.990 07:19:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:23:44.990 07:19:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:45.925 [2024-11-20 07:19:50.336680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:45.925 [2024-11-20 07:19:50.336713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f4560 with addr=10.0.0.2, port=8010 00:23:45.925 [2024-11-20 07:19:50.336730] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:23:45.925 [2024-11-20 07:19:50.336738] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:23:45.925 [2024-11-20 07:19:50.336744] bdev_nvme.c:7452:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:23:46.862 [2024-11-20 07:19:51.339166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:46.862 [2024-11-20 07:19:51.339192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8f4560 with addr=10.0.0.2, port=8010 00:23:46.862 [2024-11-20 07:19:51.339205] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:23:46.862 [2024-11-20 07:19:51.339212] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:23:46.862 [2024-11-20 07:19:51.339219] bdev_nvme.c:7452:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:23:47.798 [2024-11-20 07:19:52.341354] bdev_nvme.c:7427:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:23:47.798 request: 00:23:47.798 { 00:23:47.799 "name": "nvme_second", 00:23:47.799 "trtype": "tcp", 00:23:47.799 "traddr": "10.0.0.2", 00:23:47.799 "adrfam": "ipv4", 00:23:47.799 "trsvcid": "8010", 00:23:47.799 "hostnqn": "nqn.2021-12.io.spdk:test", 00:23:47.799 "wait_for_attach": false, 00:23:47.799 "attach_timeout_ms": 3000, 00:23:47.799 "method": "bdev_nvme_start_discovery", 00:23:47.799 "req_id": 1 
00:23:47.799 } 00:23:47.799 Got JSON-RPC error response 00:23:47.799 response: 00:23:47.799 { 00:23:47.799 "code": -110, 00:23:47.799 "message": "Connection timed out" 00:23:47.799 } 00:23:47.799 07:19:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:23:47.799 07:19:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:23:47.799 07:19:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:47.799 07:19:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:47.799 07:19:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:48.057 07:19:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:23:48.057 07:19:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:48.057 07:19:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:23:48.057 07:19:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:48.057 07:19:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:23:48.057 07:19:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:48.057 07:19:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:23:48.057 07:19:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:48.057 07:19:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:23:48.058 07:19:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:23:48.058 07:19:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 1295935 00:23:48.058 07:19:52 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:23:48.058 07:19:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:48.058 07:19:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:23:48.058 07:19:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:48.058 07:19:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:23:48.058 07:19:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:48.058 07:19:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:48.058 rmmod nvme_tcp 00:23:48.058 rmmod nvme_fabrics 00:23:48.058 rmmod nvme_keyring 00:23:48.058 07:19:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:48.058 07:19:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:23:48.058 07:19:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:23:48.058 07:19:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 1295828 ']' 00:23:48.058 07:19:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 1295828 00:23:48.058 07:19:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@952 -- # '[' -z 1295828 ']' 00:23:48.058 07:19:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # kill -0 1295828 00:23:48.058 07:19:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@957 -- # uname 00:23:48.058 07:19:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:48.058 07:19:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1295828 00:23:48.058 07:19:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # 
process_name=reactor_1 00:23:48.058 07:19:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:23:48.058 07:19:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1295828' 00:23:48.058 killing process with pid 1295828 00:23:48.058 07:19:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@971 -- # kill 1295828 00:23:48.058 07:19:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@976 -- # wait 1295828 00:23:48.318 07:19:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:48.318 07:19:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:48.318 07:19:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:48.318 07:19:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:23:48.318 07:19:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:23:48.318 07:19:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:48.318 07:19:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:23:48.318 07:19:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:48.318 07:19:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:48.318 07:19:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:48.318 07:19:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:48.318 07:19:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:50.224 07:19:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 
addr flush cvl_0_1 00:23:50.224 00:23:50.224 real 0m17.293s 00:23:50.224 user 0m20.626s 00:23:50.224 sys 0m5.888s 00:23:50.224 07:19:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1128 -- # xtrace_disable 00:23:50.224 07:19:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:50.224 ************************************ 00:23:50.224 END TEST nvmf_host_discovery 00:23:50.224 ************************************ 00:23:50.484 07:19:54 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:23:50.484 07:19:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:23:50.484 07:19:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:23:50.484 07:19:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:50.484 ************************************ 00:23:50.484 START TEST nvmf_host_multipath_status 00:23:50.484 ************************************ 00:23:50.484 07:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:23:50.484 * Looking for test storage... 
00:23:50.484 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:50.484 07:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:23:50.484 07:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1691 -- # lcov --version 00:23:50.484 07:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:23:50.484 07:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:23:50.484 07:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:50.484 07:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:50.484 07:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:50.484 07:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:23:50.484 07:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:23:50.484 07:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:23:50.484 07:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:23:50.484 07:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:23:50.484 07:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:23:50.484 07:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:23:50.484 07:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:50.484 07:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:23:50.484 07:19:54 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:23:50.484 07:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:50.484 07:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:50.484 07:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:23:50.484 07:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:23:50.484 07:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:50.484 07:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:23:50.484 07:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:23:50.484 07:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:23:50.484 07:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:23:50.484 07:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:50.484 07:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:23:50.484 07:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:23:50.485 07:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:50.485 07:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:50.485 07:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:23:50.485 07:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:50.485 07:19:54 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:23:50.485 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:50.485 --rc genhtml_branch_coverage=1 00:23:50.485 --rc genhtml_function_coverage=1 00:23:50.485 --rc genhtml_legend=1 00:23:50.485 --rc geninfo_all_blocks=1 00:23:50.485 --rc geninfo_unexecuted_blocks=1 00:23:50.485 00:23:50.485 ' 00:23:50.485 07:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:23:50.485 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:50.485 --rc genhtml_branch_coverage=1 00:23:50.485 --rc genhtml_function_coverage=1 00:23:50.485 --rc genhtml_legend=1 00:23:50.485 --rc geninfo_all_blocks=1 00:23:50.485 --rc geninfo_unexecuted_blocks=1 00:23:50.485 00:23:50.485 ' 00:23:50.485 07:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:23:50.485 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:50.485 --rc genhtml_branch_coverage=1 00:23:50.485 --rc genhtml_function_coverage=1 00:23:50.485 --rc genhtml_legend=1 00:23:50.485 --rc geninfo_all_blocks=1 00:23:50.485 --rc geninfo_unexecuted_blocks=1 00:23:50.485 00:23:50.485 ' 00:23:50.485 07:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:23:50.485 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:50.485 --rc genhtml_branch_coverage=1 00:23:50.485 --rc genhtml_function_coverage=1 00:23:50.485 --rc genhtml_legend=1 00:23:50.485 --rc geninfo_all_blocks=1 00:23:50.485 --rc geninfo_unexecuted_blocks=1 00:23:50.485 00:23:50.485 ' 00:23:50.485 07:19:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:50.485 07:19:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:23:50.485 
07:19:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:50.485 07:19:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:50.485 07:19:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:50.485 07:19:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:50.485 07:19:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:50.485 07:19:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:50.485 07:19:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:50.485 07:19:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:50.485 07:19:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:50.485 07:19:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:50.485 07:19:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:23:50.485 07:19:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:23:50.485 07:19:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:50.485 07:19:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:50.485 07:19:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:50.485 07:19:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:23:50.485 07:19:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:50.485 07:19:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:23:50.485 07:19:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:50.485 07:19:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:50.485 07:19:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:50.485 07:19:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:50.485 07:19:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:50.485 07:19:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:50.485 07:19:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:23:50.485 07:19:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:50.485 07:19:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:23:50.485 07:19:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:50.485 07:19:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:50.485 07:19:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:50.485 07:19:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:50.485 07:19:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:50.485 07:19:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:50.485 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:50.485 07:19:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:50.485 07:19:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:50.485 07:19:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:50.485 07:19:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 
00:23:50.485 07:19:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:23:50.485 07:19:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:50.485 07:19:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:23:50.485 07:19:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:50.485 07:19:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:23:50.746 07:19:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:23:50.746 07:19:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:50.746 07:19:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:50.746 07:19:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:50.746 07:19:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:50.746 07:19:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:50.746 07:19:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:50.746 07:19:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:50.746 07:19:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:50.746 07:19:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:50.746 07:19:55 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:23:50.746 07:19:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable
00:23:50.746 07:19:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:23:57.317 07:20:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:23:57.317 07:20:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=()
00:23:57.317 07:20:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs
00:23:57.317 07:20:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=()
00:23:57.317 07:20:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:23:57.317 07:20:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=()
00:23:57.317 07:20:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers
00:23:57.317 07:20:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=()
00:23:57.317 07:20:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs
00:23:57.317 07:20:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=()
00:23:57.317 07:20:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810
00:23:57.317 07:20:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=()
00:23:57.317 07:20:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722
00:23:57.317 07:20:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=()
00:23:57.317 07:20:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx
00:23:57.317 07:20:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:23:57.317 07:20:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:23:57.317 07:20:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:23:57.317 07:20:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:23:57.317 07:20:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:23:57.317 07:20:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:23:57.317 07:20:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:23:57.317 07:20:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:23:57.317 07:20:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:23:57.317 07:20:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:23:57.317 07:20:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:23:57.317 07:20:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:23:57.317 07:20:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:23:57.317 07:20:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:23:57.317 07:20:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:23:57.317 07:20:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:23:57.317 07:20:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:23:57.317 07:20:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:23:57.317 07:20:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:23:57.317 07:20:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)'
00:23:57.317 Found 0000:86:00.0 (0x8086 - 0x159b)
00:23:57.317 07:20:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:23:57.317 07:20:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:23:57.317 07:20:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:23:57.317 07:20:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:23:57.317 07:20:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:23:57.317 07:20:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:23:57.317 07:20:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)'
00:23:57.317 Found 0000:86:00.1 (0x8086 - 0x159b)
00:23:57.317 07:20:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:23:57.317 07:20:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:23:57.317 07:20:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:23:57.317 07:20:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:23:57.318 07:20:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:23:57.318 07:20:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:23:57.318 07:20:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:23:57.318 07:20:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:23:57.318 07:20:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:23:57.318 07:20:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:23:57.318 07:20:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:23:57.318 07:20:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:23:57.318 07:20:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]]
00:23:57.318 07:20:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:23:57.318 07:20:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:23:57.318 07:20:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0'
00:23:57.318 Found net devices under 0000:86:00.0: cvl_0_0
00:23:57.318 07:20:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:23:57.318 07:20:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:23:57.318 07:20:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:23:57.318 07:20:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:23:57.318 07:20:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:23:57.318 07:20:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]]
00:23:57.318 07:20:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:23:57.318 07:20:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:23:57.318 07:20:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1'
00:23:57.318 Found net devices under 0000:86:00.1: cvl_0_1
00:23:57.318 07:20:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:23:57.318 07:20:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:23:57.318 07:20:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # is_hw=yes
00:23:57.318 07:20:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:23:57.318 07:20:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:23:57.318 07:20:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:23:57.318 07:20:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:23:57.318 07:20:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:23:57.318 07:20:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:23:57.318 07:20:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:23:57.318 07:20:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:23:57.318 07:20:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:23:57.318 07:20:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:23:57.318 07:20:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:23:57.318 07:20:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:23:57.318 07:20:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:23:57.318 07:20:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:23:57.318 07:20:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:23:57.318 07:20:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:23:57.318 07:20:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:23:57.318 07:20:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:23:57.318 07:20:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:23:57.318 07:20:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:23:57.318 07:20:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:23:57.318 07:20:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:23:57.318 07:20:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:23:57.318 07:20:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:23:57.318 07:20:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:23:57.318 07:20:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:23:57.318 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:23:57.318 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.430 ms
00:23:57.318
00:23:57.318 --- 10.0.0.2 ping statistics ---
00:23:57.318 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:23:57.318 rtt min/avg/max/mdev = 0.430/0.430/0.430/0.000 ms
00:23:57.318 07:20:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:23:57.318 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:23:57.318 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.187 ms
00:23:57.318
00:23:57.318 --- 10.0.0.1 ping statistics ---
00:23:57.318 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:23:57.318 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms
00:23:57.318 07:20:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:23:57.318 07:20:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # return 0
00:23:57.318 07:20:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:23:57.318 07:20:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:23:57.318 07:20:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:23:57.318 07:20:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:23:57.318 07:20:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:23:57.318 07:20:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:23:57.318 07:20:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:23:57.318 07:20:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3
00:23:57.318 07:20:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:23:57.318 07:20:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@724 -- # xtrace_disable
00:23:57.318 07:20:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:23:57.318 07:20:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=1300919
00:23:57.318 07:20:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3
00:23:57.319 07:20:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 1300919
00:23:57.319 07:20:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # '[' -z 1300919 ']'
00:23:57.319 07:20:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:23:57.319 07:20:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # local max_retries=100
00:23:57.319 07:20:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:23:57.319 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:23:57.319 07:20:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # xtrace_disable
00:23:57.319 07:20:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:23:57.319 [2024-11-20 07:20:01.017786] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization...
00:23:57.319 [2024-11-20 07:20:01.017835] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:23:57.319 [2024-11-20 07:20:01.101162] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:23:57.319 [2024-11-20 07:20:01.143192] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:23:57.319 [2024-11-20 07:20:01.143228] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:23:57.319 [2024-11-20 07:20:01.143236] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:23:57.319 [2024-11-20 07:20:01.143242] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:23:57.319 [2024-11-20 07:20:01.143248] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:23:57.319 [2024-11-20 07:20:01.144478] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:23:57.319 [2024-11-20 07:20:01.144479] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:23:57.319 07:20:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:23:57.319 07:20:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@866 -- # return 0
00:23:57.319 07:20:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:23:57.319 07:20:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@730 -- # xtrace_disable
00:23:57.319 07:20:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:23:57.319 07:20:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:23:57.319 07:20:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=1300919
00:23:57.319 07:20:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:23:57.319 [2024-11-20 07:20:01.466727] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:23:57.319 07:20:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
00:23:57.319 Malloc0
00:23:57.319 07:20:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
00:23:57.578 07:20:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:23:57.578 07:20:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:23:57.837 [2024-11-20 07:20:02.282828] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:23:57.837 07:20:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:23:58.096 [2024-11-20 07:20:02.471269] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:23:58.096 07:20:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90
00:23:58.096 07:20:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=1301202
00:23:58.096 07:20:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:23:58.096 07:20:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 1301202 /var/tmp/bdevperf.sock
00:23:58.096 07:20:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # '[' -z 1301202 ']'
00:23:58.096 07:20:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:23:58.096 07:20:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # local max_retries=100
00:23:58.096 07:20:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:23:58.096 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:23:58.096 07:20:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # xtrace_disable
00:23:58.096 07:20:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:23:58.355 07:20:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:23:58.355 07:20:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@866 -- # return 0
00:23:58.355 07:20:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
00:23:58.615 07:20:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10
00:23:58.874 Nvme0n1
00:23:59.132 07:20:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10
00:23:59.391 Nvme0n1
00:23:59.391 07:20:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests
00:23:59.391 07:20:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2
00:24:01.928 07:20:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized
00:24:01.928 07:20:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized
00:24:01.928 07:20:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized
00:24:01.928 07:20:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1
00:24:02.865 07:20:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true
00:24:02.865 07:20:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:24:02.865 07:20:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:24:02.865 07:20:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:24:03.125 07:20:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:24:03.125 07:20:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false
00:24:03.125 07:20:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:24:03.125 07:20:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:24:03.384 07:20:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:24:03.384 07:20:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:24:03.384 07:20:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:24:03.384 07:20:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:24:03.643 07:20:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:24:03.643 07:20:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:24:03.643 07:20:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:24:03.643 07:20:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:24:03.643 07:20:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:24:03.643 07:20:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:24:03.643 07:20:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:24:03.643 07:20:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:24:03.902 07:20:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:24:03.902 07:20:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:24:03.902 07:20:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:24:03.902 07:20:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:24:04.160 07:20:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:24:04.160 07:20:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized
00:24:04.160 07:20:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
00:24:04.420 07:20:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized
00:24:04.679 07:20:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1
00:24:05.615 07:20:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true
00:24:05.615 07:20:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false
00:24:05.615 07:20:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:24:05.615 07:20:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:24:05.874 07:20:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:24:05.874 07:20:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true
00:24:05.874 07:20:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:24:05.874 07:20:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:24:05.874 07:20:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:24:05.874 07:20:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:24:05.874 07:20:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:24:05.874 07:20:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:24:06.134 07:20:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:24:06.134 07:20:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:24:06.134 07:20:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:24:06.134 07:20:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:24:06.393 07:20:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:24:06.393 07:20:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:24:06.393 07:20:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:24:06.393 07:20:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:24:06.651 07:20:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:24:06.651 07:20:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:24:06.651 07:20:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:24:06.651 07:20:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:24:06.910 07:20:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:24:06.910 07:20:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized
00:24:06.910 07:20:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
00:24:06.910 07:20:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized
00:24:07.168 07:20:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1
00:24:08.545 07:20:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true
00:24:08.545 07:20:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:24:08.545 07:20:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:24:08.545 07:20:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:24:08.545 07:20:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:24:08.545 07:20:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false
00:24:08.545 07:20:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:24:08.545 07:20:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:24:08.545 07:20:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:24:08.545 07:20:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:24:08.545 07:20:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:24:08.545 07:20:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:24:08.804 07:20:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:24:08.804 07:20:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:24:08.804 07:20:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:24:08.804 07:20:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:24:09.063 07:20:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:24:09.063 07:20:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:24:09.063 07:20:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:24:09.063 07:20:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:24:09.322 07:20:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:24:09.322 07:20:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:24:09.322 07:20:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:24:09.322 07:20:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:24:09.581 07:20:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:24:09.581 07:20:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible
00:24:09.581 07:20:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
00:24:09.840 07:20:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:24:09.840 07:20:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:24:11.218 07:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:24:11.218 07:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:11.218 07:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:11.218 07:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:11.218 07:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:11.218 07:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:11.218 07:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:11.218 07:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:11.477 07:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:11.477 07:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:11.477 07:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:11.477 07:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:11.477 07:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:11.477 07:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:11.477 07:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:11.477 07:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:11.736 07:20:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:11.736 07:20:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:11.736 07:20:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:11.736 07:20:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:11.995 07:20:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:11.995 07:20:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:24:11.995 07:20:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:11.995 07:20:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:12.254 07:20:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:12.254 07:20:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:24:12.255 07:20:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:24:12.513 07:20:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:24:12.513 07:20:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:24:13.890 07:20:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:24:13.890 07:20:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:13.890 07:20:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:13.890 07:20:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:13.890 07:20:18 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:13.890 07:20:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:13.890 07:20:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:13.890 07:20:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:14.150 07:20:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:14.150 07:20:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:14.150 07:20:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:14.150 07:20:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:14.150 07:20:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:14.150 07:20:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:14.150 07:20:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:14.150 07:20:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:14.408 
07:20:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:14.408 07:20:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:24:14.408 07:20:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:14.408 07:20:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:14.667 07:20:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:14.667 07:20:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:24:14.667 07:20:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:14.667 07:20:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:14.924 07:20:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:14.924 07:20:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:24:14.924 07:20:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:24:14.924 07:20:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:15.182 07:20:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:24:16.118 07:20:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:24:16.118 07:20:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:16.118 07:20:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:16.118 07:20:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:16.377 07:20:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:16.377 07:20:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:16.377 07:20:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:16.377 07:20:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:16.637 07:20:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:16.637 07:20:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:16.637 07:20:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:16.637 07:20:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:16.896 07:20:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:16.896 07:20:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:16.896 07:20:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:16.896 07:20:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:17.156 07:20:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:17.156 07:20:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:24:17.156 07:20:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:17.156 07:20:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:17.156 07:20:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:17.156 07:20:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:17.156 07:20:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:17.156 07:20:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:17.416 07:20:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:17.416 07:20:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:24:17.674 07:20:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:24:17.674 07:20:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:24:17.933 07:20:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:18.192 07:20:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:24:19.129 07:20:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:24:19.129 07:20:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:19.129 07:20:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:24:19.129 07:20:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:19.388 07:20:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:19.388 07:20:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:19.388 07:20:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:19.388 07:20:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:19.647 07:20:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:19.647 07:20:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:19.647 07:20:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:19.647 07:20:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:19.647 07:20:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:19.647 07:20:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:19.647 07:20:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:19.647 
07:20:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:19.907 07:20:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:19.907 07:20:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:19.907 07:20:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:19.907 07:20:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:20.166 07:20:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:20.166 07:20:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:20.166 07:20:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:20.166 07:20:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:20.426 07:20:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:20.426 07:20:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:24:20.426 07:20:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:20.686 07:20:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:20.686 07:20:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:24:22.061 07:20:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:24:22.061 07:20:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:22.061 07:20:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:22.061 07:20:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:22.061 07:20:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:22.061 07:20:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:22.061 07:20:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:22.061 07:20:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:22.320 07:20:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:22.320 07:20:26 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:22.320 07:20:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:22.320 07:20:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:22.579 07:20:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:22.579 07:20:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:22.579 07:20:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:22.579 07:20:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:22.579 07:20:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:22.579 07:20:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:22.579 07:20:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:22.579 07:20:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:22.837 07:20:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:22.837 
07:20:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:22.837 07:20:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:22.837 07:20:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:23.095 07:20:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:23.095 07:20:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:24:23.095 07:20:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:23.354 07:20:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:24:23.612 07:20:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:24:24.548 07:20:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:24:24.548 07:20:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:24.548 07:20:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:24.548 07:20:28 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:24.811 07:20:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:24.811 07:20:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:24.811 07:20:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:24.811 07:20:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:24.811 07:20:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:24.811 07:20:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:24.811 07:20:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:24.811 07:20:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:25.074 07:20:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:25.074 07:20:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:25.074 07:20:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:25.074 07:20:29 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:25.333 07:20:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:25.333 07:20:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:25.333 07:20:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:25.333 07:20:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:25.591 07:20:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:25.591 07:20:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:25.591 07:20:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:25.591 07:20:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:25.850 07:20:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:25.850 07:20:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:24:25.850 07:20:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:26.108 07:20:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:24:26.108 07:20:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:24:27.096 07:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:24:27.096 07:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:27.367 07:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:27.368 07:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:27.368 07:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:27.368 07:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:27.368 07:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:27.368 07:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:27.637 07:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:27.637 07:20:32 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:27.637 07:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:27.637 07:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:27.896 07:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:27.896 07:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:27.896 07:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:27.896 07:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:28.156 07:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:28.156 07:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:28.156 07:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:28.156 07:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:28.415 07:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:28.415 
07:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:24:28.415 07:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:28.415 07:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:28.415 07:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:28.415 07:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 1301202 00:24:28.415 07:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # '[' -z 1301202 ']' 00:24:28.415 07:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # kill -0 1301202 00:24:28.415 07:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # uname 00:24:28.415 07:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:28.415 07:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1301202 00:24:28.678 07:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:24:28.678 07:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:24:28.678 07:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1301202' 00:24:28.678 killing process with pid 1301202 00:24:28.678 07:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@971 -- # kill 1301202 00:24:28.678 
07:20:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@976 -- # wait 1301202 00:24:28.678 { 00:24:28.678 "results": [ 00:24:28.678 { 00:24:28.678 "job": "Nvme0n1", 00:24:28.678 "core_mask": "0x4", 00:24:28.678 "workload": "verify", 00:24:28.678 "status": "terminated", 00:24:28.678 "verify_range": { 00:24:28.678 "start": 0, 00:24:28.678 "length": 16384 00:24:28.678 }, 00:24:28.678 "queue_depth": 128, 00:24:28.678 "io_size": 4096, 00:24:28.678 "runtime": 28.955271, 00:24:28.678 "iops": 10523.33442156352, 00:24:28.678 "mibps": 41.1067750842325, 00:24:28.678 "io_failed": 0, 00:24:28.678 "io_timeout": 0, 00:24:28.678 "avg_latency_us": 12143.454491402832, 00:24:28.678 "min_latency_us": 616.1808695652173, 00:24:28.678 "max_latency_us": 3019898.88 00:24:28.678 } 00:24:28.678 ], 00:24:28.678 "core_count": 1 00:24:28.678 } 00:24:28.678 07:20:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 1301202 00:24:28.678 07:20:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:28.678 [2024-11-20 07:20:02.531567] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 00:24:28.678 [2024-11-20 07:20:02.531623] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1301202 ] 00:24:28.678 [2024-11-20 07:20:02.607622] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:28.678 [2024-11-20 07:20:02.648944] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:28.678 Running I/O for 90 seconds... 
00:24:28.678 11237.00 IOPS, 43.89 MiB/s [2024-11-20T06:20:33.234Z] 11304.00 IOPS, 44.16 MiB/s [2024-11-20T06:20:33.234Z] 11288.33 IOPS, 44.10 MiB/s [2024-11-20T06:20:33.235Z] 11277.75 IOPS, 44.05 MiB/s [2024-11-20T06:20:33.235Z] 11292.00 IOPS, 44.11 MiB/s [2024-11-20T06:20:33.235Z] 11325.17 IOPS, 44.24 MiB/s [2024-11-20T06:20:33.235Z] 11333.86 IOPS, 44.27 MiB/s [2024-11-20T06:20:33.235Z] 11340.38 IOPS, 44.30 MiB/s [2024-11-20T06:20:33.235Z] 11349.44 IOPS, 44.33 MiB/s [2024-11-20T06:20:33.235Z] 11346.40 IOPS, 44.32 MiB/s [2024-11-20T06:20:33.235Z] 11357.91 IOPS, 44.37 MiB/s [2024-11-20T06:20:33.235Z] 11349.00 IOPS, 44.33 MiB/s [2024-11-20T06:20:33.235Z] [2024-11-20 07:20:16.804378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:114520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.679 [2024-11-20 07:20:16.804418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:28.679 [2024-11-20 07:20:16.804453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:114528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.679 [2024-11-20 07:20:16.804462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:28.679 [2024-11-20 07:20:16.804475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:114536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.679 [2024-11-20 07:20:16.804483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:28.679 [2024-11-20 07:20:16.804496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:114544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.679 [2024-11-20 07:20:16.804504] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:28.679 [2024-11-20 07:20:16.804517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:114552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.679 [2024-11-20 07:20:16.804523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:28.679 [2024-11-20 07:20:16.804536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:114560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.679 [2024-11-20 07:20:16.804543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:28.679 [2024-11-20 07:20:16.804555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:114568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.679 [2024-11-20 07:20:16.804562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:28.679 [2024-11-20 07:20:16.804575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:114576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.679 [2024-11-20 07:20:16.804581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:28.679 [2024-11-20 07:20:16.804594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:114584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.679 [2024-11-20 07:20:16.804601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:28.679 [2024-11-20 07:20:16.804614] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:114592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.679 [2024-11-20 07:20:16.804626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:28.679 [2024-11-20 07:20:16.804638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:114600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.679 [2024-11-20 07:20:16.804645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:28.679 [2024-11-20 07:20:16.804657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:114608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.679 [2024-11-20 07:20:16.804664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:28.679 [2024-11-20 07:20:16.804677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:114616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.679 [2024-11-20 07:20:16.804684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:28.679 [2024-11-20 07:20:16.804696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:114624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.679 [2024-11-20 07:20:16.804703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:28.679 [2024-11-20 07:20:16.804716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:114632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.679 [2024-11-20 07:20:16.804723] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:28.679 [2024-11-20 07:20:16.804736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:114640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.679 [2024-11-20 07:20:16.804743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:28.679 [2024-11-20 07:20:16.804756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:114648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.679 [2024-11-20 07:20:16.804762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:28.679 [2024-11-20 07:20:16.804776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:114656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.679 [2024-11-20 07:20:16.804782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:28.679 [2024-11-20 07:20:16.804795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:114664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.679 [2024-11-20 07:20:16.804801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:28.679 [2024-11-20 07:20:16.804814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:114672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.679 [2024-11-20 07:20:16.804820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:28.679 [2024-11-20 07:20:16.804833] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:114680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.679 [2024-11-20 07:20:16.804840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:28.679 [2024-11-20 07:20:16.804852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:114688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.679 [2024-11-20 07:20:16.804861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:28.679 [2024-11-20 07:20:16.804873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:114696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.679 [2024-11-20 07:20:16.804880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:28.679 [2024-11-20 07:20:16.804892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:114704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.679 [2024-11-20 07:20:16.804899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:28.679 [2024-11-20 07:20:16.804912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:114712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.679 [2024-11-20 07:20:16.804918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:28.679 [2024-11-20 07:20:16.804931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:114720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.679 [2024-11-20 07:20:16.804938] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:28.679 [2024-11-20 07:20:16.804956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:114728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.679 [2024-11-20 07:20:16.804965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:28.679 [2024-11-20 07:20:16.804977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:114736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.679 [2024-11-20 07:20:16.804984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:28.679 [2024-11-20 07:20:16.804997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:114744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.679 [2024-11-20 07:20:16.805004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:28.679 [2024-11-20 07:20:16.805017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:114752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.680 [2024-11-20 07:20:16.805023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:28.680 [2024-11-20 07:20:16.805036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:114760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.680 [2024-11-20 07:20:16.805043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:28.680 [2024-11-20 07:20:16.805055] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:114768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.680 [2024-11-20 07:20:16.805062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:28.680 [2024-11-20 07:20:16.805074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:114776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.680 [2024-11-20 07:20:16.805081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:28.680 [2024-11-20 07:20:16.805096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:114784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.680 [2024-11-20 07:20:16.805103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:28.680 [2024-11-20 07:20:16.805191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:114792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.680 [2024-11-20 07:20:16.805201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:28.680 [2024-11-20 07:20:16.805216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:114800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.680 [2024-11-20 07:20:16.805223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:28.680 [2024-11-20 07:20:16.805237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:114808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.680 [2024-11-20 07:20:16.805244] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:28.680 [2024-11-20 07:20:16.805258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:114816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.680 [2024-11-20 07:20:16.805265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:28.680 [2024-11-20 07:20:16.805279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:114824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.680 [2024-11-20 07:20:16.805286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:28.680 [2024-11-20 07:20:16.805300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:114832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.680 [2024-11-20 07:20:16.805307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:28.680 [2024-11-20 07:20:16.805321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:114840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.680 [2024-11-20 07:20:16.805328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:28.680 [2024-11-20 07:20:16.805342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:114848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.680 [2024-11-20 07:20:16.805349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:28.680 [2024-11-20 07:20:16.805364] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:114856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.680 [2024-11-20 07:20:16.805371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:28.680 [2024-11-20 07:20:16.805385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:114864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.680 [2024-11-20 07:20:16.805392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:28.680 [2024-11-20 07:20:16.805407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:114872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.680 [2024-11-20 07:20:16.805414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:28.680 [2024-11-20 07:20:16.805427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:114880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.680 [2024-11-20 07:20:16.805434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:28.680 [2024-11-20 07:20:16.805450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:114888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.680 [2024-11-20 07:20:16.805457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:28.680 [2024-11-20 07:20:16.805472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.680 [2024-11-20 07:20:16.805478] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:28.680 [2024-11-20 07:20:16.805493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:114904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.680 [2024-11-20 07:20:16.805500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:28.680 [2024-11-20 07:20:16.805516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:114912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.680 [2024-11-20 07:20:16.805523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:28.680 [2024-11-20 07:20:16.805538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:114920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.680 [2024-11-20 07:20:16.805544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:28.680 [2024-11-20 07:20:16.805558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:114928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.680 [2024-11-20 07:20:16.805565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:28.680 [2024-11-20 07:20:16.805579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:114936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.680 [2024-11-20 07:20:16.805591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:28.680 [2024-11-20 07:20:16.805605] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:114944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.680 [2024-11-20 07:20:16.805612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:28.680 [2024-11-20 07:20:16.805626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:114952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.680 [2024-11-20 07:20:16.805633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:28.680 [2024-11-20 07:20:16.805648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:114960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.680 [2024-11-20 07:20:16.805654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:28.680 [2024-11-20 07:20:16.805668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:114968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.680 [2024-11-20 07:20:16.805675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:28.680 [2024-11-20 07:20:16.805689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:114976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.680 [2024-11-20 07:20:16.805696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:28.680 [2024-11-20 07:20:16.805710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:114984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.680 [2024-11-20 07:20:16.805718] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:28.680 [2024-11-20 07:20:16.805733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:114992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.680 [2024-11-20 07:20:16.805739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:28.680 [2024-11-20 07:20:16.805754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:115000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.680 [2024-11-20 07:20:16.805761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:28.680 [2024-11-20 07:20:16.805775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:115008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.681 [2024-11-20 07:20:16.805781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:28.681 [2024-11-20 07:20:16.805795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:115016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.681 [2024-11-20 07:20:16.805802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:28.681 [2024-11-20 07:20:16.805816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:115024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.681 [2024-11-20 07:20:16.805822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:28.681 [2024-11-20 07:20:16.806039] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:115032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.681 [2024-11-20 07:20:16.806048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:28.681 [2024-11-20 07:20:16.806065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:115040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.681 [2024-11-20 07:20:16.806073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:28.681 [2024-11-20 07:20:16.806089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:115048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.681 [2024-11-20 07:20:16.806096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:28.681 [2024-11-20 07:20:16.806111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:115056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.681 [2024-11-20 07:20:16.806118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:28.681 [2024-11-20 07:20:16.806135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:115064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.681 [2024-11-20 07:20:16.806142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:28.681 [2024-11-20 07:20:16.806158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:115072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.681 [2024-11-20 07:20:16.806164] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:28.681 [2024-11-20 07:20:16.806180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:115080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.681 [2024-11-20 07:20:16.806189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:28.681 [2024-11-20 07:20:16.806205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:115088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.681 [2024-11-20 07:20:16.806212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:28.681 [2024-11-20 07:20:16.806228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:115096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.681 [2024-11-20 07:20:16.806234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:28.681 [2024-11-20 07:20:16.806250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:115104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.681 [2024-11-20 07:20:16.806257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:28.681 [2024-11-20 07:20:16.806273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:115112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.681 [2024-11-20 07:20:16.806279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:28.681 [2024-11-20 07:20:16.806295] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:115120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.681 [2024-11-20 07:20:16.806302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:28.681 [2024-11-20 07:20:16.806318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:115128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.681 [2024-11-20 07:20:16.806325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:28.681 [2024-11-20 07:20:16.806340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:115136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.681 [2024-11-20 07:20:16.806347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:28.681 [2024-11-20 07:20:16.806363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:115144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.681 [2024-11-20 07:20:16.806370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:28.681 [2024-11-20 07:20:16.806385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:115152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.681 [2024-11-20 07:20:16.806392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:28.681 [2024-11-20 07:20:16.806408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:115160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.681 [2024-11-20 07:20:16.806415] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:28.681 [2024-11-20 07:20:16.806431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:115168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.681 [2024-11-20 07:20:16.806438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:28.681 [2024-11-20 07:20:16.806453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:115176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.681 [2024-11-20 07:20:16.806460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:28.681 [2024-11-20 07:20:16.806477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:115184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.681 [2024-11-20 07:20:16.806485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:28.681 [2024-11-20 07:20:16.806502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:115192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.681 [2024-11-20 07:20:16.806509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:28.681 [2024-11-20 07:20:16.806524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:115200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.681 [2024-11-20 07:20:16.806531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:28.681 [2024-11-20 07:20:16.806547] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:115208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.681 [2024-11-20 07:20:16.806554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:28.681 [2024-11-20 07:20:16.806570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:115216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.681 [2024-11-20 07:20:16.806577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:28.681 [2024-11-20 07:20:16.806592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:115224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.681 [2024-11-20 07:20:16.806599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:28.681 [2024-11-20 07:20:16.806615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:115232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.681 [2024-11-20 07:20:16.806621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:28.681 [2024-11-20 07:20:16.806637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:115240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.681 [2024-11-20 07:20:16.806644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:28.681 [2024-11-20 07:20:16.806660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:115248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.681 [2024-11-20 07:20:16.806667] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:28.681 [2024-11-20 07:20:16.806683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:115256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.682 [2024-11-20 07:20:16.806690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:28.682 [2024-11-20 07:20:16.806705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:115264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.682 [2024-11-20 07:20:16.806712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:28.682 [2024-11-20 07:20:16.806727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:115272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.682 [2024-11-20 07:20:16.806734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:28.682 [2024-11-20 07:20:16.806752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:115280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.682 [2024-11-20 07:20:16.806759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:28.682 [2024-11-20 07:20:16.806774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:115288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.682 [2024-11-20 07:20:16.806781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:28.682 [2024-11-20 07:20:16.806797] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:115296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.682 [2024-11-20 07:20:16.806804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:28.682 [2024-11-20 07:20:16.806882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:115304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.682 [2024-11-20 07:20:16.806891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:28.682 [2024-11-20 07:20:16.806909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:115312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.682 [2024-11-20 07:20:16.806918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:28.682 [2024-11-20 07:20:16.806936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:115320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.682 [2024-11-20 07:20:16.806943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:28.682 [2024-11-20 07:20:16.806964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:115328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.682 [2024-11-20 07:20:16.806972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:28.682 [2024-11-20 07:20:16.806989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:115336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.682 [2024-11-20 07:20:16.806996] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:28.682 [2024-11-20 07:20:16.807013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:115344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.682 [2024-11-20 07:20:16.807020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:28.682 [2024-11-20 07:20:16.807038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:115352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.682 [2024-11-20 07:20:16.807044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:28.682 [2024-11-20 07:20:16.807062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:115360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.682 [2024-11-20 07:20:16.807069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:28.682 [2024-11-20 07:20:16.807086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:115368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.682 [2024-11-20 07:20:16.807093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:28.682 [2024-11-20 07:20:16.807112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:115376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.682 [2024-11-20 07:20:16.807119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:28.682 [2024-11-20 07:20:16.807136] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:115384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.682 [2024-11-20 07:20:16.807143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:28.682 [2024-11-20 07:20:16.807160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:115392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.682 [2024-11-20 07:20:16.807166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:28.682 [2024-11-20 07:20:16.807184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:115400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.682 [2024-11-20 07:20:16.807191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:28.682 [2024-11-20 07:20:16.807208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:114392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.682 [2024-11-20 07:20:16.807215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:28.682 [2024-11-20 07:20:16.807233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:114400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.682 [2024-11-20 07:20:16.807239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:28.682 [2024-11-20 07:20:16.807258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:114408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.682 [2024-11-20 07:20:16.807265] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:28.682 [2024-11-20 07:20:16.807282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:114416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.682 [2024-11-20 07:20:16.807289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:28.682 [2024-11-20 07:20:16.807306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:114424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.682 [2024-11-20 07:20:16.807314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.682 [2024-11-20 07:20:16.807332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:114432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.682 [2024-11-20 07:20:16.807338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:28.682 [2024-11-20 07:20:16.807356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:114440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.682 [2024-11-20 07:20:16.807363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:28.682 [2024-11-20 07:20:16.807381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:114448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.682 [2024-11-20 07:20:16.807388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:28.682 [2024-11-20 07:20:16.807404] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:114456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.682 [2024-11-20 07:20:16.807415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:28.682 [2024-11-20 07:20:16.807433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:114464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.682 [2024-11-20 07:20:16.807440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:28.682 [2024-11-20 07:20:16.807457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:114472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.682 [2024-11-20 07:20:16.807464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:28.682 [2024-11-20 07:20:16.807481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:114480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.682 [2024-11-20 07:20:16.807488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:28.682 [2024-11-20 07:20:16.807506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:114488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.682 [2024-11-20 07:20:16.807512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:28.683 [2024-11-20 07:20:16.807530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:114496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.683 [2024-11-20 07:20:16.807537] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0009 p:0 m:0 dnr:0
00:24:28.683 [2024-11-20 07:20:16.807554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:114504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:28.683 [2024-11-20 07:20:16.807561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:000a p:0 m:0 dnr:0
00:24:28.683 [2024-11-20 07:20:16.807578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:115408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:28.683 [2024-11-20 07:20:16.807585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:000b p:0 m:0 dnr:0
00:24:28.683 [2024-11-20 07:20:16.807603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:114512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:28.683 [2024-11-20 07:20:16.807610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:000c p:0 m:0 dnr:0
00:24:28.683 11182.38 IOPS, 43.68 MiB/s
[2024-11-20T06:20:33.239Z] 10383.64 IOPS, 40.56 MiB/s
[2024-11-20T06:20:33.239Z] 9691.40 IOPS, 37.86 MiB/s
[2024-11-20T06:20:33.239Z] 9222.31 IOPS, 36.02 MiB/s
[2024-11-20T06:20:33.239Z] 9351.94 IOPS, 36.53 MiB/s
[2024-11-20T06:20:33.239Z] 9449.72 IOPS, 36.91 MiB/s
[2024-11-20T06:20:33.239Z] 9630.21 IOPS, 37.62 MiB/s
[2024-11-20T06:20:33.239Z] 9821.10 IOPS, 38.36 MiB/s
[2024-11-20T06:20:33.239Z] 10000.57 IOPS, 39.06 MiB/s
[2024-11-20T06:20:33.239Z] 10064.23 IOPS, 39.31 MiB/s
[2024-11-20T06:20:33.239Z] 10121.30 IOPS, 39.54 MiB/s
[2024-11-20T06:20:33.239Z] 10177.96 IOPS, 39.76 MiB/s
[2024-11-20T06:20:33.239Z] 10300.20 IOPS, 40.24 MiB/s
[2024-11-20T06:20:33.239Z] 10410.65 IOPS, 40.67 MiB/s
[2024-11-20T06:20:33.239Z] [2024-11-20 07:20:30.605077]
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:127320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.683 [2024-11-20 07:20:30.605121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:28.683 [2024-11-20 07:20:30.605154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:127336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.683 [2024-11-20 07:20:30.605163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:28.683 [2024-11-20 07:20:30.605182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:127352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.683 [2024-11-20 07:20:30.605189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:28.683 [2024-11-20 07:20:30.605212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:127368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.683 [2024-11-20 07:20:30.605219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:28.683 [2024-11-20 07:20:30.605232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:127384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.683 [2024-11-20 07:20:30.605239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:28.683 [2024-11-20 07:20:30.605251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:127400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.683 [2024-11-20 07:20:30.605258] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:28.683 [2024-11-20 07:20:30.605270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:127416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.683 [2024-11-20 07:20:30.605276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:28.683 [2024-11-20 07:20:30.605289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:127432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.683 [2024-11-20 07:20:30.605295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:28.683 [2024-11-20 07:20:30.605307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:127448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.683 [2024-11-20 07:20:30.605314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:28.683 [2024-11-20 07:20:30.605326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:127464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.683 [2024-11-20 07:20:30.605333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:28.683 [2024-11-20 07:20:30.607916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:127480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.683 [2024-11-20 07:20:30.607938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:28.683 [2024-11-20 07:20:30.607961] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:127496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.683 [2024-11-20 07:20:30.607969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:28.683 [2024-11-20 07:20:30.607982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:127512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.683 [2024-11-20 07:20:30.607989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:28.683 [2024-11-20 07:20:30.608001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:127528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.683 [2024-11-20 07:20:30.608008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:28.683 [2024-11-20 07:20:30.608020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:127544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.683 [2024-11-20 07:20:30.608031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:28.683 [2024-11-20 07:20:30.608044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:127560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.683 [2024-11-20 07:20:30.608050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:28.683 [2024-11-20 07:20:30.608063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:127576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.683 [2024-11-20 07:20:30.608070] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:28.683 [2024-11-20 07:20:30.608082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:127592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.683 [2024-11-20 07:20:30.608090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:28.683 [2024-11-20 07:20:30.608102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:127608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.683 [2024-11-20 07:20:30.608109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:28.683 [2024-11-20 07:20:30.608121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:127624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.683 [2024-11-20 07:20:30.608128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:28.683 [2024-11-20 07:20:30.608141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:127640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.683 [2024-11-20 07:20:30.608148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:28.683 [2024-11-20 07:20:30.608160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:127656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.683 [2024-11-20 07:20:30.608167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:28.683 [2024-11-20 07:20:30.608179] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:127672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.683 [2024-11-20 07:20:30.608186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:28.683 [2024-11-20 07:20:30.608199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:127688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.683 [2024-11-20 07:20:30.608206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:28.683 [2024-11-20 07:20:30.608218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:127704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.683 [2024-11-20 07:20:30.608225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:28.684 [2024-11-20 07:20:30.608237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:127720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.684 [2024-11-20 07:20:30.608244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:28.684 [2024-11-20 07:20:30.608256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:127736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.684 [2024-11-20 07:20:30.608265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:28.684 [2024-11-20 07:20:30.608278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:127752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.684 [2024-11-20 07:20:30.608284] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:28.684 [2024-11-20 07:20:30.608297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:127768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.684 [2024-11-20 07:20:30.608303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:28.684 [2024-11-20 07:20:30.608316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:127784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.684 [2024-11-20 07:20:30.608323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:28.684 [2024-11-20 07:20:30.608335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:127800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.684 [2024-11-20 07:20:30.608342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:28.684 [2024-11-20 07:20:30.608354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:127816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.684 [2024-11-20 07:20:30.608361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:28.684 [2024-11-20 07:20:30.608374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:127832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.684 [2024-11-20 07:20:30.608381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:28.684 [2024-11-20 07:20:30.608393] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:127848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.684 [2024-11-20 07:20:30.608400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:28.684 [2024-11-20 07:20:30.608412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:127304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.684 [2024-11-20 07:20:30.608419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:28.684 [2024-11-20 07:20:30.608432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:127864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.684 [2024-11-20 07:20:30.608439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:28.684 [2024-11-20 07:20:30.608452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:127880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.684 [2024-11-20 07:20:30.608458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:28.684 [2024-11-20 07:20:30.608471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:127896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.684 [2024-11-20 07:20:30.608478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:28.684 [2024-11-20 07:20:30.608491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:127912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.684 [2024-11-20 07:20:30.608498] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:28.684 [2024-11-20 07:20:30.608512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:127928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.684 [2024-11-20 07:20:30.608518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:28.684 [2024-11-20 07:20:30.608532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:127944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.684 [2024-11-20 07:20:30.608540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:28.684 10479.48 IOPS, 40.94 MiB/s [2024-11-20T06:20:33.240Z] 10507.36 IOPS, 41.04 MiB/s [2024-11-20T06:20:33.240Z] Received shutdown signal, test time was about 28.955919 seconds 00:24:28.684 00:24:28.684 Latency(us) 00:24:28.684 [2024-11-20T06:20:33.240Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:28.684 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:28.684 Verification LBA range: start 0x0 length 0x4000 00:24:28.684 Nvme0n1 : 28.96 10523.33 41.11 0.00 0.00 12143.45 616.18 3019898.88 00:24:28.684 [2024-11-20T06:20:33.240Z] =================================================================================================================== 00:24:28.684 [2024-11-20T06:20:33.240Z] Total : 10523.33 41.11 0.00 0.00 12143.45 616.18 3019898.88 00:24:28.684 07:20:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:28.944 07:20:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - 
SIGINT SIGTERM EXIT 00:24:28.944 07:20:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:28.944 07:20:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:24:28.944 07:20:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:28.944 07:20:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:24:28.944 07:20:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:28.944 07:20:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:24:28.944 07:20:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:28.944 07:20:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:28.944 rmmod nvme_tcp 00:24:28.944 rmmod nvme_fabrics 00:24:28.944 rmmod nvme_keyring 00:24:28.944 07:20:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:28.944 07:20:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:24:28.944 07:20:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:24:28.944 07:20:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 1300919 ']' 00:24:28.944 07:20:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 1300919 00:24:28.944 07:20:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # '[' -z 1300919 ']' 00:24:28.944 07:20:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # kill -0 1300919 00:24:28.944 07:20:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # uname 00:24:28.944 
07:20:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:28.944 07:20:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1300919 00:24:28.944 07:20:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:24:28.944 07:20:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:24:28.944 07:20:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1300919' 00:24:28.944 killing process with pid 1300919 00:24:28.944 07:20:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@971 -- # kill 1300919 00:24:28.944 07:20:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@976 -- # wait 1300919 00:24:29.204 07:20:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:29.204 07:20:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:29.204 07:20:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:29.204 07:20:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:24:29.204 07:20:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save 00:24:29.204 07:20:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:29.204 07:20:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore 00:24:29.204 07:20:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:29.204 07:20:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:29.204 07:20:33 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:29.204 07:20:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:29.204 07:20:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:31.742 07:20:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:31.742 00:24:31.742 real 0m40.901s 00:24:31.742 user 1m51.157s 00:24:31.742 sys 0m11.515s 00:24:31.742 07:20:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1128 -- # xtrace_disable 00:24:31.742 07:20:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:31.742 ************************************ 00:24:31.742 END TEST nvmf_host_multipath_status 00:24:31.742 ************************************ 00:24:31.742 07:20:35 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:24:31.742 07:20:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:24:31.742 07:20:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:24:31.742 07:20:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.742 ************************************ 00:24:31.742 START TEST nvmf_discovery_remove_ifc 00:24:31.742 ************************************ 00:24:31.742 07:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:24:31.742 * Looking for test storage... 
00:24:31.742 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:31.743 07:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:24:31.743 07:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1691 -- # lcov --version 00:24:31.743 07:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:24:31.743 07:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:24:31.743 07:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:31.743 07:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:31.743 07:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:31.743 07:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:24:31.743 07:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:24:31.743 07:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:24:31.743 07:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:24:31.743 07:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:24:31.743 07:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:24:31.743 07:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:24:31.743 07:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:31.743 07:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:24:31.743 07:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
scripts/common.sh@345 -- # : 1 00:24:31.743 07:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:31.743 07:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:31.743 07:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:24:31.743 07:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:24:31.743 07:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:31.743 07:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:24:31.743 07:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:24:31.743 07:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:24:31.743 07:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:24:31.743 07:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:31.743 07:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:24:31.743 07:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:24:31.743 07:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:31.743 07:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:31.743 07:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:24:31.743 07:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:31.743 07:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1704 -- # 
export 'LCOV_OPTS= 00:24:31.743 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:31.743 --rc genhtml_branch_coverage=1 00:24:31.743 --rc genhtml_function_coverage=1 00:24:31.743 --rc genhtml_legend=1 00:24:31.743 --rc geninfo_all_blocks=1 00:24:31.743 --rc geninfo_unexecuted_blocks=1 00:24:31.743 00:24:31.743 ' 00:24:31.743 07:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:24:31.743 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:31.743 --rc genhtml_branch_coverage=1 00:24:31.743 --rc genhtml_function_coverage=1 00:24:31.743 --rc genhtml_legend=1 00:24:31.743 --rc geninfo_all_blocks=1 00:24:31.743 --rc geninfo_unexecuted_blocks=1 00:24:31.743 00:24:31.743 ' 00:24:31.743 07:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:24:31.743 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:31.743 --rc genhtml_branch_coverage=1 00:24:31.743 --rc genhtml_function_coverage=1 00:24:31.743 --rc genhtml_legend=1 00:24:31.743 --rc geninfo_all_blocks=1 00:24:31.743 --rc geninfo_unexecuted_blocks=1 00:24:31.743 00:24:31.743 ' 00:24:31.743 07:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:24:31.743 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:31.743 --rc genhtml_branch_coverage=1 00:24:31.743 --rc genhtml_function_coverage=1 00:24:31.743 --rc genhtml_legend=1 00:24:31.743 --rc geninfo_all_blocks=1 00:24:31.743 --rc geninfo_unexecuted_blocks=1 00:24:31.743 00:24:31.743 ' 00:24:31.743 07:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:31.743 07:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:24:31.743 07:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:24:31.743 07:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:31.743 07:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:31.743 07:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:31.743 07:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:31.743 07:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:31.743 07:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:31.743 07:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:31.743 07:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:31.743 07:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:31.743 07:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:24:31.743 07:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:24:31.743 07:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:31.743 07:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:31.743 07:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:31.743 07:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:31.743 07:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:31.743 07:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:24:31.743 07:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:31.743 07:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:31.743 07:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:31.743 07:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:31.743 07:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:31.744 07:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:31.744 07:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:24:31.744 07:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:31.744 07:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:24:31.744 07:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:31.744 07:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:31.744 07:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:31.744 07:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 
0xFFFF) 00:24:31.744 07:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:31.744 07:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:31.744 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:31.744 07:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:31.744 07:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:31.744 07:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:31.744 07:20:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:24:31.744 07:20:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:24:31.744 07:20:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:24:31.744 07:20:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:24:31.744 07:20:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:24:31.744 07:20:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:24:31.744 07:20:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:24:31.744 07:20:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:31.744 07:20:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:31.744 07:20:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:31.744 
07:20:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:31.744 07:20:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:31.744 07:20:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:31.744 07:20:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:31.744 07:20:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:31.744 07:20:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:31.744 07:20:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:31.744 07:20:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable 00:24:31.744 07:20:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:38.318 07:20:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:38.318 07:20:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=() 00:24:38.318 07:20:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:38.318 07:20:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:38.318 07:20:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:38.318 07:20:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:38.318 07:20:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:38.318 07:20:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=() 00:24:38.318 07:20:41 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:38.318 07:20:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=() 00:24:38.318 07:20:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810 00:24:38.318 07:20:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # x722=() 00:24:38.318 07:20:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722 00:24:38.318 07:20:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=() 00:24:38.318 07:20:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx 00:24:38.318 07:20:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:38.318 07:20:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:38.318 07:20:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:38.318 07:20:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:38.319 07:20:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:38.319 07:20:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:38.319 07:20:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:38.319 07:20:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:38.319 07:20:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:38.319 07:20:41 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:38.319 07:20:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:38.319 07:20:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:38.319 07:20:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:38.319 07:20:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:38.319 07:20:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:38.319 07:20:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:38.319 07:20:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:38.319 07:20:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:38.319 07:20:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:38.319 07:20:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:38.319 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:38.319 07:20:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:38.319 07:20:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:38.319 07:20:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:38.319 07:20:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:38.319 07:20:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:38.319 07:20:41 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:38.319 07:20:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:38.319 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:38.319 07:20:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:38.319 07:20:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:38.319 07:20:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:38.319 07:20:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:38.319 07:20:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:38.319 07:20:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:38.319 07:20:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:38.319 07:20:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:38.319 07:20:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:38.319 07:20:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:38.319 07:20:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:38.319 07:20:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:38.319 07:20:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:38.319 07:20:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:38.319 07:20:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:38.319 07:20:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:38.319 Found net devices under 0000:86:00.0: cvl_0_0 00:24:38.319 07:20:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:38.319 07:20:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:38.319 07:20:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:38.319 07:20:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:38.319 07:20:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:38.319 07:20:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:38.319 07:20:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:38.319 07:20:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:38.319 07:20:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:24:38.319 Found net devices under 0000:86:00.1: cvl_0_1 00:24:38.319 07:20:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:38.319 07:20:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:38.319 07:20:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # is_hw=yes 00:24:38.319 07:20:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:38.319 07:20:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@445 
-- # [[ tcp == tcp ]] 00:24:38.319 07:20:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:38.319 07:20:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:38.319 07:20:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:38.319 07:20:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:38.319 07:20:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:38.319 07:20:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:38.320 07:20:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:38.320 07:20:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:38.320 07:20:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:38.320 07:20:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:38.320 07:20:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:38.320 07:20:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:38.320 07:20:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:38.320 07:20:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:38.320 07:20:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:38.320 07:20:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns 
cvl_0_0_ns_spdk 00:24:38.320 07:20:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:38.320 07:20:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:38.320 07:20:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:38.320 07:20:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:38.320 07:20:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:38.320 07:20:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:38.320 07:20:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:38.320 07:20:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:38.320 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:38.320 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.346 ms 00:24:38.320 00:24:38.320 --- 10.0.0.2 ping statistics --- 00:24:38.320 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:38.320 rtt min/avg/max/mdev = 0.346/0.346/0.346/0.000 ms 00:24:38.320 07:20:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:38.320 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:38.320 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.181 ms 00:24:38.320 00:24:38.320 --- 10.0.0.1 ping statistics --- 00:24:38.320 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:38.320 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:24:38.320 07:20:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:38.320 07:20:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # return 0 00:24:38.320 07:20:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:38.320 07:20:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:38.320 07:20:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:38.320 07:20:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:38.320 07:20:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:38.320 07:20:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:38.320 07:20:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:38.320 07:20:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:24:38.320 07:20:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:38.320 07:20:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:38.320 07:20:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:38.320 07:20:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=1309933 00:24:38.320 07:20:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@510 -- # waitforlisten 1309933 00:24:38.320 07:20:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:38.320 07:20:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # '[' -z 1309933 ']' 00:24:38.320 07:20:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:38.320 07:20:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:38.320 07:20:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:38.320 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:38.320 07:20:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:38.320 07:20:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:38.320 [2024-11-20 07:20:42.015767] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 00:24:38.320 [2024-11-20 07:20:42.015808] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:38.320 [2024-11-20 07:20:42.092323] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:38.320 [2024-11-20 07:20:42.134185] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:38.320 [2024-11-20 07:20:42.134223] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:24:38.320 [2024-11-20 07:20:42.134231] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:38.320 [2024-11-20 07:20:42.134237] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:38.321 [2024-11-20 07:20:42.134243] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:38.321 [2024-11-20 07:20:42.134762] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:38.321 07:20:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:38.321 07:20:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@866 -- # return 0 00:24:38.321 07:20:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:38.321 07:20:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:38.321 07:20:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:38.321 07:20:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:38.321 07:20:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:24:38.321 07:20:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:38.321 07:20:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:38.321 [2024-11-20 07:20:42.292123] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:38.321 [2024-11-20 07:20:42.300316] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:24:38.321 null0 00:24:38.321 [2024-11-20 07:20:42.332287] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 
4420 *** 00:24:38.321 07:20:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:38.321 07:20:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=1309961 00:24:38.321 07:20:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 1309961 /tmp/host.sock 00:24:38.321 07:20:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:24:38.321 07:20:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # '[' -z 1309961 ']' 00:24:38.321 07:20:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@837 -- # local rpc_addr=/tmp/host.sock 00:24:38.321 07:20:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:38.321 07:20:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:24:38.321 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:24:38.321 07:20:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:38.321 07:20:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:38.321 [2024-11-20 07:20:42.403206] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 
00:24:38.321 [2024-11-20 07:20:42.403247] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1309961 ] 00:24:38.321 [2024-11-20 07:20:42.476836] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:38.321 [2024-11-20 07:20:42.520469] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:38.321 07:20:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:38.321 07:20:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@866 -- # return 0 00:24:38.321 07:20:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:38.321 07:20:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:24:38.321 07:20:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:38.321 07:20:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:38.321 07:20:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:38.321 07:20:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:24:38.321 07:20:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:38.321 07:20:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:38.321 07:20:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:38.321 07:20:42 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:24:38.321 07:20:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:38.321 07:20:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:39.259 [2024-11-20 07:20:43.699021] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:24:39.259 [2024-11-20 07:20:43.699042] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:24:39.259 [2024-11-20 07:20:43.699058] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:39.259 [2024-11-20 07:20:43.785329] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:24:39.520 [2024-11-20 07:20:43.880026] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:24:39.520 [2024-11-20 07:20:43.880727] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x95ca10:1 started. 
00:24:39.520 [2024-11-20 07:20:43.882121] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:24:39.520 [2024-11-20 07:20:43.882160] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:24:39.520 [2024-11-20 07:20:43.882180] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:24:39.520 [2024-11-20 07:20:43.882194] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:24:39.520 [2024-11-20 07:20:43.882211] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:24:39.520 07:20:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:39.520 07:20:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:24:39.520 07:20:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:39.520 [2024-11-20 07:20:43.887651] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x95ca10 was disconnected and freed. delete nvme_qpair. 
00:24:39.520 07:20:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:39.520 07:20:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:39.520 07:20:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:39.520 07:20:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:39.520 07:20:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:39.520 07:20:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:39.520 07:20:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:39.520 07:20:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:24:39.520 07:20:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:24:39.520 07:20:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:24:39.520 07:20:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:24:39.520 07:20:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:39.520 07:20:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:39.520 07:20:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:39.520 07:20:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:39.520 07:20:44 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:39.520 07:20:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:39.520 07:20:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:39.520 07:20:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:39.779 07:20:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:39.779 07:20:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:40.717 07:20:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:40.717 07:20:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:40.717 07:20:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:40.717 07:20:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:40.717 07:20:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:40.717 07:20:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:40.717 07:20:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:40.717 07:20:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:40.717 07:20:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:40.717 07:20:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:41.655 07:20:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 
-- # get_bdev_list 00:24:41.655 07:20:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:41.655 07:20:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:41.655 07:20:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:41.655 07:20:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:41.655 07:20:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:41.655 07:20:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:41.655 07:20:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:41.655 07:20:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:41.655 07:20:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:43.033 07:20:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:43.034 07:20:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:43.034 07:20:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:43.034 07:20:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:43.034 07:20:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:43.034 07:20:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:43.034 07:20:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:43.034 07:20:47 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:43.034 07:20:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:43.034 07:20:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:43.972 07:20:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:43.972 07:20:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:43.972 07:20:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:43.972 07:20:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:43.972 07:20:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:43.972 07:20:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:43.972 07:20:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:43.972 07:20:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:43.972 07:20:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:43.972 07:20:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:44.909 07:20:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:44.909 07:20:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:44.909 07:20:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:44.909 07:20:49 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:44.909 07:20:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:44.909 07:20:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:44.909 07:20:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:44.909 07:20:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:44.909 [2024-11-20 07:20:49.323780] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:24:44.909 [2024-11-20 07:20:49.323819] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:44.909 [2024-11-20 07:20:49.323830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.909 [2024-11-20 07:20:49.323839] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:44.909 [2024-11-20 07:20:49.323846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.909 [2024-11-20 07:20:49.323853] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:44.909 [2024-11-20 07:20:49.323860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.909 [2024-11-20 07:20:49.323867] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:44.909 
[2024-11-20 07:20:49.323873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.909 [2024-11-20 07:20:49.323880] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:24:44.909 [2024-11-20 07:20:49.323888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.910 [2024-11-20 07:20:49.323895] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x939220 is same with the state(6) to be set 00:24:44.910 07:20:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:44.910 07:20:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:44.910 [2024-11-20 07:20:49.333802] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x939220 (9): Bad file descriptor 00:24:44.910 [2024-11-20 07:20:49.343840] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:24:44.910 [2024-11-20 07:20:49.343853] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:24:44.910 [2024-11-20 07:20:49.343857] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:24:44.910 [2024-11-20 07:20:49.343862] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:44.910 [2024-11-20 07:20:49.343882] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
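The disconnect/reconnect churn above is driven by the fault the test injected earlier: dropping the target's address and link inside the namespace, then (at step 82/83) restoring them so discovery can reattach. A sketch of that remove/restore pair, with namespace and device names taken from the log (must run as root):

```shell
NS=cvl_0_0_ns_spdk   # target-side network namespace, from the log

# Drop the target's address and link so the initiator loses the 4420 path
# and the ctrlr-loss/reconnect timers start firing.
remove_ifc() {
    ip netns exec "$NS" ip addr del 10.0.0.2/24 dev cvl_0_0
    ip netns exec "$NS" ip link set cvl_0_0 down
}

# Restore the address and link so the discovery service can reattach
# the subsystem (appears as nvme1n1 in the later wait_for_bdev).
restore_ifc() {
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip netns exec "$NS" ip link set cvl_0_0 up
}
```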
00:24:45.845 07:20:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:45.845 07:20:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:45.845 07:20:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:45.845 07:20:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:45.845 07:20:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:45.845 07:20:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:45.845 07:20:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:45.845 [2024-11-20 07:20:50.351990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:24:45.845 [2024-11-20 07:20:50.352072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x939220 with addr=10.0.0.2, port=4420 00:24:45.845 [2024-11-20 07:20:50.352106] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x939220 is same with the state(6) to be set 00:24:45.845 [2024-11-20 07:20:50.352172] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x939220 (9): Bad file descriptor 00:24:45.845 [2024-11-20 07:20:50.353180] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 
00:24:45.845 [2024-11-20 07:20:50.353252] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:24:45.845 [2024-11-20 07:20:50.353277] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:24:45.845 [2024-11-20 07:20:50.353299] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:24:45.845 [2024-11-20 07:20:50.353322] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:24:45.845 [2024-11-20 07:20:50.353338] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:24:45.845 [2024-11-20 07:20:50.353352] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:24:45.845 [2024-11-20 07:20:50.353374] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:24:45.846 [2024-11-20 07:20:50.353390] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:45.846 07:20:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:45.846 07:20:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:45.846 07:20:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:47.226 [2024-11-20 07:20:51.355916] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:24:47.226 [2024-11-20 07:20:51.355939] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
00:24:47.226 [2024-11-20 07:20:51.355958] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:24:47.226 [2024-11-20 07:20:51.355968] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:24:47.226 [2024-11-20 07:20:51.355980] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:24:47.226 [2024-11-20 07:20:51.355987] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:24:47.226 [2024-11-20 07:20:51.355992] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:24:47.226 [2024-11-20 07:20:51.355996] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:24:47.226 [2024-11-20 07:20:51.356020] bdev_nvme.c:7135:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:24:47.227 [2024-11-20 07:20:51.356043] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:47.227 [2024-11-20 07:20:51.356053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.227 [2024-11-20 07:20:51.356065] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:47.227 [2024-11-20 07:20:51.356071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.227 [2024-11-20 07:20:51.356079] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:24:47.227 [2024-11-20 07:20:51.356086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.227 [2024-11-20 07:20:51.356094] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:47.227 [2024-11-20 07:20:51.356101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.227 [2024-11-20 07:20:51.356108] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:24:47.227 [2024-11-20 07:20:51.356114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.227 [2024-11-20 07:20:51.356121] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 00:24:47.227 [2024-11-20 07:20:51.356509] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x928900 (9): Bad file descriptor 00:24:47.227 [2024-11-20 07:20:51.357520] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:24:47.227 [2024-11-20 07:20:51.357532] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:24:47.227 07:20:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:47.227 07:20:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:47.227 07:20:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:47.227 07:20:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 
00:24:47.227 07:20:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:47.227 07:20:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:47.227 07:20:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:47.227 07:20:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:47.227 07:20:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:24:47.227 07:20:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:47.227 07:20:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:47.227 07:20:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:24:47.227 07:20:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:47.227 07:20:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:47.227 07:20:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:47.227 07:20:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:47.227 07:20:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:47.227 07:20:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:47.227 07:20:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:47.227 07:20:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:24:47.227 07:20:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:24:47.227 07:20:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:48.165 07:20:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:48.165 07:20:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:48.165 07:20:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:48.165 07:20:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:48.165 07:20:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:48.165 07:20:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:48.165 07:20:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:48.165 07:20:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:48.165 07:20:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:24:48.165 07:20:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:49.102 [2024-11-20 07:20:53.414524] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:24:49.102 [2024-11-20 07:20:53.414541] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:24:49.102 [2024-11-20 07:20:53.414553] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:49.102 [2024-11-20 07:20:53.544968] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: 
Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:24:49.102 07:20:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:49.102 07:20:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:49.103 07:20:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:49.103 07:20:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:49.103 07:20:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:49.103 07:20:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:49.103 07:20:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:49.103 07:20:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:49.363 07:20:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:24:49.363 07:20:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:49.363 [2024-11-20 07:20:53.767209] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4420 00:24:49.363 [2024-11-20 07:20:53.767773] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x92d820:1 started. 
00:24:49.363 [2024-11-20 07:20:53.768826] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:24:49.363 [2024-11-20 07:20:53.768858] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:24:49.363 [2024-11-20 07:20:53.768875] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:24:49.363 [2024-11-20 07:20:53.768888] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:24:49.363 [2024-11-20 07:20:53.768895] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:24:49.363 [2024-11-20 07:20:53.772864] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x92d820 was disconnected and freed. delete nvme_qpair. 00:24:50.303 07:20:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:50.303 07:20:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:50.303 07:20:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:50.303 07:20:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:50.303 07:20:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:50.303 07:20:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:50.303 07:20:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:50.303 07:20:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:50.303 07:20:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:24:50.303 07:20:54 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:24:50.303 07:20:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 1309961 00:24:50.303 07:20:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # '[' -z 1309961 ']' 00:24:50.303 07:20:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # kill -0 1309961 00:24:50.303 07:20:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # uname 00:24:50.303 07:20:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:50.303 07:20:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1309961 00:24:50.303 07:20:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:24:50.303 07:20:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:24:50.303 07:20:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1309961' 00:24:50.303 killing process with pid 1309961 00:24:50.303 07:20:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@971 -- # kill 1309961 00:24:50.303 07:20:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@976 -- # wait 1309961 00:24:50.563 07:20:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:24:50.563 07:20:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:50.563 07:20:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:24:50.563 07:20:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:50.563 
07:20:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:24:50.563 07:20:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:50.563 07:20:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:50.563 rmmod nvme_tcp 00:24:50.563 rmmod nvme_fabrics 00:24:50.563 rmmod nvme_keyring 00:24:50.563 07:20:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:50.563 07:20:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:24:50.563 07:20:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:24:50.563 07:20:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 1309933 ']' 00:24:50.563 07:20:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 1309933 00:24:50.563 07:20:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # '[' -z 1309933 ']' 00:24:50.563 07:20:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # kill -0 1309933 00:24:50.563 07:20:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # uname 00:24:50.563 07:20:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:50.563 07:20:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1309933 00:24:50.563 07:20:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:24:50.563 07:20:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:24:50.563 07:20:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1309933' 00:24:50.563 
killing process with pid 1309933 00:24:50.563 07:20:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@971 -- # kill 1309933 00:24:50.563 07:20:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@976 -- # wait 1309933 00:24:50.823 07:20:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:50.823 07:20:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:50.824 07:20:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:50.824 07:20:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:24:50.824 07:20:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:24:50.824 07:20:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:50.824 07:20:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:24:50.824 07:20:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:50.824 07:20:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:50.824 07:20:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:50.824 07:20:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:50.824 07:20:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:52.733 07:20:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:52.733 00:24:52.733 real 0m21.465s 00:24:52.733 user 0m26.650s 00:24:52.733 sys 0m5.929s 00:24:52.733 07:20:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@1128 -- # xtrace_disable 00:24:52.733 07:20:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:52.733 ************************************ 00:24:52.733 END TEST nvmf_discovery_remove_ifc 00:24:52.733 ************************************ 00:24:52.994 07:20:57 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:24:52.994 07:20:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:24:52.994 07:20:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:24:52.994 07:20:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:52.994 ************************************ 00:24:52.994 START TEST nvmf_identify_kernel_target 00:24:52.994 ************************************ 00:24:52.994 07:20:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:24:52.994 * Looking for test storage... 
00:24:52.994 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:52.994 07:20:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:24:52.994 07:20:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1691 -- # lcov --version 00:24:52.994 07:20:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:24:52.994 07:20:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:24:52.994 07:20:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:52.994 07:20:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:52.994 07:20:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:52.994 07:20:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:24:52.994 07:20:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:24:52.994 07:20:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:24:52.994 07:20:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:24:52.994 07:20:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:24:52.994 07:20:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:24:52.994 07:20:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:24:52.994 07:20:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:52.994 07:20:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:24:52.994 07:20:57 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:24:52.994 07:20:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:52.994 07:20:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:52.994 07:20:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:24:52.994 07:20:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:24:52.994 07:20:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:52.994 07:20:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:24:52.994 07:20:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:24:52.994 07:20:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:24:52.994 07:20:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:24:52.994 07:20:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:52.994 07:20:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:24:52.994 07:20:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:24:52.994 07:20:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:52.994 07:20:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:52.994 07:20:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:24:52.994 07:20:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:52.994 07:20:57 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:24:52.994 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:52.994 --rc genhtml_branch_coverage=1 00:24:52.994 --rc genhtml_function_coverage=1 00:24:52.994 --rc genhtml_legend=1 00:24:52.994 --rc geninfo_all_blocks=1 00:24:52.994 --rc geninfo_unexecuted_blocks=1 00:24:52.994 00:24:52.994 ' 00:24:52.994 07:20:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:24:52.994 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:52.994 --rc genhtml_branch_coverage=1 00:24:52.994 --rc genhtml_function_coverage=1 00:24:52.994 --rc genhtml_legend=1 00:24:52.994 --rc geninfo_all_blocks=1 00:24:52.994 --rc geninfo_unexecuted_blocks=1 00:24:52.994 00:24:52.994 ' 00:24:52.994 07:20:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:24:52.994 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:52.994 --rc genhtml_branch_coverage=1 00:24:52.994 --rc genhtml_function_coverage=1 00:24:52.994 --rc genhtml_legend=1 00:24:52.994 --rc geninfo_all_blocks=1 00:24:52.994 --rc geninfo_unexecuted_blocks=1 00:24:52.994 00:24:52.994 ' 00:24:52.994 07:20:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:24:52.994 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:52.994 --rc genhtml_branch_coverage=1 00:24:52.994 --rc genhtml_function_coverage=1 00:24:52.994 --rc genhtml_legend=1 00:24:52.994 --rc geninfo_all_blocks=1 00:24:52.994 --rc geninfo_unexecuted_blocks=1 00:24:52.994 00:24:52.994 ' 00:24:52.994 07:20:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:52.994 07:20:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 
00:24:52.994 07:20:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:52.994 07:20:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:52.994 07:20:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:52.995 07:20:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:52.995 07:20:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:52.995 07:20:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:52.995 07:20:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:52.995 07:20:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:52.995 07:20:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:52.995 07:20:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:52.995 07:20:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:24:52.995 07:20:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:24:52.995 07:20:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:52.995 07:20:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:52.995 07:20:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:52.995 07:20:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:52.995 07:20:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:52.995 07:20:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:24:52.995 07:20:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:52.995 07:20:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:52.995 07:20:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:52.995 07:20:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:52.995 07:20:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:52.995 07:20:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:52.995 07:20:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:24:52.995 07:20:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:52.995 07:20:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:24:52.995 07:20:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:52.995 07:20:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:52.995 07:20:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:52.995 07:20:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:53.255 07:20:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:53.255 07:20:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:53.255 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:53.255 07:20:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:53.255 07:20:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:53.255 07:20:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:53.255 07:20:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 
00:24:53.255 07:20:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:53.255 07:20:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:53.255 07:20:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:53.255 07:20:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:53.255 07:20:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:53.255 07:20:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:53.255 07:20:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:53.255 07:20:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:53.255 07:20:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:53.255 07:20:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:53.255 07:20:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:24:53.255 07:20:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:24:59.829 07:21:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:59.829 07:21:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:24:59.829 07:21:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:59.829 07:21:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:59.829 07:21:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:59.829 07:21:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:59.829 07:21:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:59.829 07:21:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:24:59.829 07:21:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:59.829 07:21:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:24:59.829 07:21:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:24:59.830 07:21:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:24:59.830 07:21:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:24:59.830 07:21:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:24:59.830 07:21:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:24:59.830 07:21:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:59.830 07:21:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:59.830 07:21:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:59.830 07:21:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:59.830 07:21:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:59.830 07:21:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:59.830 07:21:03 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:59.830 07:21:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:59.830 07:21:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:59.830 07:21:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:59.830 07:21:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:59.830 07:21:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:59.830 07:21:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:59.830 07:21:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:59.830 07:21:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:59.830 07:21:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:59.830 07:21:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:59.830 07:21:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:59.830 07:21:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:59.830 07:21:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:59.830 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:59.830 07:21:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:59.830 07:21:03 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:59.830 07:21:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:59.830 07:21:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:59.830 07:21:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:59.830 07:21:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:59.830 07:21:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:59.830 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:59.830 07:21:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:59.830 07:21:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:59.830 07:21:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:59.830 07:21:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:59.830 07:21:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:59.830 07:21:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:59.830 07:21:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:59.830 07:21:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:59.830 07:21:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:59.830 07:21:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:59.830 07:21:03 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:59.830 07:21:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:59.830 07:21:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:59.830 07:21:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:59.830 07:21:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:59.830 07:21:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:59.830 Found net devices under 0000:86:00.0: cvl_0_0 00:24:59.830 07:21:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:59.830 07:21:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:59.830 07:21:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:59.830 07:21:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:59.830 07:21:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:59.830 07:21:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:59.830 07:21:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:59.830 07:21:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:59.830 07:21:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:24:59.830 Found net devices under 0000:86:00.1: cvl_0_1 
00:24:59.830 07:21:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:59.830 07:21:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:59.830 07:21:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # is_hw=yes 00:24:59.830 07:21:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:59.830 07:21:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:59.830 07:21:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:59.831 07:21:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:59.831 07:21:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:59.831 07:21:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:59.831 07:21:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:59.831 07:21:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:59.831 07:21:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:59.831 07:21:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:59.831 07:21:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:59.831 07:21:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:59.831 07:21:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:59.831 07:21:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:59.831 07:21:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:59.831 07:21:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:59.831 07:21:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:59.831 07:21:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:59.831 07:21:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:59.831 07:21:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:59.831 07:21:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:59.831 07:21:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:59.831 07:21:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:59.831 07:21:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:59.831 07:21:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:59.831 07:21:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:59.831 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:24:59.831 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.387 ms 00:24:59.831 00:24:59.831 --- 10.0.0.2 ping statistics --- 00:24:59.831 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:59.831 rtt min/avg/max/mdev = 0.387/0.387/0.387/0.000 ms 00:24:59.831 07:21:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:59.831 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:59.831 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.201 ms 00:24:59.831 00:24:59.831 --- 10.0.0.1 ping statistics --- 00:24:59.831 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:59.831 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:24:59.831 07:21:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:59.831 07:21:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # return 0 00:24:59.831 07:21:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:59.831 07:21:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:59.831 07:21:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:59.831 07:21:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:59.831 07:21:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:59.831 07:21:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:59.831 07:21:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:59.831 07:21:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:24:59.831 
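The `nvmf_tcp_init` portion of the trace above (common.sh@250-291) can be read as the following privileged configuration sequence. This is a reconstruction from the trace, not a supported script: interface names (`cvl_0_0`, `cvl_0_1`), the namespace name, and the 10.0.0.x addresses are taken from the log, it must run as root, and it assumes the two physical ice ports exist.

```shell
# Reconstructed sketch: split the two NIC ports into an initiator side
# (host namespace) and a target side (private network namespace).
NS=cvl_0_0_ns_spdk
TARGET_IF=cvl_0_0        # moved into the namespace, gets 10.0.0.2
INITIATOR_IF=cvl_0_1     # stays in the host namespace, gets 10.0.0.1

ip -4 addr flush "$TARGET_IF"
ip -4 addr flush "$INITIATOR_IF"

ip netns add "$NS"
ip link set "$TARGET_IF" netns "$NS"

ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"

ip link set "$INITIATOR_IF" up
ip netns exec "$NS" ip link set "$TARGET_IF" up
ip netns exec "$NS" ip link set lo up

# Open the NVMe/TCP listener port; the comment tags the rule so teardown
# (nvmftestfini) can find and delete exactly the rules this run added.
iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT \
  -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

# Sanity-check connectivity in both directions, as the trace does.
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1
```

Note the asymmetry: the SPDK/kernel target lives inside `cvl_0_0_ns_spdk` (hence `NVMF_TARGET_NS_CMD=(ip netns exec ...)` being prepended to `NVMF_APP`), while the initiator runs in the default namespace.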
07:21:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:24:59.831 07:21:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:24:59.831 07:21:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:59.831 07:21:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:59.831 07:21:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:59.831 07:21:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:59.831 07:21:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:59.831 07:21:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:59.831 07:21:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:59.831 07:21:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:59.831 07:21:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:59.831 07:21:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:24:59.831 07:21:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:24:59.831 07:21:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:24:59.831 07:21:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:24:59.831 07:21:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:59.831 07:21:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:59.831 07:21:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:24:59.831 07:21:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:24:59.831 07:21:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:24:59.832 07:21:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:24:59.832 07:21:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:24:59.832 07:21:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:25:01.738 Waiting for block devices as requested 00:25:01.738 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:25:01.997 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:25:01.997 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:25:01.997 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:25:02.257 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:25:02.257 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:25:02.257 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:25:02.516 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:25:02.516 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:25:02.516 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:25:02.516 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:25:02.775 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:25:02.775 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:25:02.775 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:25:03.034 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 
00:25:03.034 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:25:03.034 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:25:03.292 07:21:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:25:03.292 07:21:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:25:03.292 07:21:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:25:03.292 07:21:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:25:03.292 07:21:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:25:03.292 07:21:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:25:03.292 07:21:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:25:03.292 07:21:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:25:03.292 07:21:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:25:03.292 No valid GPT data, bailing 00:25:03.292 07:21:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:25:03.292 07:21:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:25:03.292 07:21:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:25:03.292 07:21:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:25:03.292 07:21:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:25:03.293 07:21:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:03.293 07:21:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:25:03.293 07:21:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:25:03.293 07:21:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:25:03.293 07:21:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:25:03.293 07:21:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:25:03.293 07:21:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:25:03.293 07:21:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:25:03.293 07:21:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:25:03.293 07:21:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:25:03.293 07:21:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:25:03.293 07:21:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:25:03.293 07:21:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:25:03.293 00:25:03.293 Discovery Log Number of Records 2, Generation counter 2 00:25:03.293 =====Discovery Log Entry 0====== 00:25:03.293 trtype: tcp 00:25:03.293 adrfam: ipv4 00:25:03.293 subtype: current discovery subsystem 
00:25:03.293 treq: not specified, sq flow control disable supported 00:25:03.293 portid: 1 00:25:03.293 trsvcid: 4420 00:25:03.293 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:25:03.293 traddr: 10.0.0.1 00:25:03.293 eflags: none 00:25:03.293 sectype: none 00:25:03.293 =====Discovery Log Entry 1====== 00:25:03.293 trtype: tcp 00:25:03.293 adrfam: ipv4 00:25:03.293 subtype: nvme subsystem 00:25:03.293 treq: not specified, sq flow control disable supported 00:25:03.293 portid: 1 00:25:03.293 trsvcid: 4420 00:25:03.293 subnqn: nqn.2016-06.io.spdk:testnqn 00:25:03.293 traddr: 10.0.0.1 00:25:03.293 eflags: none 00:25:03.293 sectype: none 00:25:03.293 07:21:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:25:03.293 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:25:03.552 ===================================================== 00:25:03.552 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:25:03.552 ===================================================== 00:25:03.552 Controller Capabilities/Features 00:25:03.552 ================================ 00:25:03.552 Vendor ID: 0000 00:25:03.552 Subsystem Vendor ID: 0000 00:25:03.552 Serial Number: 913a388527326cca3e02 00:25:03.552 Model Number: Linux 00:25:03.552 Firmware Version: 6.8.9-20 00:25:03.552 Recommended Arb Burst: 0 00:25:03.552 IEEE OUI Identifier: 00 00 00 00:25:03.552 Multi-path I/O 00:25:03.552 May have multiple subsystem ports: No 00:25:03.552 May have multiple controllers: No 00:25:03.552 Associated with SR-IOV VF: No 00:25:03.552 Max Data Transfer Size: Unlimited 00:25:03.552 Max Number of Namespaces: 0 00:25:03.552 Max Number of I/O Queues: 1024 00:25:03.552 NVMe Specification Version (VS): 1.3 00:25:03.552 NVMe Specification Version (Identify): 1.3 00:25:03.552 Maximum Queue Entries: 1024 
00:25:03.552 Contiguous Queues Required: No 00:25:03.552 Arbitration Mechanisms Supported 00:25:03.552 Weighted Round Robin: Not Supported 00:25:03.552 Vendor Specific: Not Supported 00:25:03.552 Reset Timeout: 7500 ms 00:25:03.552 Doorbell Stride: 4 bytes 00:25:03.552 NVM Subsystem Reset: Not Supported 00:25:03.552 Command Sets Supported 00:25:03.552 NVM Command Set: Supported 00:25:03.552 Boot Partition: Not Supported 00:25:03.552 Memory Page Size Minimum: 4096 bytes 00:25:03.552 Memory Page Size Maximum: 4096 bytes 00:25:03.552 Persistent Memory Region: Not Supported 00:25:03.552 Optional Asynchronous Events Supported 00:25:03.552 Namespace Attribute Notices: Not Supported 00:25:03.552 Firmware Activation Notices: Not Supported 00:25:03.552 ANA Change Notices: Not Supported 00:25:03.552 PLE Aggregate Log Change Notices: Not Supported 00:25:03.552 LBA Status Info Alert Notices: Not Supported 00:25:03.552 EGE Aggregate Log Change Notices: Not Supported 00:25:03.552 Normal NVM Subsystem Shutdown event: Not Supported 00:25:03.552 Zone Descriptor Change Notices: Not Supported 00:25:03.552 Discovery Log Change Notices: Supported 00:25:03.552 Controller Attributes 00:25:03.552 128-bit Host Identifier: Not Supported 00:25:03.552 Non-Operational Permissive Mode: Not Supported 00:25:03.552 NVM Sets: Not Supported 00:25:03.552 Read Recovery Levels: Not Supported 00:25:03.552 Endurance Groups: Not Supported 00:25:03.552 Predictable Latency Mode: Not Supported 00:25:03.552 Traffic Based Keep ALive: Not Supported 00:25:03.552 Namespace Granularity: Not Supported 00:25:03.552 SQ Associations: Not Supported 00:25:03.552 UUID List: Not Supported 00:25:03.552 Multi-Domain Subsystem: Not Supported 00:25:03.552 Fixed Capacity Management: Not Supported 00:25:03.552 Variable Capacity Management: Not Supported 00:25:03.552 Delete Endurance Group: Not Supported 00:25:03.552 Delete NVM Set: Not Supported 00:25:03.552 Extended LBA Formats Supported: Not Supported 00:25:03.552 Flexible 
Data Placement Supported: Not Supported 00:25:03.552 00:25:03.552 Controller Memory Buffer Support 00:25:03.552 ================================ 00:25:03.552 Supported: No 00:25:03.552 00:25:03.552 Persistent Memory Region Support 00:25:03.552 ================================ 00:25:03.552 Supported: No 00:25:03.552 00:25:03.552 Admin Command Set Attributes 00:25:03.552 ============================ 00:25:03.552 Security Send/Receive: Not Supported 00:25:03.552 Format NVM: Not Supported 00:25:03.552 Firmware Activate/Download: Not Supported 00:25:03.552 Namespace Management: Not Supported 00:25:03.552 Device Self-Test: Not Supported 00:25:03.552 Directives: Not Supported 00:25:03.552 NVMe-MI: Not Supported 00:25:03.552 Virtualization Management: Not Supported 00:25:03.552 Doorbell Buffer Config: Not Supported 00:25:03.552 Get LBA Status Capability: Not Supported 00:25:03.552 Command & Feature Lockdown Capability: Not Supported 00:25:03.552 Abort Command Limit: 1 00:25:03.552 Async Event Request Limit: 1 00:25:03.552 Number of Firmware Slots: N/A 00:25:03.552 Firmware Slot 1 Read-Only: N/A 00:25:03.552 Firmware Activation Without Reset: N/A 00:25:03.553 Multiple Update Detection Support: N/A 00:25:03.553 Firmware Update Granularity: No Information Provided 00:25:03.553 Per-Namespace SMART Log: No 00:25:03.553 Asymmetric Namespace Access Log Page: Not Supported 00:25:03.553 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:25:03.553 Command Effects Log Page: Not Supported 00:25:03.553 Get Log Page Extended Data: Supported 00:25:03.553 Telemetry Log Pages: Not Supported 00:25:03.553 Persistent Event Log Pages: Not Supported 00:25:03.553 Supported Log Pages Log Page: May Support 00:25:03.553 Commands Supported & Effects Log Page: Not Supported 00:25:03.553 Feature Identifiers & Effects Log Page:May Support 00:25:03.553 NVMe-MI Commands & Effects Log Page: May Support 00:25:03.553 Data Area 4 for Telemetry Log: Not Supported 00:25:03.553 Error Log Page Entries 
Supported: 1 00:25:03.553 Keep Alive: Not Supported 00:25:03.553 00:25:03.553 NVM Command Set Attributes 00:25:03.553 ========================== 00:25:03.553 Submission Queue Entry Size 00:25:03.553 Max: 1 00:25:03.553 Min: 1 00:25:03.553 Completion Queue Entry Size 00:25:03.553 Max: 1 00:25:03.553 Min: 1 00:25:03.553 Number of Namespaces: 0 00:25:03.553 Compare Command: Not Supported 00:25:03.553 Write Uncorrectable Command: Not Supported 00:25:03.553 Dataset Management Command: Not Supported 00:25:03.553 Write Zeroes Command: Not Supported 00:25:03.553 Set Features Save Field: Not Supported 00:25:03.553 Reservations: Not Supported 00:25:03.553 Timestamp: Not Supported 00:25:03.553 Copy: Not Supported 00:25:03.553 Volatile Write Cache: Not Present 00:25:03.553 Atomic Write Unit (Normal): 1 00:25:03.553 Atomic Write Unit (PFail): 1 00:25:03.553 Atomic Compare & Write Unit: 1 00:25:03.553 Fused Compare & Write: Not Supported 00:25:03.553 Scatter-Gather List 00:25:03.553 SGL Command Set: Supported 00:25:03.553 SGL Keyed: Not Supported 00:25:03.553 SGL Bit Bucket Descriptor: Not Supported 00:25:03.553 SGL Metadata Pointer: Not Supported 00:25:03.553 Oversized SGL: Not Supported 00:25:03.553 SGL Metadata Address: Not Supported 00:25:03.553 SGL Offset: Supported 00:25:03.553 Transport SGL Data Block: Not Supported 00:25:03.553 Replay Protected Memory Block: Not Supported 00:25:03.553 00:25:03.553 Firmware Slot Information 00:25:03.553 ========================= 00:25:03.553 Active slot: 0 00:25:03.553 00:25:03.553 00:25:03.553 Error Log 00:25:03.553 ========= 00:25:03.553 00:25:03.553 Active Namespaces 00:25:03.553 ================= 00:25:03.553 Discovery Log Page 00:25:03.553 ================== 00:25:03.553 Generation Counter: 2 00:25:03.553 Number of Records: 2 00:25:03.553 Record Format: 0 00:25:03.553 00:25:03.553 Discovery Log Entry 0 00:25:03.553 ---------------------- 00:25:03.553 Transport Type: 3 (TCP) 00:25:03.553 Address Family: 1 (IPv4) 00:25:03.553 Subsystem 
Type: 3 (Current Discovery Subsystem) 00:25:03.553 Entry Flags: 00:25:03.553 Duplicate Returned Information: 0 00:25:03.553 Explicit Persistent Connection Support for Discovery: 0 00:25:03.553 Transport Requirements: 00:25:03.553 Secure Channel: Not Specified 00:25:03.553 Port ID: 1 (0x0001) 00:25:03.553 Controller ID: 65535 (0xffff) 00:25:03.553 Admin Max SQ Size: 32 00:25:03.553 Transport Service Identifier: 4420 00:25:03.553 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:25:03.553 Transport Address: 10.0.0.1 00:25:03.553 Discovery Log Entry 1 00:25:03.553 ---------------------- 00:25:03.553 Transport Type: 3 (TCP) 00:25:03.553 Address Family: 1 (IPv4) 00:25:03.553 Subsystem Type: 2 (NVM Subsystem) 00:25:03.553 Entry Flags: 00:25:03.553 Duplicate Returned Information: 0 00:25:03.553 Explicit Persistent Connection Support for Discovery: 0 00:25:03.553 Transport Requirements: 00:25:03.553 Secure Channel: Not Specified 00:25:03.553 Port ID: 1 (0x0001) 00:25:03.553 Controller ID: 65535 (0xffff) 00:25:03.553 Admin Max SQ Size: 32 00:25:03.553 Transport Service Identifier: 4420 00:25:03.553 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:25:03.553 Transport Address: 10.0.0.1 00:25:03.553 07:21:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:25:03.553 get_feature(0x01) failed 00:25:03.553 get_feature(0x02) failed 00:25:03.553 get_feature(0x04) failed 00:25:03.553 ===================================================== 00:25:03.553 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:25:03.553 ===================================================== 00:25:03.553 Controller Capabilities/Features 00:25:03.553 ================================ 00:25:03.553 Vendor ID: 0000 00:25:03.553 Subsystem Vendor ID: 
0000 00:25:03.553 Serial Number: 25fcf2afd7bcb2963332 00:25:03.553 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:25:03.553 Firmware Version: 6.8.9-20 00:25:03.553 Recommended Arb Burst: 6 00:25:03.553 IEEE OUI Identifier: 00 00 00 00:25:03.553 Multi-path I/O 00:25:03.553 May have multiple subsystem ports: Yes 00:25:03.553 May have multiple controllers: Yes 00:25:03.553 Associated with SR-IOV VF: No 00:25:03.553 Max Data Transfer Size: Unlimited 00:25:03.553 Max Number of Namespaces: 1024 00:25:03.553 Max Number of I/O Queues: 128 00:25:03.553 NVMe Specification Version (VS): 1.3 00:25:03.553 NVMe Specification Version (Identify): 1.3 00:25:03.553 Maximum Queue Entries: 1024 00:25:03.553 Contiguous Queues Required: No 00:25:03.553 Arbitration Mechanisms Supported 00:25:03.553 Weighted Round Robin: Not Supported 00:25:03.553 Vendor Specific: Not Supported 00:25:03.553 Reset Timeout: 7500 ms 00:25:03.553 Doorbell Stride: 4 bytes 00:25:03.553 NVM Subsystem Reset: Not Supported 00:25:03.553 Command Sets Supported 00:25:03.553 NVM Command Set: Supported 00:25:03.554 Boot Partition: Not Supported 00:25:03.554 Memory Page Size Minimum: 4096 bytes 00:25:03.554 Memory Page Size Maximum: 4096 bytes 00:25:03.554 Persistent Memory Region: Not Supported 00:25:03.554 Optional Asynchronous Events Supported 00:25:03.554 Namespace Attribute Notices: Supported 00:25:03.554 Firmware Activation Notices: Not Supported 00:25:03.554 ANA Change Notices: Supported 00:25:03.554 PLE Aggregate Log Change Notices: Not Supported 00:25:03.554 LBA Status Info Alert Notices: Not Supported 00:25:03.554 EGE Aggregate Log Change Notices: Not Supported 00:25:03.554 Normal NVM Subsystem Shutdown event: Not Supported 00:25:03.554 Zone Descriptor Change Notices: Not Supported 00:25:03.554 Discovery Log Change Notices: Not Supported 00:25:03.554 Controller Attributes 00:25:03.554 128-bit Host Identifier: Supported 00:25:03.554 Non-Operational Permissive Mode: Not Supported 00:25:03.554 NVM Sets: Not 
Supported 00:25:03.554 Read Recovery Levels: Not Supported 00:25:03.554 Endurance Groups: Not Supported 00:25:03.554 Predictable Latency Mode: Not Supported 00:25:03.554 Traffic Based Keep ALive: Supported 00:25:03.554 Namespace Granularity: Not Supported 00:25:03.554 SQ Associations: Not Supported 00:25:03.554 UUID List: Not Supported 00:25:03.554 Multi-Domain Subsystem: Not Supported 00:25:03.554 Fixed Capacity Management: Not Supported 00:25:03.554 Variable Capacity Management: Not Supported 00:25:03.554 Delete Endurance Group: Not Supported 00:25:03.554 Delete NVM Set: Not Supported 00:25:03.554 Extended LBA Formats Supported: Not Supported 00:25:03.554 Flexible Data Placement Supported: Not Supported 00:25:03.554 00:25:03.554 Controller Memory Buffer Support 00:25:03.554 ================================ 00:25:03.554 Supported: No 00:25:03.554 00:25:03.554 Persistent Memory Region Support 00:25:03.554 ================================ 00:25:03.554 Supported: No 00:25:03.554 00:25:03.554 Admin Command Set Attributes 00:25:03.554 ============================ 00:25:03.554 Security Send/Receive: Not Supported 00:25:03.554 Format NVM: Not Supported 00:25:03.554 Firmware Activate/Download: Not Supported 00:25:03.554 Namespace Management: Not Supported 00:25:03.554 Device Self-Test: Not Supported 00:25:03.554 Directives: Not Supported 00:25:03.554 NVMe-MI: Not Supported 00:25:03.554 Virtualization Management: Not Supported 00:25:03.554 Doorbell Buffer Config: Not Supported 00:25:03.554 Get LBA Status Capability: Not Supported 00:25:03.554 Command & Feature Lockdown Capability: Not Supported 00:25:03.554 Abort Command Limit: 4 00:25:03.554 Async Event Request Limit: 4 00:25:03.554 Number of Firmware Slots: N/A 00:25:03.554 Firmware Slot 1 Read-Only: N/A 00:25:03.554 Firmware Activation Without Reset: N/A 00:25:03.554 Multiple Update Detection Support: N/A 00:25:03.554 Firmware Update Granularity: No Information Provided 00:25:03.554 Per-Namespace SMART Log: Yes 
00:25:03.554 Asymmetric Namespace Access Log Page: Supported 00:25:03.554 ANA Transition Time : 10 sec 00:25:03.554 00:25:03.554 Asymmetric Namespace Access Capabilities 00:25:03.554 ANA Optimized State : Supported 00:25:03.554 ANA Non-Optimized State : Supported 00:25:03.554 ANA Inaccessible State : Supported 00:25:03.554 ANA Persistent Loss State : Supported 00:25:03.554 ANA Change State : Supported 00:25:03.554 ANAGRPID is not changed : No 00:25:03.554 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:25:03.554 00:25:03.554 ANA Group Identifier Maximum : 128 00:25:03.554 Number of ANA Group Identifiers : 128 00:25:03.554 Max Number of Allowed Namespaces : 1024 00:25:03.554 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:25:03.554 Command Effects Log Page: Supported 00:25:03.554 Get Log Page Extended Data: Supported 00:25:03.554 Telemetry Log Pages: Not Supported 00:25:03.554 Persistent Event Log Pages: Not Supported 00:25:03.554 Supported Log Pages Log Page: May Support 00:25:03.554 Commands Supported & Effects Log Page: Not Supported 00:25:03.554 Feature Identifiers & Effects Log Page:May Support 00:25:03.554 NVMe-MI Commands & Effects Log Page: May Support 00:25:03.554 Data Area 4 for Telemetry Log: Not Supported 00:25:03.554 Error Log Page Entries Supported: 128 00:25:03.554 Keep Alive: Supported 00:25:03.554 Keep Alive Granularity: 1000 ms 00:25:03.554 00:25:03.554 NVM Command Set Attributes 00:25:03.554 ========================== 00:25:03.554 Submission Queue Entry Size 00:25:03.554 Max: 64 00:25:03.554 Min: 64 00:25:03.554 Completion Queue Entry Size 00:25:03.554 Max: 16 00:25:03.554 Min: 16 00:25:03.554 Number of Namespaces: 1024 00:25:03.554 Compare Command: Not Supported 00:25:03.554 Write Uncorrectable Command: Not Supported 00:25:03.554 Dataset Management Command: Supported 00:25:03.554 Write Zeroes Command: Supported 00:25:03.554 Set Features Save Field: Not Supported 00:25:03.554 Reservations: Not Supported 00:25:03.554 Timestamp: Not Supported 
00:25:03.554 Copy: Not Supported 00:25:03.554 Volatile Write Cache: Present 00:25:03.554 Atomic Write Unit (Normal): 1 00:25:03.554 Atomic Write Unit (PFail): 1 00:25:03.554 Atomic Compare & Write Unit: 1 00:25:03.554 Fused Compare & Write: Not Supported 00:25:03.554 Scatter-Gather List 00:25:03.554 SGL Command Set: Supported 00:25:03.554 SGL Keyed: Not Supported 00:25:03.554 SGL Bit Bucket Descriptor: Not Supported 00:25:03.554 SGL Metadata Pointer: Not Supported 00:25:03.554 Oversized SGL: Not Supported 00:25:03.554 SGL Metadata Address: Not Supported 00:25:03.554 SGL Offset: Supported 00:25:03.554 Transport SGL Data Block: Not Supported 00:25:03.554 Replay Protected Memory Block: Not Supported 00:25:03.554 00:25:03.554 Firmware Slot Information 00:25:03.554 ========================= 00:25:03.554 Active slot: 0 00:25:03.554 00:25:03.554 Asymmetric Namespace Access 00:25:03.554 =========================== 00:25:03.554 Change Count : 0 00:25:03.554 Number of ANA Group Descriptors : 1 00:25:03.554 ANA Group Descriptor : 0 00:25:03.554 ANA Group ID : 1 00:25:03.554 Number of NSID Values : 1 00:25:03.555 Change Count : 0 00:25:03.555 ANA State : 1 00:25:03.555 Namespace Identifier : 1 00:25:03.555 00:25:03.555 Commands Supported and Effects 00:25:03.555 ============================== 00:25:03.555 Admin Commands 00:25:03.555 -------------- 00:25:03.555 Get Log Page (02h): Supported 00:25:03.555 Identify (06h): Supported 00:25:03.555 Abort (08h): Supported 00:25:03.555 Set Features (09h): Supported 00:25:03.555 Get Features (0Ah): Supported 00:25:03.555 Asynchronous Event Request (0Ch): Supported 00:25:03.555 Keep Alive (18h): Supported 00:25:03.555 I/O Commands 00:25:03.555 ------------ 00:25:03.555 Flush (00h): Supported 00:25:03.555 Write (01h): Supported LBA-Change 00:25:03.555 Read (02h): Supported 00:25:03.555 Write Zeroes (08h): Supported LBA-Change 00:25:03.555 Dataset Management (09h): Supported 00:25:03.555 00:25:03.555 Error Log 00:25:03.555 ========= 
00:25:03.555 Entry: 0 00:25:03.555 Error Count: 0x3 00:25:03.555 Submission Queue Id: 0x0 00:25:03.555 Command Id: 0x5 00:25:03.555 Phase Bit: 0 00:25:03.555 Status Code: 0x2 00:25:03.555 Status Code Type: 0x0 00:25:03.555 Do Not Retry: 1 00:25:03.555 Error Location: 0x28 00:25:03.555 LBA: 0x0 00:25:03.555 Namespace: 0x0 00:25:03.555 Vendor Log Page: 0x0 00:25:03.555 ----------- 00:25:03.555 Entry: 1 00:25:03.555 Error Count: 0x2 00:25:03.555 Submission Queue Id: 0x0 00:25:03.555 Command Id: 0x5 00:25:03.555 Phase Bit: 0 00:25:03.555 Status Code: 0x2 00:25:03.555 Status Code Type: 0x0 00:25:03.555 Do Not Retry: 1 00:25:03.555 Error Location: 0x28 00:25:03.555 LBA: 0x0 00:25:03.555 Namespace: 0x0 00:25:03.555 Vendor Log Page: 0x0 00:25:03.555 ----------- 00:25:03.555 Entry: 2 00:25:03.555 Error Count: 0x1 00:25:03.555 Submission Queue Id: 0x0 00:25:03.555 Command Id: 0x4 00:25:03.555 Phase Bit: 0 00:25:03.555 Status Code: 0x2 00:25:03.555 Status Code Type: 0x0 00:25:03.555 Do Not Retry: 1 00:25:03.555 Error Location: 0x28 00:25:03.555 LBA: 0x0 00:25:03.555 Namespace: 0x0 00:25:03.555 Vendor Log Page: 0x0 00:25:03.555 00:25:03.555 Number of Queues 00:25:03.555 ================ 00:25:03.555 Number of I/O Submission Queues: 128 00:25:03.555 Number of I/O Completion Queues: 128 00:25:03.555 00:25:03.555 ZNS Specific Controller Data 00:25:03.555 ============================ 00:25:03.555 Zone Append Size Limit: 0 00:25:03.555 00:25:03.555 00:25:03.555 Active Namespaces 00:25:03.555 ================= 00:25:03.555 get_feature(0x05) failed 00:25:03.555 Namespace ID:1 00:25:03.555 Command Set Identifier: NVM (00h) 00:25:03.555 Deallocate: Supported 00:25:03.555 Deallocated/Unwritten Error: Not Supported 00:25:03.555 Deallocated Read Value: Unknown 00:25:03.555 Deallocate in Write Zeroes: Not Supported 00:25:03.555 Deallocated Guard Field: 0xFFFF 00:25:03.555 Flush: Supported 00:25:03.555 Reservation: Not Supported 00:25:03.555 Namespace Sharing Capabilities: Multiple 
Controllers 00:25:03.555 Size (in LBAs): 1953525168 (931GiB) 00:25:03.555 Capacity (in LBAs): 1953525168 (931GiB) 00:25:03.555 Utilization (in LBAs): 1953525168 (931GiB) 00:25:03.555 UUID: 382eedb9-f2f4-4133-9784-b43afd34b125 00:25:03.555 Thin Provisioning: Not Supported 00:25:03.555 Per-NS Atomic Units: Yes 00:25:03.555 Atomic Boundary Size (Normal): 0 00:25:03.555 Atomic Boundary Size (PFail): 0 00:25:03.555 Atomic Boundary Offset: 0 00:25:03.555 NGUID/EUI64 Never Reused: No 00:25:03.555 ANA group ID: 1 00:25:03.555 Namespace Write Protected: No 00:25:03.555 Number of LBA Formats: 1 00:25:03.555 Current LBA Format: LBA Format #00 00:25:03.555 LBA Format #00: Data Size: 512 Metadata Size: 0 00:25:03.555 00:25:03.555 07:21:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:25:03.555 07:21:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:03.555 07:21:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:25:03.555 07:21:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:03.555 07:21:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:25:03.555 07:21:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:03.555 07:21:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:03.555 rmmod nvme_tcp 00:25:03.555 rmmod nvme_fabrics 00:25:03.555 07:21:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:03.555 07:21:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:25:03.555 07:21:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:25:03.555 07:21:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 
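The error-log entries above report `Status Code: 0x2` with `Status Code Type: 0x0`. A small decoder for those pairs (a hypothetical helper, not part of the test scripts; the strings follow the generic status codes in the NVMe base spec) makes the entries readable:

```shell
# Decode the Status Code Type / Status Code pairs printed in the error log.
# SCT 0x0 is the Generic Command Status set; only a few values are mapped here.
nvme_status_str() {
    sct=$1 sc=$2
    case "$sct:$sc" in
        0x0:0x0) echo "Generic: Successful Completion" ;;
        0x0:0x1) echo "Generic: Invalid Command Opcode" ;;
        0x0:0x2) echo "Generic: Invalid Field in Command" ;;
        *)       echo "SCT=$sct SC=$sc (not decoded)" ;;
    esac
}
```

Against the entries above, SCT 0x0 / SC 0x2 decodes to "Invalid Field in Command", consistent with the `get_feature(0x05) failed` line in the Active Namespaces section.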
00:25:03.555 07:21:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:03.555 07:21:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:03.555 07:21:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:03.555 07:21:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:25:03.555 07:21:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:25:03.555 07:21:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:03.555 07:21:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:25:03.555 07:21:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:03.555 07:21:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:03.555 07:21:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:03.555 07:21:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:03.555 07:21:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:06.086 07:21:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:06.086 07:21:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:25:06.086 07:21:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:25:06.086 07:21:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:25:06.086 07:21:10 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:06.086 07:21:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:25:06.086 07:21:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:25:06.086 07:21:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:06.086 07:21:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:25:06.086 07:21:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:25:06.086 07:21:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:25:08.622 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:25:08.622 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:25:08.622 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:25:08.623 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:25:08.623 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:25:08.623 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:25:08.623 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:25:08.623 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:25:08.623 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:25:08.623 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:25:08.623 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:25:08.623 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:25:08.623 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:25:08.623 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:25:08.623 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:25:08.623 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 
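The `clean_kernel_target` trace above tears down the kernel NVMe-oF target through configfs: unlink the subsystem from the port, remove the namespace, remove the port and subsystem directories, then unload the modules. A sketch of that sequence (paths and order taken from the trace; the `DRYRUN` switch is an addition for illustration, since the real commands need root and a mounted nvmet configfs):

```shell
# Mirror of the traced configfs teardown for the kernel nvmet target.
# With DRYRUN=1 the commands are only printed, not executed.
clean_kernel_target() {
    nqn=${1:-nqn.2016-06.io.spdk:testnqn}
    run() { if [ "${DRYRUN:-0}" = 1 ]; then echo "$*"; else "$@"; fi; }
    run rm -f "/sys/kernel/config/nvmet/ports/1/subsystems/$nqn"   # unlink subsystem from port
    run rmdir "/sys/kernel/config/nvmet/subsystems/$nqn/namespaces/1"
    run rmdir /sys/kernel/config/nvmet/ports/1
    run rmdir "/sys/kernel/config/nvmet/subsystems/$nqn"
    run modprobe -r nvmet_tcp nvmet                                # unload transport, then core
}
```

The ordering matters: configfs refuses to remove a subsystem directory while a port still links it, which is why the `rm -f` of the port link comes first in the trace.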
00:25:09.559 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:25:09.559 00:25:09.559 real 0m16.716s 00:25:09.559 user 0m4.325s 00:25:09.559 sys 0m8.748s 00:25:09.559 07:21:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1128 -- # xtrace_disable 00:25:09.559 07:21:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:25:09.559 ************************************ 00:25:09.559 END TEST nvmf_identify_kernel_target 00:25:09.559 ************************************ 00:25:09.559 07:21:14 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:25:09.559 07:21:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:25:09.559 07:21:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:25:09.559 07:21:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.819 ************************************ 00:25:09.819 START TEST nvmf_auth_host 00:25:09.819 ************************************ 00:25:09.819 07:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:25:09.819 * Looking for test storage... 
00:25:09.819 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:09.819 07:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:25:09.819 07:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1691 -- # lcov --version 00:25:09.819 07:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:25:09.819 07:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:25:09.819 07:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:09.819 07:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:09.819 07:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:09.819 07:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:25:09.819 07:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:25:09.819 07:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:25:09.819 07:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:25:09.819 07:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:25:09.819 07:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:25:09.819 07:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:25:09.819 07:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:09.819 07:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:25:09.819 07:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:25:09.819 07:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:09.819 07:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:09.819 07:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:25:09.819 07:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:25:09.819 07:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:09.819 07:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:25:09.819 07:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:25:09.819 07:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:25:09.819 07:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:25:09.819 07:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:09.819 07:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:25:09.819 07:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:25:09.819 07:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:09.819 07:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:09.819 07:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:25:09.819 07:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:09.819 07:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:25:09.819 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:09.819 --rc genhtml_branch_coverage=1 00:25:09.819 --rc genhtml_function_coverage=1 00:25:09.819 --rc genhtml_legend=1 00:25:09.819 --rc geninfo_all_blocks=1 00:25:09.819 --rc geninfo_unexecuted_blocks=1 00:25:09.819 00:25:09.819 ' 00:25:09.819 07:21:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:25:09.819 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:09.819 --rc genhtml_branch_coverage=1 00:25:09.819 --rc genhtml_function_coverage=1 00:25:09.819 --rc genhtml_legend=1 00:25:09.819 --rc geninfo_all_blocks=1 00:25:09.819 --rc geninfo_unexecuted_blocks=1 00:25:09.819 00:25:09.819 ' 00:25:09.819 07:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:25:09.819 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:09.819 --rc genhtml_branch_coverage=1 00:25:09.819 --rc genhtml_function_coverage=1 00:25:09.819 --rc genhtml_legend=1 00:25:09.819 --rc geninfo_all_blocks=1 00:25:09.819 --rc geninfo_unexecuted_blocks=1 00:25:09.819 00:25:09.819 ' 00:25:09.819 07:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:25:09.819 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:09.819 --rc genhtml_branch_coverage=1 00:25:09.819 --rc genhtml_function_coverage=1 00:25:09.819 --rc genhtml_legend=1 00:25:09.819 --rc geninfo_all_blocks=1 00:25:09.819 --rc geninfo_unexecuted_blocks=1 00:25:09.819 00:25:09.819 ' 00:25:09.819 07:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:09.819 07:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:25:09.819 07:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:09.820 07:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:09.820 07:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:09.820 07:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:09.820 07:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
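The `lt 1.15 2` / `cmp_versions` trace above compares the installed lcov version field by field to pick coverage options. A compact stand-in for that check (my own `ver_lt` name, and `sort -V` in place of the array loop that `scripts/common.sh` actually traces) behaves the same way for these inputs:

```shell
# True (exit 0) when $1 sorts strictly before $2 under version ordering,
# e.g. ver_lt 1.15 2 succeeds, matching the traced cmp_versions result.
ver_lt() {
    [ "$1" != "$2" ] &&
        [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]
}
```

Here `1.15 < 2` holds because the first dotted field (1 vs 2) already decides the comparison, which is why the trace stops after `(( ver1[v] < ver2[v] ))` and returns 0.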
00:25:09.820 07:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:09.820 07:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:09.820 07:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:09.820 07:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:09.820 07:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:09.820 07:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:25:09.820 07:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:25:09.820 07:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:09.820 07:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:09.820 07:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:09.820 07:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:09.820 07:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:09.820 07:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:25:09.820 07:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:09.820 07:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:09.820 07:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:09.820 07:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:09.820 07:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:09.820 07:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:09.820 07:21:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:25:09.820 07:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:09.820 07:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:25:09.820 07:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:09.820 07:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:09.820 07:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:09.820 07:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:09.820 07:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:09.820 07:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:09.820 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:09.820 07:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:09.820 07:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:09.820 07:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:09.820 07:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # 
digests=("sha256" "sha384" "sha512") 00:25:09.820 07:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:25:09.820 07:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:25:09.820 07:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:25:09.820 07:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:09.820 07:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:25:09.820 07:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:25:09.820 07:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:25:09.820 07:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:25:09.820 07:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:09.820 07:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:09.820 07:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:09.820 07:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:09.820 07:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:09.820 07:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:09.820 07:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:09.820 07:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:09.820 07:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:09.820 07:21:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:09.820 07:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:25:09.820 07:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.396 07:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:16.396 07:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:25:16.396 07:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:16.396 07:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:16.396 07:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:16.396 07:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:16.396 07:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:16.396 07:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:25:16.396 07:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:16.396 07:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:25:16.396 07:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:25:16.396 07:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:25:16.396 07:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:25:16.396 07:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:25:16.396 07:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:25:16.396 07:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:16.396 07:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:16.396 07:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:16.396 07:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:16.396 07:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:16.396 07:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:16.396 07:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:16.396 07:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:16.397 07:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:16.397 07:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:16.397 07:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:16.397 07:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:16.397 07:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:16.397 07:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:16.397 07:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:16.397 07:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:16.397 07:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:16.397 07:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:16.397 07:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:16.397 07:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:25:16.397 Found 0000:86:00.0 (0x8086 - 0x159b) 00:25:16.397 07:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:16.397 07:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:16.397 07:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:16.397 07:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:16.397 07:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:16.397 07:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:16.397 07:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:25:16.397 Found 0000:86:00.1 (0x8086 - 0x159b) 00:25:16.397 07:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:16.397 07:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:16.397 07:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:16.397 07:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:16.397 07:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:16.397 07:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:16.397 07:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:16.397 07:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:16.397 07:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 
00:25:16.397 07:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:16.397 07:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:16.397 07:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:16.397 07:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:16.397 07:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:16.397 07:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:16.397 07:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:25:16.397 Found net devices under 0000:86:00.0: cvl_0_0 00:25:16.397 07:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:16.397 07:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:16.397 07:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:16.397 07:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:16.397 07:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:16.397 07:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:16.397 07:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:16.397 07:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:16.397 07:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:25:16.397 Found net devices under 0000:86:00.1: cvl_0_1 00:25:16.397 07:21:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:16.397 07:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:16.397 07:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # is_hw=yes 00:25:16.397 07:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:16.397 07:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:16.397 07:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:16.397 07:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:16.397 07:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:16.397 07:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:16.397 07:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:16.397 07:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:16.397 07:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:16.397 07:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:16.397 07:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:16.397 07:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:16.397 07:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:16.397 07:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:16.397 07:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:16.397 07:21:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:16.397 07:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:16.397 07:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:16.397 07:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:16.397 07:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:16.397 07:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:16.397 07:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:16.397 07:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:16.397 07:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:16.397 07:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:16.397 07:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:16.397 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:16.397 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.467 ms 00:25:16.397 00:25:16.397 --- 10.0.0.2 ping statistics --- 00:25:16.397 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:16.397 rtt min/avg/max/mdev = 0.467/0.467/0.467/0.000 ms 00:25:16.397 07:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:16.397 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:16.397 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.141 ms 00:25:16.397 00:25:16.397 --- 10.0.0.1 ping statistics --- 00:25:16.397 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:16.397 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms 00:25:16.397 07:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:16.397 07:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # return 0 00:25:16.397 07:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:16.397 07:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:16.397 07:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:16.397 07:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:16.397 07:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:16.397 07:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:16.397 07:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:16.397 07:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:25:16.397 07:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:16.397 07:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:16.398 07:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.398 07:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=1322531 00:25:16.398 07:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:25:16.398 07:21:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 1322531 00:25:16.398 07:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@833 -- # '[' -z 1322531 ']' 00:25:16.398 07:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:16.398 07:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:16.398 07:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:16.398 07:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:16.398 07:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.398 07:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:16.398 07:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@866 -- # return 0 00:25:16.398 07:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:16.398 07:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:16.398 07:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.398 07:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:16.398 07:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:25:16.398 07:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:25:16.398 07:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:16.398 07:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:16.398 07:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:16.398 07:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:25:16.398 07:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:25:16.398 07:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:16.398 07:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=2946a44e874c5c7807510234a2354ebd 00:25:16.398 07:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:25:16.398 07:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.Skf 00:25:16.398 07:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 2946a44e874c5c7807510234a2354ebd 0 00:25:16.398 07:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 2946a44e874c5c7807510234a2354ebd 0 00:25:16.398 07:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:16.398 07:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:16.398 07:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=2946a44e874c5c7807510234a2354ebd 00:25:16.398 07:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:25:16.398 07:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:16.398 07:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.Skf 00:25:16.398 07:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.Skf 00:25:16.398 07:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.Skf 00:25:16.398 07:21:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:25:16.398 07:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:16.398 07:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:16.398 07:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:16.398 07:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:25:16.398 07:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:25:16.398 07:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:25:16.398 07:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=19a075dd98c3a4de9f302c36e512836b86d7d6e2454f3862aae5141abb87b946 00:25:16.398 07:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:25:16.398 07:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.Mua 00:25:16.398 07:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 19a075dd98c3a4de9f302c36e512836b86d7d6e2454f3862aae5141abb87b946 3 00:25:16.398 07:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 19a075dd98c3a4de9f302c36e512836b86d7d6e2454f3862aae5141abb87b946 3 00:25:16.398 07:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:16.398 07:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:16.398 07:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=19a075dd98c3a4de9f302c36e512836b86d7d6e2454f3862aae5141abb87b946 00:25:16.398 07:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:25:16.398 07:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 
00:25:16.398 07:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.Mua 00:25:16.398 07:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.Mua 00:25:16.398 07:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.Mua 00:25:16.398 07:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:25:16.398 07:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:16.398 07:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:16.398 07:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:16.398 07:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:25:16.398 07:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:25:16.398 07:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:25:16.398 07:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=43cce2f5ba07b84178403e94c8049335edfad0e5684b7a2e 00:25:16.398 07:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:25:16.398 07:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.y2a 00:25:16.398 07:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 43cce2f5ba07b84178403e94c8049335edfad0e5684b7a2e 0 00:25:16.398 07:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 43cce2f5ba07b84178403e94c8049335edfad0e5684b7a2e 0 00:25:16.398 07:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:16.398 07:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:16.398 07:21:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=43cce2f5ba07b84178403e94c8049335edfad0e5684b7a2e 00:25:16.398 07:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:25:16.398 07:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:16.398 07:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.y2a 00:25:16.398 07:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.y2a 00:25:16.398 07:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.y2a 00:25:16.398 07:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:25:16.398 07:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:16.398 07:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:16.398 07:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:16.398 07:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:25:16.398 07:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:25:16.398 07:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:25:16.398 07:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=2ed75192a28369bf52637b4bdf4ee116a0bb6dd33e21d6ff 00:25:16.398 07:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:25:16.398 07:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.eJy 00:25:16.398 07:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 2ed75192a28369bf52637b4bdf4ee116a0bb6dd33e21d6ff 2 00:25:16.398 07:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # 
format_key DHHC-1 2ed75192a28369bf52637b4bdf4ee116a0bb6dd33e21d6ff 2 00:25:16.398 07:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:16.398 07:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:16.399 07:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=2ed75192a28369bf52637b4bdf4ee116a0bb6dd33e21d6ff 00:25:16.399 07:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:25:16.399 07:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:16.399 07:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.eJy 00:25:16.399 07:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.eJy 00:25:16.399 07:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.eJy 00:25:16.399 07:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:25:16.399 07:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:16.399 07:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:16.399 07:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:16.399 07:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:25:16.399 07:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:25:16.399 07:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:16.399 07:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=d60066a1d766dbccf96cfa9e31975704 00:25:16.399 07:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:25:16.399 07:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.aSz 00:25:16.399 07:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key d60066a1d766dbccf96cfa9e31975704 1 00:25:16.399 07:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 d60066a1d766dbccf96cfa9e31975704 1 00:25:16.399 07:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:16.399 07:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:16.399 07:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=d60066a1d766dbccf96cfa9e31975704 00:25:16.399 07:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:25:16.399 07:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:16.399 07:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.aSz 00:25:16.399 07:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.aSz 00:25:16.399 07:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.aSz 00:25:16.399 07:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:25:16.399 07:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:16.399 07:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:16.399 07:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:16.399 07:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:25:16.399 07:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:25:16.399 07:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:16.399 07:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@755 -- # key=e68740ade04a08baaa04b28de6820f7c 00:25:16.399 07:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:25:16.399 07:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.zb4 00:25:16.399 07:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key e68740ade04a08baaa04b28de6820f7c 1 00:25:16.399 07:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 e68740ade04a08baaa04b28de6820f7c 1 00:25:16.399 07:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:16.399 07:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:16.399 07:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=e68740ade04a08baaa04b28de6820f7c 00:25:16.399 07:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:25:16.399 07:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:16.399 07:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.zb4 00:25:16.399 07:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.zb4 00:25:16.399 07:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.zb4 00:25:16.399 07:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:25:16.399 07:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:16.399 07:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:16.399 07:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:16.399 07:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:25:16.399 07:21:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:25:16.399 07:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:25:16.399 07:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=289243fab6a0867db9dd7a92242e6daa458013059505a647 00:25:16.399 07:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:25:16.399 07:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.CN7 00:25:16.399 07:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 289243fab6a0867db9dd7a92242e6daa458013059505a647 2 00:25:16.399 07:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 289243fab6a0867db9dd7a92242e6daa458013059505a647 2 00:25:16.399 07:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:16.399 07:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:16.399 07:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=289243fab6a0867db9dd7a92242e6daa458013059505a647 00:25:16.399 07:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:25:16.399 07:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:16.399 07:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.CN7 00:25:16.399 07:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.CN7 00:25:16.399 07:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.CN7 00:25:16.399 07:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:25:16.399 07:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:16.399 07:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:16.399 07:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:16.399 07:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:25:16.399 07:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:25:16.659 07:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:16.659 07:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=645be5f8174d7444f096f1eaaf357550 00:25:16.659 07:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:25:16.659 07:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.fh3 00:25:16.659 07:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 645be5f8174d7444f096f1eaaf357550 0 00:25:16.659 07:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 645be5f8174d7444f096f1eaaf357550 0 00:25:16.659 07:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:16.659 07:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:16.659 07:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=645be5f8174d7444f096f1eaaf357550 00:25:16.659 07:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:25:16.659 07:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:16.659 07:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.fh3 00:25:16.659 07:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.fh3 00:25:16.659 07:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.fh3 00:25:16.659 07:21:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:25:16.659 07:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:16.659 07:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:16.659 07:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:16.659 07:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:25:16.659 07:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:25:16.659 07:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:25:16.659 07:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=9d5ee628369a2c222ad292f0b5738ba336ebbc0486d5c28fdc07e84976ec4961 00:25:16.659 07:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:25:16.659 07:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.GMF 00:25:16.659 07:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 9d5ee628369a2c222ad292f0b5738ba336ebbc0486d5c28fdc07e84976ec4961 3 00:25:16.659 07:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 9d5ee628369a2c222ad292f0b5738ba336ebbc0486d5c28fdc07e84976ec4961 3 00:25:16.659 07:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:16.659 07:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:16.659 07:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=9d5ee628369a2c222ad292f0b5738ba336ebbc0486d5c28fdc07e84976ec4961 00:25:16.659 07:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:25:16.659 07:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 
00:25:16.659 07:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.GMF 00:25:16.659 07:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.GMF 00:25:16.659 07:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.GMF 00:25:16.659 07:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:25:16.659 07:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 1322531 00:25:16.659 07:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@833 -- # '[' -z 1322531 ']' 00:25:16.659 07:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:16.659 07:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:16.659 07:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:16.659 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
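The `gen_dhchap_key` runs traced above each read random bytes with `xxd -p -c0 -l N /dev/urandom`, pick a digest index from the `digests` map (`null`=0, `sha256`=1, `sha384`=2, `sha512`=3), and hand both to an inline `python -` whose body xtrace does not show. A minimal sketch of that wrapping step, assuming the standard DHHC-1 secret layout (base64 of the key bytes followed by their little-endian CRC-32, with a two-digit digest indicator) — the function name and exact encoding here are an assumption, not the script's verbatim code:

```python
import base64
import zlib

def format_dhchap_key(hex_key: str, digest: int) -> str:
    """Wrap a raw hex key in the DHHC-1 secret format (assumed layout).

    digest: 0=null, 1=sha256, 2=sha384, 3=sha512, matching the
    digests map shown in the trace above.
    """
    key = bytes.fromhex(hex_key)
    # Append the CRC-32 of the key bytes, little-endian, then base64-encode.
    crc = zlib.crc32(key).to_bytes(4, "little")
    encoded = base64.b64encode(key + crc).decode("ascii")
    return f"DHHC-1:{digest:02}:{encoded}:"

# Example with the first key value from the trace (digest 0 = null):
print(format_dhchap_key("2946a44e874c5c7807510234a2354ebd", 0))
```

The resulting string is what gets written to the `/tmp/spdk.key-*` files that `keyring_file_add_key` later loads; the trailing CRC lets the consumer detect a corrupted secret before using it.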
00:25:16.659 07:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:16.659 07:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.919 07:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:16.919 07:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@866 -- # return 0 00:25:16.919 07:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:16.919 07:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.Skf 00:25:16.919 07:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:16.919 07:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.919 07:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:16.919 07:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.Mua ]] 00:25:16.919 07:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Mua 00:25:16.919 07:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:16.919 07:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.919 07:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:16.919 07:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:16.919 07:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.y2a 00:25:16.919 07:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:16.919 07:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:25:16.919 07:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:16.919 07:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.eJy ]] 00:25:16.919 07:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.eJy 00:25:16.919 07:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:16.919 07:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.919 07:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:16.919 07:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:16.919 07:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.aSz 00:25:16.919 07:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:16.919 07:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.919 07:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:16.919 07:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.zb4 ]] 00:25:16.919 07:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.zb4 00:25:16.919 07:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:16.919 07:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.919 07:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:16.919 07:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:16.919 07:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd 
keyring_file_add_key key3 /tmp/spdk.key-sha384.CN7 00:25:16.919 07:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:16.919 07:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.919 07:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:16.920 07:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.fh3 ]] 00:25:16.920 07:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.fh3 00:25:16.920 07:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:16.920 07:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.920 07:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:16.920 07:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:16.920 07:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.GMF 00:25:16.920 07:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:16.920 07:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.920 07:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:16.920 07:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:25:16.920 07:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:25:16.920 07:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:25:16.920 07:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:16.920 07:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:16.920 07:21:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:16.920 07:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:16.920 07:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:16.920 07:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:16.920 07:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:16.920 07:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:16.920 07:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:16.920 07:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:16.920 07:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:25:16.920 07:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:25:16.920 07:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:25:16.920 07:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:16.920 07:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:25:16.920 07:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:25:16.920 07:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:25:16.920 07:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:25:16.920 07:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:25:16.920 07:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:25:16.920 07:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:25:19.454 Waiting for block devices as requested 00:25:19.713 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:25:19.713 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:25:19.713 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:25:19.972 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:25:19.972 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:25:19.972 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:25:20.231 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:25:20.231 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:25:20.231 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:25:20.231 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:25:20.490 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:25:20.490 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:25:20.490 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:25:20.490 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:25:20.749 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:25:20.749 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:25:20.749 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:25:21.316 07:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:25:21.316 07:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:25:21.316 07:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:25:21.316 07:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:25:21.316 07:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # [[ -e 
/sys/block/nvme0n1/queue/zoned ]] 00:25:21.316 07:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:25:21.316 07:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:25:21.316 07:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:25:21.316 07:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:25:21.575 No valid GPT data, bailing 00:25:21.575 07:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:25:21.575 07:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:25:21.575 07:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:25:21.575 07:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:25:21.575 07:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:25:21.575 07:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:21.575 07:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:25:21.575 07:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:25:21.575 07:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:25:21.575 07:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:25:21.575 07:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:25:21.575 07:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:25:21.575 07:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 
-- # echo 10.0.0.1 00:25:21.575 07:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:25:21.575 07:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:25:21.575 07:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:25:21.575 07:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:25:21.575 07:21:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:25:21.575 00:25:21.575 Discovery Log Number of Records 2, Generation counter 2 00:25:21.575 =====Discovery Log Entry 0====== 00:25:21.575 trtype: tcp 00:25:21.575 adrfam: ipv4 00:25:21.575 subtype: current discovery subsystem 00:25:21.575 treq: not specified, sq flow control disable supported 00:25:21.575 portid: 1 00:25:21.575 trsvcid: 4420 00:25:21.575 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:25:21.575 traddr: 10.0.0.1 00:25:21.575 eflags: none 00:25:21.575 sectype: none 00:25:21.575 =====Discovery Log Entry 1====== 00:25:21.575 trtype: tcp 00:25:21.575 adrfam: ipv4 00:25:21.575 subtype: nvme subsystem 00:25:21.575 treq: not specified, sq flow control disable supported 00:25:21.575 portid: 1 00:25:21.575 trsvcid: 4420 00:25:21.575 subnqn: nqn.2024-02.io.spdk:cnode0 00:25:21.575 traddr: 10.0.0.1 00:25:21.575 eflags: none 00:25:21.575 sectype: none 00:25:21.575 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:25:21.575 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:25:21.575 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:25:21.575 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:25:21.575 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:21.575 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:21.575 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:21.575 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:21.575 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDNjY2UyZjViYTA3Yjg0MTc4NDAzZTk0YzgwNDkzMzVlZGZhZDBlNTY4NGI3YTJlfx3cbg==: 00:25:21.575 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmVkNzUxOTJhMjgzNjliZjUyNjM3YjRiZGY0ZWUxMTZhMGJiNmRkMzNlMjFkNmZmLaVF+g==: 00:25:21.575 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:21.575 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:21.575 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDNjY2UyZjViYTA3Yjg0MTc4NDAzZTk0YzgwNDkzMzVlZGZhZDBlNTY4NGI3YTJlfx3cbg==: 00:25:21.575 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmVkNzUxOTJhMjgzNjliZjUyNjM3YjRiZGY0ZWUxMTZhMGJiNmRkMzNlMjFkNmZmLaVF+g==: ]] 00:25:21.575 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmVkNzUxOTJhMjgzNjliZjUyNjM3YjRiZGY0ZWUxMTZhMGJiNmRkMzNlMjFkNmZmLaVF+g==: 00:25:21.575 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:25:21.575 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:25:21.575 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:25:21.575 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:25:21.575 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:25:21.575 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:21.575 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:25:21.575 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:25:21.575 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:21.575 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:21.575 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:25:21.575 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:21.576 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.576 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:21.576 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:21.576 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:21.576 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:21.576 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:21.576 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:21.576 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:21.576 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:21.576 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:21.576 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:21.576 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:21.576 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:21.576 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:21.576 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:21.576 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.835 nvme0n1 00:25:21.835 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:21.835 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:21.835 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:21.835 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:21.835 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.835 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:21.835 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:21.835 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:21.835 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:25:21.835 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.835 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:21.835 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:25:21.835 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:21.835 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:21.835 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:25:21.835 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:21.835 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:21.835 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:21.835 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:21.835 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Mjk0NmE0NGU4NzRjNWM3ODA3NTEwMjM0YTIzNTRlYmR1lp2A: 00:25:21.835 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTlhMDc1ZGQ5OGMzYTRkZTlmMzAyYzM2ZTUxMjgzNmI4NmQ3ZDZlMjQ1NGYzODYyYWFlNTE0MWFiYjg3Yjk0NtdECCg=: 00:25:21.835 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:21.835 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:21.835 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Mjk0NmE0NGU4NzRjNWM3ODA3NTEwMjM0YTIzNTRlYmR1lp2A: 00:25:21.835 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTlhMDc1ZGQ5OGMzYTRkZTlmMzAyYzM2ZTUxMjgzNmI4NmQ3ZDZlMjQ1NGYzODYyYWFlNTE0MWFiYjg3Yjk0NtdECCg=: ]] 00:25:21.835 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:MTlhMDc1ZGQ5OGMzYTRkZTlmMzAyYzM2ZTUxMjgzNmI4NmQ3ZDZlMjQ1NGYzODYyYWFlNTE0MWFiYjg3Yjk0NtdECCg=: 00:25:21.835 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:25:21.835 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:21.835 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:21.835 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:21.835 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:21.835 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:21.835 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:21.836 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:21.836 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.836 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:21.836 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:21.836 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:21.836 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:21.836 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:21.836 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:21.836 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:21.836 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 
00:25:21.836 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:21.836 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:21.836 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:21.836 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:21.836 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:21.836 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:21.836 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.095 nvme0n1 00:25:22.095 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:22.095 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:22.095 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:22.095 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:22.095 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.095 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:22.095 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:22.095 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:22.095 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:22.095 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.095 07:21:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:22.095 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:22.095 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:25:22.095 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:22.095 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:22.095 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:22.095 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:22.096 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDNjY2UyZjViYTA3Yjg0MTc4NDAzZTk0YzgwNDkzMzVlZGZhZDBlNTY4NGI3YTJlfx3cbg==: 00:25:22.096 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmVkNzUxOTJhMjgzNjliZjUyNjM3YjRiZGY0ZWUxMTZhMGJiNmRkMzNlMjFkNmZmLaVF+g==: 00:25:22.096 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:22.096 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:22.096 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDNjY2UyZjViYTA3Yjg0MTc4NDAzZTk0YzgwNDkzMzVlZGZhZDBlNTY4NGI3YTJlfx3cbg==: 00:25:22.096 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmVkNzUxOTJhMjgzNjliZjUyNjM3YjRiZGY0ZWUxMTZhMGJiNmRkMzNlMjFkNmZmLaVF+g==: ]] 00:25:22.096 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmVkNzUxOTJhMjgzNjliZjUyNjM3YjRiZGY0ZWUxMTZhMGJiNmRkMzNlMjFkNmZmLaVF+g==: 00:25:22.096 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:25:22.096 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:22.096 
07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:22.096 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:22.096 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:22.096 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:22.096 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:22.096 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:22.096 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.096 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:22.096 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:22.096 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:22.096 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:22.096 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:22.096 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:22.096 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:22.096 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:22.096 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:22.096 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:22.096 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:22.096 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:22.096 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:22.096 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:22.096 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.374 nvme0n1 00:25:22.374 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:22.374 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:22.374 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:22.374 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:22.374 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.374 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:22.374 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:22.374 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:22.374 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:22.374 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.374 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:22.374 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:22.374 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:25:22.374 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:22.374 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:22.374 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:22.374 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:22.374 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDYwMDY2YTFkNzY2ZGJjY2Y5NmNmYTllMzE5NzU3MDS3GpaZ: 00:25:22.374 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTY4NzQwYWRlMDRhMDhiYWFhMDRiMjhkZTY4MjBmN2MvBSOz: 00:25:22.374 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:22.374 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:22.374 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDYwMDY2YTFkNzY2ZGJjY2Y5NmNmYTllMzE5NzU3MDS3GpaZ: 00:25:22.374 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTY4NzQwYWRlMDRhMDhiYWFhMDRiMjhkZTY4MjBmN2MvBSOz: ]] 00:25:22.374 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTY4NzQwYWRlMDRhMDhiYWFhMDRiMjhkZTY4MjBmN2MvBSOz: 00:25:22.374 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:25:22.374 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:22.374 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:22.374 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:22.374 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:22.374 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:22.374 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:22.374 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:22.374 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.374 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:22.374 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:22.374 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:22.374 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:22.374 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:22.374 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:22.374 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:22.374 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:22.374 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:22.374 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:22.374 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:22.374 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:22.374 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:22.374 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:22.374 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 
-- # set +x 00:25:22.374 nvme0n1 00:25:22.374 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:22.374 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:22.375 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:22.375 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:22.375 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.375 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:22.690 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:22.690 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:22.690 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:22.690 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.690 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:22.690 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:22.690 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:25:22.690 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:22.690 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:22.690 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:22.690 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:22.690 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:Mjg5MjQzZmFiNmEwODY3ZGI5ZGQ3YTkyMjQyZTZkYWE0NTgwMTMwNTk1MDVhNjQ34z8nYg==: 00:25:22.690 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjQ1YmU1ZjgxNzRkNzQ0NGYwOTZmMWVhYWYzNTc1NTA63hmQ: 00:25:22.690 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:22.690 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:22.690 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Mjg5MjQzZmFiNmEwODY3ZGI5ZGQ3YTkyMjQyZTZkYWE0NTgwMTMwNTk1MDVhNjQ34z8nYg==: 00:25:22.690 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjQ1YmU1ZjgxNzRkNzQ0NGYwOTZmMWVhYWYzNTc1NTA63hmQ: ]] 00:25:22.690 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjQ1YmU1ZjgxNzRkNzQ0NGYwOTZmMWVhYWYzNTc1NTA63hmQ: 00:25:22.690 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:25:22.690 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:22.690 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:22.690 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:22.690 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:22.690 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:22.690 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:22.690 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:22.690 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.690 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:22.690 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:22.690 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:22.690 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:22.690 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:22.690 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:22.690 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:22.690 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:22.690 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:22.690 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:22.690 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:22.690 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:22.690 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:22.690 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:22.690 07:21:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.690 nvme0n1 00:25:22.690 07:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:22.690 07:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:22.690 07:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r 
'.[].name' 00:25:22.690 07:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:22.690 07:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.690 07:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:22.691 07:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:22.691 07:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:22.691 07:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:22.691 07:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.691 07:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:22.691 07:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:22.691 07:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:25:22.691 07:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:22.691 07:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:22.691 07:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:22.691 07:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:22.691 07:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OWQ1ZWU2MjgzNjlhMmMyMjJhZDI5MmYwYjU3MzhiYTMzNmViYmMwNDg2ZDVjMjhmZGMwN2U4NDk3NmVjNDk2MYcacz8=: 00:25:22.691 07:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:22.691 07:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:22.691 07:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:22.691 07:21:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OWQ1ZWU2MjgzNjlhMmMyMjJhZDI5MmYwYjU3MzhiYTMzNmViYmMwNDg2ZDVjMjhmZGMwN2U4NDk3NmVjNDk2MYcacz8=: 00:25:22.691 07:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:22.691 07:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:25:22.691 07:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:22.691 07:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:22.691 07:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:22.691 07:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:22.691 07:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:22.691 07:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:22.691 07:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:22.691 07:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.691 07:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:22.691 07:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:22.691 07:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:22.691 07:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:22.691 07:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:22.691 07:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:22.691 07:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:22.691 07:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:22.691 07:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:22.691 07:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:22.691 07:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:22.691 07:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:22.691 07:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:22.691 07:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:22.691 07:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.008 nvme0n1 00:25:23.008 07:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:23.008 07:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:23.008 07:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:23.008 07:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:23.008 07:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.008 07:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:23.008 07:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:23.008 07:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:23.008 07:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:23.008 
07:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.008 07:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:23.008 07:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:23.008 07:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:23.008 07:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:25:23.008 07:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:23.008 07:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:23.008 07:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:23.008 07:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:23.008 07:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Mjk0NmE0NGU4NzRjNWM3ODA3NTEwMjM0YTIzNTRlYmR1lp2A: 00:25:23.008 07:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTlhMDc1ZGQ5OGMzYTRkZTlmMzAyYzM2ZTUxMjgzNmI4NmQ3ZDZlMjQ1NGYzODYyYWFlNTE0MWFiYjg3Yjk0NtdECCg=: 00:25:23.008 07:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:23.008 07:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:23.008 07:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Mjk0NmE0NGU4NzRjNWM3ODA3NTEwMjM0YTIzNTRlYmR1lp2A: 00:25:23.008 07:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTlhMDc1ZGQ5OGMzYTRkZTlmMzAyYzM2ZTUxMjgzNmI4NmQ3ZDZlMjQ1NGYzODYyYWFlNTE0MWFiYjg3Yjk0NtdECCg=: ]] 00:25:23.008 07:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTlhMDc1ZGQ5OGMzYTRkZTlmMzAyYzM2ZTUxMjgzNmI4NmQ3ZDZlMjQ1NGYzODYyYWFlNTE0MWFiYjg3Yjk0NtdECCg=: 00:25:23.008 
07:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:25:23.008 07:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:23.008 07:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:23.008 07:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:23.008 07:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:23.008 07:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:23.008 07:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:23.008 07:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:23.008 07:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.008 07:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:23.008 07:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:23.008 07:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:23.008 07:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:23.008 07:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:23.009 07:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:23.009 07:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:23.009 07:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:23.009 07:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:23.009 07:21:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:23.009 07:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:23.009 07:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:23.009 07:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:23.009 07:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:23.009 07:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.293 nvme0n1 00:25:23.293 07:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:23.293 07:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:23.293 07:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:23.293 07:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:23.293 07:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.293 07:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:23.293 07:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:23.293 07:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:23.293 07:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:23.293 07:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.293 07:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:23.293 07:21:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:23.293 07:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:25:23.293 07:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:23.293 07:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:23.293 07:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:23.293 07:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:23.293 07:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDNjY2UyZjViYTA3Yjg0MTc4NDAzZTk0YzgwNDkzMzVlZGZhZDBlNTY4NGI3YTJlfx3cbg==: 00:25:23.293 07:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmVkNzUxOTJhMjgzNjliZjUyNjM3YjRiZGY0ZWUxMTZhMGJiNmRkMzNlMjFkNmZmLaVF+g==: 00:25:23.293 07:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:23.293 07:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:23.293 07:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDNjY2UyZjViYTA3Yjg0MTc4NDAzZTk0YzgwNDkzMzVlZGZhZDBlNTY4NGI3YTJlfx3cbg==: 00:25:23.293 07:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmVkNzUxOTJhMjgzNjliZjUyNjM3YjRiZGY0ZWUxMTZhMGJiNmRkMzNlMjFkNmZmLaVF+g==: ]] 00:25:23.293 07:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmVkNzUxOTJhMjgzNjliZjUyNjM3YjRiZGY0ZWUxMTZhMGJiNmRkMzNlMjFkNmZmLaVF+g==: 00:25:23.293 07:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:25:23.293 07:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:23.293 07:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:23.293 07:21:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:23.293 07:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:23.293 07:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:23.293 07:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:23.293 07:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:23.293 07:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.293 07:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:23.293 07:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:23.293 07:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:23.293 07:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:23.293 07:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:23.293 07:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:23.293 07:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:23.293 07:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:23.293 07:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:23.293 07:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:23.293 07:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:23.293 07:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:23.293 07:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:23.293 07:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:23.293 07:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.293 nvme0n1 00:25:23.293 07:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:23.293 07:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:23.293 07:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:23.293 07:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:23.293 07:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.553 07:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:23.553 07:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:23.553 07:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:23.553 07:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:23.553 07:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.553 07:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:23.553 07:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:23.553 07:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:25:23.553 07:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:23.553 07:21:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:23.553 07:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:23.553 07:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:23.553 07:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDYwMDY2YTFkNzY2ZGJjY2Y5NmNmYTllMzE5NzU3MDS3GpaZ: 00:25:23.553 07:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTY4NzQwYWRlMDRhMDhiYWFhMDRiMjhkZTY4MjBmN2MvBSOz: 00:25:23.553 07:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:23.553 07:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:23.553 07:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDYwMDY2YTFkNzY2ZGJjY2Y5NmNmYTllMzE5NzU3MDS3GpaZ: 00:25:23.553 07:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTY4NzQwYWRlMDRhMDhiYWFhMDRiMjhkZTY4MjBmN2MvBSOz: ]] 00:25:23.553 07:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTY4NzQwYWRlMDRhMDhiYWFhMDRiMjhkZTY4MjBmN2MvBSOz: 00:25:23.553 07:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:25:23.553 07:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:23.553 07:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:23.553 07:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:23.553 07:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:23.553 07:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:23.553 07:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 
00:25:23.553 07:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:23.553 07:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.553 07:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:23.553 07:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:23.553 07:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:23.553 07:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:23.553 07:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:23.553 07:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:23.553 07:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:23.553 07:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:23.554 07:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:23.554 07:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:23.554 07:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:23.554 07:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:23.554 07:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:23.554 07:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:23.554 07:21:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.554 nvme0n1 00:25:23.554 07:21:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:23.554 07:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:23.554 07:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:23.554 07:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:23.554 07:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.814 07:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:23.814 07:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:23.814 07:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:23.814 07:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:23.814 07:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.814 07:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:23.814 07:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:23.814 07:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:25:23.814 07:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:23.814 07:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:23.814 07:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:23.814 07:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:23.814 07:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Mjg5MjQzZmFiNmEwODY3ZGI5ZGQ3YTkyMjQyZTZkYWE0NTgwMTMwNTk1MDVhNjQ34z8nYg==: 00:25:23.814 07:21:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjQ1YmU1ZjgxNzRkNzQ0NGYwOTZmMWVhYWYzNTc1NTA63hmQ: 00:25:23.814 07:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:23.814 07:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:23.814 07:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Mjg5MjQzZmFiNmEwODY3ZGI5ZGQ3YTkyMjQyZTZkYWE0NTgwMTMwNTk1MDVhNjQ34z8nYg==: 00:25:23.814 07:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjQ1YmU1ZjgxNzRkNzQ0NGYwOTZmMWVhYWYzNTc1NTA63hmQ: ]] 00:25:23.814 07:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjQ1YmU1ZjgxNzRkNzQ0NGYwOTZmMWVhYWYzNTc1NTA63hmQ: 00:25:23.814 07:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:25:23.814 07:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:23.814 07:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:23.814 07:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:23.814 07:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:23.814 07:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:23.814 07:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:23.814 07:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:23.814 07:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.814 07:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:23.814 07:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # 
get_main_ns_ip 00:25:23.814 07:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:23.814 07:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:23.814 07:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:23.814 07:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:23.814 07:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:23.814 07:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:23.814 07:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:23.814 07:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:23.814 07:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:23.814 07:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:23.814 07:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:23.814 07:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:23.814 07:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.814 nvme0n1 00:25:23.814 07:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:23.814 07:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:23.814 07:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:23.814 07:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 
00:25:23.814 07:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.814 07:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:24.074 07:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:24.074 07:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:24.074 07:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:24.074 07:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.074 07:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:24.074 07:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:24.074 07:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:25:24.074 07:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:24.074 07:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:24.074 07:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:24.074 07:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:24.074 07:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OWQ1ZWU2MjgzNjlhMmMyMjJhZDI5MmYwYjU3MzhiYTMzNmViYmMwNDg2ZDVjMjhmZGMwN2U4NDk3NmVjNDk2MYcacz8=: 00:25:24.074 07:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:24.074 07:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:24.074 07:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:24.074 07:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:OWQ1ZWU2MjgzNjlhMmMyMjJhZDI5MmYwYjU3MzhiYTMzNmViYmMwNDg2ZDVjMjhmZGMwN2U4NDk3NmVjNDk2MYcacz8=: 00:25:24.074 07:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:24.074 07:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:25:24.074 07:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:24.074 07:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:24.074 07:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:24.074 07:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:24.074 07:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:24.074 07:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:24.074 07:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:24.074 07:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.074 07:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:24.074 07:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:24.074 07:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:24.074 07:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:24.074 07:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:24.074 07:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:24.074 07:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:24.075 07:21:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:24.075 07:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:24.075 07:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:24.075 07:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:24.075 07:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:24.075 07:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:24.075 07:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:24.075 07:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.075 nvme0n1 00:25:24.075 07:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:24.075 07:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:24.075 07:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:24.075 07:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:24.075 07:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.334 07:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:24.334 07:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:24.334 07:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:24.334 07:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:24.334 07:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:25:24.334 07:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:24.334 07:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:24.334 07:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:24.334 07:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:25:24.334 07:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:24.334 07:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:24.334 07:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:24.334 07:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:24.334 07:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Mjk0NmE0NGU4NzRjNWM3ODA3NTEwMjM0YTIzNTRlYmR1lp2A: 00:25:24.334 07:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTlhMDc1ZGQ5OGMzYTRkZTlmMzAyYzM2ZTUxMjgzNmI4NmQ3ZDZlMjQ1NGYzODYyYWFlNTE0MWFiYjg3Yjk0NtdECCg=: 00:25:24.334 07:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:24.334 07:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:24.334 07:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Mjk0NmE0NGU4NzRjNWM3ODA3NTEwMjM0YTIzNTRlYmR1lp2A: 00:25:24.334 07:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTlhMDc1ZGQ5OGMzYTRkZTlmMzAyYzM2ZTUxMjgzNmI4NmQ3ZDZlMjQ1NGYzODYyYWFlNTE0MWFiYjg3Yjk0NtdECCg=: ]] 00:25:24.334 07:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTlhMDc1ZGQ5OGMzYTRkZTlmMzAyYzM2ZTUxMjgzNmI4NmQ3ZDZlMjQ1NGYzODYyYWFlNTE0MWFiYjg3Yjk0NtdECCg=: 00:25:24.334 07:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:25:24.334 07:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:24.334 07:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:24.334 07:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:24.334 07:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:24.334 07:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:24.334 07:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:24.334 07:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:24.334 07:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.334 07:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:24.334 07:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:24.334 07:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:24.334 07:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:24.334 07:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:24.334 07:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:24.334 07:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:24.334 07:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:24.334 07:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:24.334 07:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # 
ip=NVMF_INITIATOR_IP 00:25:24.335 07:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:24.335 07:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:24.335 07:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:24.335 07:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:24.335 07:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.594 nvme0n1 00:25:24.594 07:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:24.594 07:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:24.594 07:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:24.594 07:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:24.594 07:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.594 07:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:24.594 07:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:24.594 07:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:24.594 07:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:24.594 07:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.594 07:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:24.594 07:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 
00:25:24.594 07:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:25:24.594 07:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:24.594 07:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:24.594 07:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:24.594 07:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:24.594 07:21:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDNjY2UyZjViYTA3Yjg0MTc4NDAzZTk0YzgwNDkzMzVlZGZhZDBlNTY4NGI3YTJlfx3cbg==: 00:25:24.594 07:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmVkNzUxOTJhMjgzNjliZjUyNjM3YjRiZGY0ZWUxMTZhMGJiNmRkMzNlMjFkNmZmLaVF+g==: 00:25:24.595 07:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:24.595 07:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:24.595 07:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDNjY2UyZjViYTA3Yjg0MTc4NDAzZTk0YzgwNDkzMzVlZGZhZDBlNTY4NGI3YTJlfx3cbg==: 00:25:24.595 07:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmVkNzUxOTJhMjgzNjliZjUyNjM3YjRiZGY0ZWUxMTZhMGJiNmRkMzNlMjFkNmZmLaVF+g==: ]] 00:25:24.595 07:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmVkNzUxOTJhMjgzNjliZjUyNjM3YjRiZGY0ZWUxMTZhMGJiNmRkMzNlMjFkNmZmLaVF+g==: 00:25:24.595 07:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:25:24.595 07:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:24.595 07:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:24.595 07:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:24.595 
07:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:24.595 07:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:24.595 07:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:24.595 07:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:24.595 07:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.595 07:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:24.595 07:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:24.595 07:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:24.595 07:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:24.595 07:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:24.595 07:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:24.595 07:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:24.595 07:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:24.595 07:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:24.595 07:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:24.595 07:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:24.595 07:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:24.595 07:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 
-q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:24.595 07:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:24.595 07:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.866 nvme0n1 00:25:24.866 07:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:24.866 07:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:24.866 07:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:24.866 07:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:24.866 07:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.866 07:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:24.866 07:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:24.866 07:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:24.866 07:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:24.866 07:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.866 07:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:24.866 07:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:24.866 07:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:25:24.866 07:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:24.866 07:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:24.866 07:21:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:24.866 07:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:24.866 07:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDYwMDY2YTFkNzY2ZGJjY2Y5NmNmYTllMzE5NzU3MDS3GpaZ: 00:25:24.866 07:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTY4NzQwYWRlMDRhMDhiYWFhMDRiMjhkZTY4MjBmN2MvBSOz: 00:25:24.866 07:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:24.866 07:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:24.866 07:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDYwMDY2YTFkNzY2ZGJjY2Y5NmNmYTllMzE5NzU3MDS3GpaZ: 00:25:24.866 07:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTY4NzQwYWRlMDRhMDhiYWFhMDRiMjhkZTY4MjBmN2MvBSOz: ]] 00:25:24.866 07:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTY4NzQwYWRlMDRhMDhiYWFhMDRiMjhkZTY4MjBmN2MvBSOz: 00:25:24.866 07:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:25:24.866 07:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:24.866 07:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:24.866 07:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:24.866 07:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:24.866 07:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:24.866 07:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:24.866 07:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:25:24.866 07:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.866 07:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:24.866 07:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:24.866 07:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:24.866 07:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:24.866 07:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:24.866 07:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:24.866 07:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:24.866 07:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:24.866 07:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:24.866 07:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:24.866 07:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:24.866 07:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:24.866 07:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:24.866 07:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:24.866 07:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.125 nvme0n1 00:25:25.125 07:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:25.125 07:21:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:25.125 07:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:25.125 07:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:25.125 07:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.125 07:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:25.125 07:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:25.125 07:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:25.125 07:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:25.125 07:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.126 07:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:25.126 07:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:25.126 07:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:25:25.126 07:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:25.126 07:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:25.126 07:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:25.126 07:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:25.126 07:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Mjg5MjQzZmFiNmEwODY3ZGI5ZGQ3YTkyMjQyZTZkYWE0NTgwMTMwNTk1MDVhNjQ34z8nYg==: 00:25:25.126 07:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjQ1YmU1ZjgxNzRkNzQ0NGYwOTZmMWVhYWYzNTc1NTA63hmQ: 00:25:25.126 
07:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:25.126 07:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:25.126 07:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Mjg5MjQzZmFiNmEwODY3ZGI5ZGQ3YTkyMjQyZTZkYWE0NTgwMTMwNTk1MDVhNjQ34z8nYg==: 00:25:25.126 07:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjQ1YmU1ZjgxNzRkNzQ0NGYwOTZmMWVhYWYzNTc1NTA63hmQ: ]] 00:25:25.126 07:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjQ1YmU1ZjgxNzRkNzQ0NGYwOTZmMWVhYWYzNTc1NTA63hmQ: 00:25:25.126 07:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:25:25.126 07:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:25.126 07:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:25.126 07:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:25.126 07:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:25.126 07:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:25.126 07:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:25.126 07:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:25.126 07:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.126 07:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:25.126 07:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:25.126 07:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:25.126 07:21:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:25.385 07:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:25.385 07:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:25.385 07:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:25.385 07:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:25.385 07:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:25.385 07:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:25.385 07:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:25.385 07:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:25.385 07:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:25.385 07:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:25.385 07:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.385 nvme0n1 00:25:25.385 07:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:25.385 07:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:25.385 07:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:25.385 07:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:25.385 07:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.646 07:21:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:25.646 07:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:25.646 07:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:25.646 07:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:25.646 07:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.646 07:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:25.646 07:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:25.646 07:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:25:25.646 07:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:25.646 07:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:25.646 07:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:25.646 07:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:25.646 07:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OWQ1ZWU2MjgzNjlhMmMyMjJhZDI5MmYwYjU3MzhiYTMzNmViYmMwNDg2ZDVjMjhmZGMwN2U4NDk3NmVjNDk2MYcacz8=: 00:25:25.646 07:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:25.646 07:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:25.646 07:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:25.646 07:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OWQ1ZWU2MjgzNjlhMmMyMjJhZDI5MmYwYjU3MzhiYTMzNmViYmMwNDg2ZDVjMjhmZGMwN2U4NDk3NmVjNDk2MYcacz8=: 00:25:25.646 07:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' 
]] 00:25:25.646 07:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:25:25.646 07:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:25.646 07:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:25.646 07:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:25.646 07:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:25.646 07:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:25.646 07:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:25.646 07:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:25.646 07:21:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.646 07:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:25.646 07:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:25.646 07:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:25.646 07:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:25.646 07:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:25.646 07:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:25.646 07:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:25.646 07:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:25.646 07:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:25.646 
07:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:25.646 07:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:25.646 07:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:25.646 07:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:25.646 07:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:25.646 07:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.906 nvme0n1 00:25:25.906 07:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:25.906 07:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:25.906 07:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:25.906 07:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:25.906 07:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.906 07:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:25.906 07:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:25.906 07:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:25.906 07:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:25.906 07:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.906 07:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:25.906 07:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:25.906 07:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:25.906 07:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:25:25.906 07:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:25.906 07:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:25.906 07:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:25.906 07:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:25.906 07:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Mjk0NmE0NGU4NzRjNWM3ODA3NTEwMjM0YTIzNTRlYmR1lp2A: 00:25:25.906 07:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTlhMDc1ZGQ5OGMzYTRkZTlmMzAyYzM2ZTUxMjgzNmI4NmQ3ZDZlMjQ1NGYzODYyYWFlNTE0MWFiYjg3Yjk0NtdECCg=: 00:25:25.906 07:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:25.906 07:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:25.906 07:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Mjk0NmE0NGU4NzRjNWM3ODA3NTEwMjM0YTIzNTRlYmR1lp2A: 00:25:25.906 07:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTlhMDc1ZGQ5OGMzYTRkZTlmMzAyYzM2ZTUxMjgzNmI4NmQ3ZDZlMjQ1NGYzODYyYWFlNTE0MWFiYjg3Yjk0NtdECCg=: ]] 00:25:25.906 07:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTlhMDc1ZGQ5OGMzYTRkZTlmMzAyYzM2ZTUxMjgzNmI4NmQ3ZDZlMjQ1NGYzODYyYWFlNTE0MWFiYjg3Yjk0NtdECCg=: 00:25:25.906 07:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:25:25.906 07:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:25.906 07:21:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:25.906 07:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:25.906 07:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:25.906 07:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:25.906 07:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:25.906 07:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:25.906 07:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.906 07:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:25.906 07:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:25.906 07:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:25.906 07:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:25.906 07:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:25.906 07:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:25.906 07:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:25.906 07:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:25.906 07:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:25.906 07:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:25.906 07:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:25.906 07:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:25.906 07:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:25.906 07:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:25.906 07:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.474 nvme0n1 00:25:26.474 07:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:26.474 07:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:26.474 07:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:26.474 07:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:26.474 07:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.474 07:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:26.474 07:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:26.474 07:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:26.474 07:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:26.474 07:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.474 07:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:26.474 07:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:26.474 07:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:25:26.474 07:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:26.474 07:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:26.474 07:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:26.474 07:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:26.474 07:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDNjY2UyZjViYTA3Yjg0MTc4NDAzZTk0YzgwNDkzMzVlZGZhZDBlNTY4NGI3YTJlfx3cbg==: 00:25:26.474 07:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmVkNzUxOTJhMjgzNjliZjUyNjM3YjRiZGY0ZWUxMTZhMGJiNmRkMzNlMjFkNmZmLaVF+g==: 00:25:26.474 07:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:26.474 07:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:26.474 07:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDNjY2UyZjViYTA3Yjg0MTc4NDAzZTk0YzgwNDkzMzVlZGZhZDBlNTY4NGI3YTJlfx3cbg==: 00:25:26.474 07:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmVkNzUxOTJhMjgzNjliZjUyNjM3YjRiZGY0ZWUxMTZhMGJiNmRkMzNlMjFkNmZmLaVF+g==: ]] 00:25:26.474 07:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmVkNzUxOTJhMjgzNjliZjUyNjM3YjRiZGY0ZWUxMTZhMGJiNmRkMzNlMjFkNmZmLaVF+g==: 00:25:26.474 07:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:25:26.474 07:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:26.474 07:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:26.474 07:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:26.474 07:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:26.474 07:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:26.474 07:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:26.474 07:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:26.474 07:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.474 07:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:26.474 07:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:26.474 07:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:26.474 07:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:26.474 07:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:26.474 07:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:26.474 07:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:26.474 07:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:26.474 07:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:26.474 07:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:26.474 07:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:26.474 07:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:26.474 07:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:26.474 07:21:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:26.474 07:21:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.734 nvme0n1 00:25:26.734 07:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:26.734 07:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:26.734 07:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:26.734 07:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:26.734 07:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.734 07:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:26.734 07:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:26.734 07:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:26.734 07:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:26.734 07:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.734 07:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:26.734 07:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:26.734 07:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:25:26.734 07:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:26.734 07:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:26.734 07:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:26.734 07:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- 
# keyid=2 00:25:26.734 07:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDYwMDY2YTFkNzY2ZGJjY2Y5NmNmYTllMzE5NzU3MDS3GpaZ: 00:25:26.734 07:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTY4NzQwYWRlMDRhMDhiYWFhMDRiMjhkZTY4MjBmN2MvBSOz: 00:25:26.734 07:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:26.734 07:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:26.734 07:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDYwMDY2YTFkNzY2ZGJjY2Y5NmNmYTllMzE5NzU3MDS3GpaZ: 00:25:26.734 07:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTY4NzQwYWRlMDRhMDhiYWFhMDRiMjhkZTY4MjBmN2MvBSOz: ]] 00:25:26.734 07:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTY4NzQwYWRlMDRhMDhiYWFhMDRiMjhkZTY4MjBmN2MvBSOz: 00:25:26.734 07:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:25:26.734 07:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:26.734 07:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:26.734 07:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:26.734 07:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:26.734 07:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:26.734 07:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:26.734 07:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:26.734 07:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.734 07:21:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:26.734 07:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:26.734 07:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:26.734 07:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:26.735 07:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:26.735 07:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:26.735 07:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:26.735 07:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:26.735 07:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:26.735 07:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:26.735 07:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:26.735 07:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:26.735 07:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:26.735 07:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:26.735 07:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.304 nvme0n1 00:25:27.304 07:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:27.304 07:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:27.304 07:21:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:27.304 07:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:27.304 07:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.304 07:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:27.304 07:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:27.304 07:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:27.304 07:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:27.304 07:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.304 07:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:27.304 07:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:27.304 07:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:25:27.304 07:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:27.304 07:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:27.304 07:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:27.304 07:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:27.304 07:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Mjg5MjQzZmFiNmEwODY3ZGI5ZGQ3YTkyMjQyZTZkYWE0NTgwMTMwNTk1MDVhNjQ34z8nYg==: 00:25:27.304 07:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjQ1YmU1ZjgxNzRkNzQ0NGYwOTZmMWVhYWYzNTc1NTA63hmQ: 00:25:27.304 07:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:27.304 07:21:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:27.304 07:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Mjg5MjQzZmFiNmEwODY3ZGI5ZGQ3YTkyMjQyZTZkYWE0NTgwMTMwNTk1MDVhNjQ34z8nYg==: 00:25:27.304 07:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjQ1YmU1ZjgxNzRkNzQ0NGYwOTZmMWVhYWYzNTc1NTA63hmQ: ]] 00:25:27.304 07:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjQ1YmU1ZjgxNzRkNzQ0NGYwOTZmMWVhYWYzNTc1NTA63hmQ: 00:25:27.304 07:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:25:27.304 07:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:27.304 07:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:27.304 07:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:27.304 07:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:27.304 07:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:27.304 07:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:27.304 07:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:27.304 07:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.304 07:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:27.304 07:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:27.304 07:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:27.304 07:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:27.304 07:21:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:27.304 07:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:27.304 07:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:27.304 07:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:27.304 07:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:27.304 07:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:27.304 07:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:27.304 07:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:27.304 07:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:27.304 07:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:27.304 07:21:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.563 nvme0n1 00:25:27.563 07:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:27.563 07:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:27.563 07:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:27.563 07:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:27.563 07:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.563 07:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:27.822 07:21:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:27.822 07:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:27.822 07:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:27.822 07:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.822 07:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:27.822 07:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:27.822 07:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:25:27.822 07:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:27.822 07:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:27.822 07:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:27.822 07:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:27.822 07:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OWQ1ZWU2MjgzNjlhMmMyMjJhZDI5MmYwYjU3MzhiYTMzNmViYmMwNDg2ZDVjMjhmZGMwN2U4NDk3NmVjNDk2MYcacz8=: 00:25:27.822 07:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:27.822 07:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:27.822 07:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:27.822 07:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OWQ1ZWU2MjgzNjlhMmMyMjJhZDI5MmYwYjU3MzhiYTMzNmViYmMwNDg2ZDVjMjhmZGMwN2U4NDk3NmVjNDk2MYcacz8=: 00:25:27.822 07:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:27.822 07:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate 
sha256 ffdhe6144 4 00:25:27.822 07:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:27.822 07:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:27.822 07:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:27.822 07:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:27.822 07:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:27.822 07:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:27.822 07:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:27.822 07:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.822 07:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:27.822 07:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:27.822 07:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:27.822 07:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:27.822 07:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:27.822 07:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:27.822 07:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:27.822 07:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:27.822 07:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:27.822 07:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:27.822 07:21:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:27.822 07:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:27.822 07:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:27.822 07:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:27.822 07:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.081 nvme0n1 00:25:28.081 07:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:28.081 07:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:28.081 07:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:28.081 07:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:28.081 07:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.081 07:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:28.081 07:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:28.081 07:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:28.081 07:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:28.081 07:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.081 07:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:28.081 07:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:28.081 07:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:28.081 07:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:25:28.081 07:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:28.081 07:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:28.081 07:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:28.081 07:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:28.081 07:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Mjk0NmE0NGU4NzRjNWM3ODA3NTEwMjM0YTIzNTRlYmR1lp2A: 00:25:28.081 07:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTlhMDc1ZGQ5OGMzYTRkZTlmMzAyYzM2ZTUxMjgzNmI4NmQ3ZDZlMjQ1NGYzODYyYWFlNTE0MWFiYjg3Yjk0NtdECCg=: 00:25:28.081 07:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:28.081 07:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:28.081 07:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Mjk0NmE0NGU4NzRjNWM3ODA3NTEwMjM0YTIzNTRlYmR1lp2A: 00:25:28.082 07:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTlhMDc1ZGQ5OGMzYTRkZTlmMzAyYzM2ZTUxMjgzNmI4NmQ3ZDZlMjQ1NGYzODYyYWFlNTE0MWFiYjg3Yjk0NtdECCg=: ]] 00:25:28.082 07:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTlhMDc1ZGQ5OGMzYTRkZTlmMzAyYzM2ZTUxMjgzNmI4NmQ3ZDZlMjQ1NGYzODYyYWFlNTE0MWFiYjg3Yjk0NtdECCg=: 00:25:28.082 07:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:25:28.082 07:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:28.082 07:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:28.082 07:21:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:25:28.082 07:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:25:28.082 07:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:28.082 07:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:25:28.082 07:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:28.082 07:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:28.082 07:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:28.082 07:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:28.082 07:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:25:28.082 07:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:25:28.082 07:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:25:28.082 07:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:28.082 07:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:28.082 07:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:25:28.082 07:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:28.082 07:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:25:28.082 07:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:25:28.082 07:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:25:28.082 07:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:25:28.082 07:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:28.082 07:21:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:29.020 nvme0n1
00:25:29.020 07:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:29.020 07:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:29.020 07:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:29.020 07:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:29.020 07:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:29.020 07:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:29.020 07:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:29.020 07:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:29.020 07:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:29.020 07:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:29.020 07:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:29.020 07:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:29.020 07:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1
00:25:29.021 07:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:29.021 07:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:25:29.021 07:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:25:29.021 07:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:25:29.021 07:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDNjY2UyZjViYTA3Yjg0MTc4NDAzZTk0YzgwNDkzMzVlZGZhZDBlNTY4NGI3YTJlfx3cbg==:
00:25:29.021 07:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmVkNzUxOTJhMjgzNjliZjUyNjM3YjRiZGY0ZWUxMTZhMGJiNmRkMzNlMjFkNmZmLaVF+g==:
00:25:29.021 07:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:25:29.021 07:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:25:29.021 07:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDNjY2UyZjViYTA3Yjg0MTc4NDAzZTk0YzgwNDkzMzVlZGZhZDBlNTY4NGI3YTJlfx3cbg==:
00:25:29.021 07:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmVkNzUxOTJhMjgzNjliZjUyNjM3YjRiZGY0ZWUxMTZhMGJiNmRkMzNlMjFkNmZmLaVF+g==: ]]
00:25:29.021 07:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmVkNzUxOTJhMjgzNjliZjUyNjM3YjRiZGY0ZWUxMTZhMGJiNmRkMzNlMjFkNmZmLaVF+g==:
00:25:29.021 07:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1
00:25:29.021 07:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:29.021 07:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:25:29.021 07:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:25:29.021 07:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:25:29.021 07:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:29.021 07:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:25:29.021 07:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:29.021 07:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:29.021 07:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:29.021 07:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:29.021 07:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:25:29.021 07:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:25:29.021 07:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:25:29.021 07:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:29.021 07:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:29.021 07:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:25:29.021 07:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:29.021 07:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:25:29.021 07:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:25:29.021 07:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:25:29.021 07:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:25:29.021 07:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:29.021 07:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:29.590 nvme0n1
00:25:29.590 07:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:29.590 07:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:29.590 07:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:29.590 07:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:29.590 07:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:29.590 07:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:29.590 07:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:29.590 07:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:29.590 07:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:29.590 07:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:29.590 07:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:29.590 07:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:29.590 07:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2
00:25:29.590 07:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:29.590 07:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:25:29.590 07:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:25:29.590 07:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:25:29.590 07:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDYwMDY2YTFkNzY2ZGJjY2Y5NmNmYTllMzE5NzU3MDS3GpaZ:
00:25:29.590 07:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTY4NzQwYWRlMDRhMDhiYWFhMDRiMjhkZTY4MjBmN2MvBSOz:
00:25:29.590 07:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:25:29.590 07:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:25:29.590 07:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDYwMDY2YTFkNzY2ZGJjY2Y5NmNmYTllMzE5NzU3MDS3GpaZ:
00:25:29.590 07:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTY4NzQwYWRlMDRhMDhiYWFhMDRiMjhkZTY4MjBmN2MvBSOz: ]]
00:25:29.590 07:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTY4NzQwYWRlMDRhMDhiYWFhMDRiMjhkZTY4MjBmN2MvBSOz:
00:25:29.590 07:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2
00:25:29.590 07:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:29.590 07:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:25:29.590 07:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:25:29.590 07:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:25:29.590 07:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:29.590 07:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:25:29.590 07:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:29.590 07:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:29.590 07:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:29.590 07:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:29.590 07:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:25:29.590 07:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:25:29.590 07:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:25:29.590 07:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:29.590 07:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:29.590 07:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:25:29.590 07:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:29.590 07:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:25:29.590 07:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:25:29.590 07:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:25:29.590 07:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:25:29.590 07:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:29.590 07:21:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:30.159 nvme0n1
00:25:30.159 07:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:30.159 07:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:30.159 07:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:30.159 07:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:30.159 07:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:30.159 07:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:30.159 07:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:30.159 07:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:30.159 07:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:30.159 07:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:30.159 07:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:30.159 07:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:30.159 07:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3
00:25:30.159 07:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:30.159 07:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:25:30.159 07:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:25:30.159 07:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:25:30.159 07:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Mjg5MjQzZmFiNmEwODY3ZGI5ZGQ3YTkyMjQyZTZkYWE0NTgwMTMwNTk1MDVhNjQ34z8nYg==:
00:25:30.159 07:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjQ1YmU1ZjgxNzRkNzQ0NGYwOTZmMWVhYWYzNTc1NTA63hmQ:
00:25:30.159 07:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:25:30.159 07:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:25:30.159 07:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Mjg5MjQzZmFiNmEwODY3ZGI5ZGQ3YTkyMjQyZTZkYWE0NTgwMTMwNTk1MDVhNjQ34z8nYg==:
00:25:30.159 07:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjQ1YmU1ZjgxNzRkNzQ0NGYwOTZmMWVhYWYzNTc1NTA63hmQ: ]]
00:25:30.159 07:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjQ1YmU1ZjgxNzRkNzQ0NGYwOTZmMWVhYWYzNTc1NTA63hmQ:
00:25:30.159 07:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3
00:25:30.159 07:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:30.159 07:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:25:30.159 07:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:25:30.159 07:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:25:30.159 07:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:30.159 07:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:25:30.159 07:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:30.159 07:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:30.159 07:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:30.159 07:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:30.159 07:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:25:30.159 07:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:25:30.159 07:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:25:30.159 07:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:30.159 07:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:30.159 07:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:25:30.159 07:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:30.159 07:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:25:30.159 07:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:25:30.159 07:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:25:30.159 07:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:25:30.159 07:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:30.159 07:21:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:30.729 nvme0n1
00:25:30.729 07:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:30.729 07:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:30.729 07:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:30.729 07:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:30.729 07:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:30.729 07:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:30.729 07:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:30.729 07:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:30.729 07:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:30.729 07:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:30.729 07:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:30.729 07:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:30.729 07:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4
00:25:30.729 07:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:30.729 07:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:25:30.729 07:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:25:30.729 07:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:25:30.729 07:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OWQ1ZWU2MjgzNjlhMmMyMjJhZDI5MmYwYjU3MzhiYTMzNmViYmMwNDg2ZDVjMjhmZGMwN2U4NDk3NmVjNDk2MYcacz8=:
00:25:30.729 07:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:25:30.729 07:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:25:30.729 07:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:25:30.729 07:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OWQ1ZWU2MjgzNjlhMmMyMjJhZDI5MmYwYjU3MzhiYTMzNmViYmMwNDg2ZDVjMjhmZGMwN2U4NDk3NmVjNDk2MYcacz8=:
00:25:30.729 07:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:25:30.729 07:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4
00:25:30.729 07:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:30.729 07:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:25:30.729 07:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:25:30.729 07:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:25:30.729 07:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:30.729 07:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:25:30.729 07:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:30.729 07:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:30.729 07:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:30.729 07:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:30.729 07:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:25:30.729 07:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:25:30.729 07:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:25:30.729 07:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:30.729 07:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:30.729 07:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:25:30.729 07:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:30.729 07:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:25:30.729 07:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:25:30.729 07:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:25:30.729 07:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:25:30.729 07:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:30.729 07:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:31.298 nvme0n1
00:25:31.298 07:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:31.298 07:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:31.298 07:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:31.298 07:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:31.298 07:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:31.558 07:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:31.558 07:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:31.558 07:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:31.558 07:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:31.558 07:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:31.558 07:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:31.558 07:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}"
00:25:31.558 07:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:25:31.558 07:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:31.558 07:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0
00:25:31.558 07:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:31.558 07:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:25:31.558 07:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:25:31.558 07:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:25:31.558 07:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Mjk0NmE0NGU4NzRjNWM3ODA3NTEwMjM0YTIzNTRlYmR1lp2A:
00:25:31.558 07:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTlhMDc1ZGQ5OGMzYTRkZTlmMzAyYzM2ZTUxMjgzNmI4NmQ3ZDZlMjQ1NGYzODYyYWFlNTE0MWFiYjg3Yjk0NtdECCg=:
00:25:31.558 07:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:25:31.558 07:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:25:31.558 07:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Mjk0NmE0NGU4NzRjNWM3ODA3NTEwMjM0YTIzNTRlYmR1lp2A:
00:25:31.558 07:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTlhMDc1ZGQ5OGMzYTRkZTlmMzAyYzM2ZTUxMjgzNmI4NmQ3ZDZlMjQ1NGYzODYyYWFlNTE0MWFiYjg3Yjk0NtdECCg=: ]]
00:25:31.558 07:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTlhMDc1ZGQ5OGMzYTRkZTlmMzAyYzM2ZTUxMjgzNmI4NmQ3ZDZlMjQ1NGYzODYyYWFlNTE0MWFiYjg3Yjk0NtdECCg=:
00:25:31.558 07:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0
00:25:31.558 07:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:31.558 07:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:25:31.558 07:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:25:31.558 07:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:25:31.558 07:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:31.558 07:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:25:31.558 07:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:31.558 07:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:31.558 07:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:31.558 07:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:31.558 07:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:25:31.558 07:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:25:31.558 07:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:25:31.558 07:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:31.558 07:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:31.558 07:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:25:31.558 07:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:31.558 07:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:25:31.558 07:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:25:31.558 07:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:25:31.558 07:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:25:31.558 07:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:31.558 07:21:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:31.558 nvme0n1
00:25:31.558 07:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:31.558 07:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:31.558 07:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:31.558 07:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:31.558 07:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:31.558 07:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:31.816 07:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:31.816 07:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:31.816 07:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:31.816 07:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:31.816 07:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:31.816 07:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:31.816 07:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1
00:25:31.816 07:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:31.816 07:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:25:31.816 07:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:25:31.816 07:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:25:31.816 07:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDNjY2UyZjViYTA3Yjg0MTc4NDAzZTk0YzgwNDkzMzVlZGZhZDBlNTY4NGI3YTJlfx3cbg==:
00:25:31.816 07:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmVkNzUxOTJhMjgzNjliZjUyNjM3YjRiZGY0ZWUxMTZhMGJiNmRkMzNlMjFkNmZmLaVF+g==:
00:25:31.816 07:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:25:31.816 07:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:25:31.816 07:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDNjY2UyZjViYTA3Yjg0MTc4NDAzZTk0YzgwNDkzMzVlZGZhZDBlNTY4NGI3YTJlfx3cbg==:
00:25:31.816 07:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmVkNzUxOTJhMjgzNjliZjUyNjM3YjRiZGY0ZWUxMTZhMGJiNmRkMzNlMjFkNmZmLaVF+g==: ]]
00:25:31.816 07:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmVkNzUxOTJhMjgzNjliZjUyNjM3YjRiZGY0ZWUxMTZhMGJiNmRkMzNlMjFkNmZmLaVF+g==:
00:25:31.816 07:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1
00:25:31.816 07:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:31.816 07:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:25:31.816 07:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:25:31.816 07:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:25:31.816 07:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:31.816 07:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:25:31.816 07:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:31.816 07:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:31.816 07:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:31.816 07:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:31.816 07:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:25:31.816 07:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:25:31.816 07:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:25:31.816 07:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:31.816 07:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:31.816 07:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:25:31.816 07:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:31.816 07:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:25:31.816 07:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:25:31.816 07:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:25:31.816 07:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:25:31.816 07:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:31.816 07:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:31.816 nvme0n1
00:25:31.816 07:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:31.816 07:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:31.817 07:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:31.817 07:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:31.817 07:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:31.817 07:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:31.817 07:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:31.817 07:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:31.817 07:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:31.817 07:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:31.817 07:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:31.817 07:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:31.817 07:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2
00:25:31.817 07:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:31.817 07:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:25:31.817 07:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:25:31.817 07:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:25:31.817 07:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDYwMDY2YTFkNzY2ZGJjY2Y5NmNmYTllMzE5NzU3MDS3GpaZ:
00:25:31.817 07:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTY4NzQwYWRlMDRhMDhiYWFhMDRiMjhkZTY4MjBmN2MvBSOz:
00:25:31.817 07:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:25:31.817 07:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:25:31.817 07:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDYwMDY2YTFkNzY2ZGJjY2Y5NmNmYTllMzE5NzU3MDS3GpaZ:
00:25:31.817 07:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTY4NzQwYWRlMDRhMDhiYWFhMDRiMjhkZTY4MjBmN2MvBSOz: ]]
00:25:31.817 07:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTY4NzQwYWRlMDRhMDhiYWFhMDRiMjhkZTY4MjBmN2MvBSOz:
00:25:31.817 07:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2
00:25:31.817 07:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:31.817 07:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:25:31.817 07:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:25:32.075 07:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:25:32.075 07:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:32.075 07:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:25:32.075 07:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:32.075 07:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:32.075 07:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:32.075 07:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:32.075 07:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:25:32.075 07:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:25:32.075 07:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:25:32.075 07:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:32.075 07:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:32.075 07:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:25:32.075 07:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:32.075 07:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:25:32.075 07:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:25:32.075 07:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:25:32.075 07:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:25:32.075 07:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:32.075 07:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:32.075 nvme0n1
00:25:32.075 07:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:32.075 07:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:32.075 07:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:32.075 07:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:32.075 07:21:36
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.075 07:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:32.075 07:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:32.075 07:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:32.075 07:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:32.075 07:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.075 07:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:32.075 07:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:32.075 07:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:25:32.075 07:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:32.075 07:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:32.075 07:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:32.075 07:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:32.075 07:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Mjg5MjQzZmFiNmEwODY3ZGI5ZGQ3YTkyMjQyZTZkYWE0NTgwMTMwNTk1MDVhNjQ34z8nYg==: 00:25:32.075 07:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjQ1YmU1ZjgxNzRkNzQ0NGYwOTZmMWVhYWYzNTc1NTA63hmQ: 00:25:32.075 07:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:32.075 07:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:32.075 07:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:Mjg5MjQzZmFiNmEwODY3ZGI5ZGQ3YTkyMjQyZTZkYWE0NTgwMTMwNTk1MDVhNjQ34z8nYg==: 00:25:32.075 07:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjQ1YmU1ZjgxNzRkNzQ0NGYwOTZmMWVhYWYzNTc1NTA63hmQ: ]] 00:25:32.076 07:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjQ1YmU1ZjgxNzRkNzQ0NGYwOTZmMWVhYWYzNTc1NTA63hmQ: 00:25:32.076 07:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:25:32.076 07:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:32.076 07:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:32.076 07:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:32.076 07:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:32.076 07:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:32.076 07:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:32.076 07:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:32.076 07:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.076 07:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:32.076 07:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:32.076 07:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:32.076 07:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:32.076 07:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:32.076 07:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:32.076 07:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:32.076 07:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:32.076 07:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:32.076 07:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:32.076 07:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:32.076 07:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:32.076 07:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:32.076 07:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:32.076 07:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.335 nvme0n1 00:25:32.335 07:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:32.335 07:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:32.335 07:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:32.335 07:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:32.335 07:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.335 07:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:32.335 07:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:32.335 07:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:25:32.335 07:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:32.335 07:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.335 07:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:32.335 07:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:32.335 07:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:25:32.335 07:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:32.335 07:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:32.335 07:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:32.335 07:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:32.335 07:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OWQ1ZWU2MjgzNjlhMmMyMjJhZDI5MmYwYjU3MzhiYTMzNmViYmMwNDg2ZDVjMjhmZGMwN2U4NDk3NmVjNDk2MYcacz8=: 00:25:32.335 07:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:32.335 07:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:32.335 07:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:32.335 07:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OWQ1ZWU2MjgzNjlhMmMyMjJhZDI5MmYwYjU3MzhiYTMzNmViYmMwNDg2ZDVjMjhmZGMwN2U4NDk3NmVjNDk2MYcacz8=: 00:25:32.335 07:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:32.335 07:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:25:32.335 07:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:32.335 07:21:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:32.335 07:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:32.335 07:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:32.335 07:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:32.335 07:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:32.335 07:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:32.335 07:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.335 07:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:32.335 07:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:32.335 07:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:32.335 07:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:32.335 07:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:32.335 07:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:32.335 07:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:32.335 07:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:32.335 07:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:32.335 07:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:32.335 07:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:32.335 07:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:32.335 07:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:32.335 07:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:32.335 07:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.594 nvme0n1 00:25:32.594 07:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:32.594 07:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:32.594 07:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:32.594 07:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:32.594 07:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.594 07:21:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:32.594 07:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:32.594 07:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:32.594 07:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:32.594 07:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.594 07:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:32.594 07:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:32.594 07:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:32.594 07:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe3072 0 00:25:32.594 07:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:32.594 07:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:32.594 07:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:32.594 07:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:32.594 07:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Mjk0NmE0NGU4NzRjNWM3ODA3NTEwMjM0YTIzNTRlYmR1lp2A: 00:25:32.594 07:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTlhMDc1ZGQ5OGMzYTRkZTlmMzAyYzM2ZTUxMjgzNmI4NmQ3ZDZlMjQ1NGYzODYyYWFlNTE0MWFiYjg3Yjk0NtdECCg=: 00:25:32.594 07:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:32.594 07:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:32.595 07:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Mjk0NmE0NGU4NzRjNWM3ODA3NTEwMjM0YTIzNTRlYmR1lp2A: 00:25:32.595 07:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTlhMDc1ZGQ5OGMzYTRkZTlmMzAyYzM2ZTUxMjgzNmI4NmQ3ZDZlMjQ1NGYzODYyYWFlNTE0MWFiYjg3Yjk0NtdECCg=: ]] 00:25:32.595 07:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTlhMDc1ZGQ5OGMzYTRkZTlmMzAyYzM2ZTUxMjgzNmI4NmQ3ZDZlMjQ1NGYzODYyYWFlNTE0MWFiYjg3Yjk0NtdECCg=: 00:25:32.595 07:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:25:32.595 07:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:32.595 07:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:32.595 07:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:32.595 07:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
keyid=0 00:25:32.595 07:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:32.595 07:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:32.595 07:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:32.595 07:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.595 07:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:32.595 07:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:32.595 07:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:32.595 07:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:32.595 07:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:32.595 07:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:32.595 07:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:32.595 07:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:32.595 07:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:32.595 07:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:32.595 07:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:32.595 07:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:32.595 07:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:32.595 07:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:32.595 07:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.854 nvme0n1 00:25:32.854 07:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:32.854 07:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:32.854 07:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:32.854 07:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:32.854 07:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.854 07:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:32.854 07:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:32.854 07:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:32.854 07:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:32.854 07:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.854 07:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:32.854 07:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:32.854 07:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:25:32.854 07:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:32.854 07:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:32.854 07:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:32.854 
07:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:32.854 07:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDNjY2UyZjViYTA3Yjg0MTc4NDAzZTk0YzgwNDkzMzVlZGZhZDBlNTY4NGI3YTJlfx3cbg==: 00:25:32.854 07:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmVkNzUxOTJhMjgzNjliZjUyNjM3YjRiZGY0ZWUxMTZhMGJiNmRkMzNlMjFkNmZmLaVF+g==: 00:25:32.854 07:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:32.854 07:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:32.854 07:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDNjY2UyZjViYTA3Yjg0MTc4NDAzZTk0YzgwNDkzMzVlZGZhZDBlNTY4NGI3YTJlfx3cbg==: 00:25:32.854 07:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmVkNzUxOTJhMjgzNjliZjUyNjM3YjRiZGY0ZWUxMTZhMGJiNmRkMzNlMjFkNmZmLaVF+g==: ]] 00:25:32.854 07:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmVkNzUxOTJhMjgzNjliZjUyNjM3YjRiZGY0ZWUxMTZhMGJiNmRkMzNlMjFkNmZmLaVF+g==: 00:25:32.854 07:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:25:32.854 07:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:32.854 07:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:32.854 07:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:32.854 07:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:32.854 07:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:32.854 07:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:32.854 07:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:25:32.854 07:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.854 07:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:32.854 07:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:32.854 07:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:32.854 07:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:32.854 07:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:32.854 07:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:32.854 07:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:32.854 07:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:32.854 07:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:32.854 07:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:32.854 07:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:32.854 07:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:32.854 07:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:32.854 07:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:32.854 07:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.112 nvme0n1 00:25:33.112 07:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:25:33.112 07:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:33.112 07:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:33.112 07:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:33.112 07:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.112 07:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:33.112 07:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:33.112 07:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:33.112 07:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:33.112 07:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.112 07:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:33.112 07:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:33.112 07:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:25:33.112 07:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:33.112 07:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:33.112 07:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:33.112 07:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:33.112 07:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDYwMDY2YTFkNzY2ZGJjY2Y5NmNmYTllMzE5NzU3MDS3GpaZ: 00:25:33.112 07:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTY4NzQwYWRlMDRhMDhiYWFhMDRiMjhkZTY4MjBmN2MvBSOz: 
00:25:33.112 07:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:33.112 07:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:33.112 07:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDYwMDY2YTFkNzY2ZGJjY2Y5NmNmYTllMzE5NzU3MDS3GpaZ: 00:25:33.112 07:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTY4NzQwYWRlMDRhMDhiYWFhMDRiMjhkZTY4MjBmN2MvBSOz: ]] 00:25:33.112 07:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTY4NzQwYWRlMDRhMDhiYWFhMDRiMjhkZTY4MjBmN2MvBSOz: 00:25:33.112 07:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:25:33.112 07:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:33.112 07:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:33.112 07:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:33.112 07:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:33.112 07:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:33.112 07:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:33.112 07:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:33.112 07:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.112 07:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:33.112 07:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:33.112 07:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:33.112 07:21:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:33.112 07:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:33.112 07:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:33.112 07:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:33.112 07:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:33.112 07:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:33.112 07:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:33.112 07:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:33.112 07:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:33.112 07:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:33.112 07:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:33.112 07:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.372 nvme0n1 00:25:33.372 07:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:33.372 07:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:33.372 07:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:33.372 07:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:33.372 07:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.372 07:21:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:33.372 07:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:33.372 07:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:33.372 07:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:33.372 07:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.372 07:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:33.372 07:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:33.372 07:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:25:33.372 07:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:33.372 07:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:33.372 07:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:33.372 07:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:33.372 07:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Mjg5MjQzZmFiNmEwODY3ZGI5ZGQ3YTkyMjQyZTZkYWE0NTgwMTMwNTk1MDVhNjQ34z8nYg==: 00:25:33.372 07:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjQ1YmU1ZjgxNzRkNzQ0NGYwOTZmMWVhYWYzNTc1NTA63hmQ: 00:25:33.372 07:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:33.372 07:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:33.372 07:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Mjg5MjQzZmFiNmEwODY3ZGI5ZGQ3YTkyMjQyZTZkYWE0NTgwMTMwNTk1MDVhNjQ34z8nYg==: 00:25:33.372 07:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # [[ -z DHHC-1:00:NjQ1YmU1ZjgxNzRkNzQ0NGYwOTZmMWVhYWYzNTc1NTA63hmQ: ]] 00:25:33.372 07:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjQ1YmU1ZjgxNzRkNzQ0NGYwOTZmMWVhYWYzNTc1NTA63hmQ: 00:25:33.372 07:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:25:33.372 07:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:33.372 07:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:33.372 07:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:33.372 07:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:33.372 07:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:33.372 07:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:33.372 07:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:33.372 07:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.372 07:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:33.372 07:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:33.372 07:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:33.372 07:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:33.372 07:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:33.372 07:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:33.372 07:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:33.372 07:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:33.372 07:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:33.372 07:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:33.372 07:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:33.372 07:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:33.372 07:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:33.372 07:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:33.372 07:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.631 nvme0n1 00:25:33.632 07:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:33.632 07:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:33.632 07:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:33.632 07:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:33.632 07:21:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.632 07:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:33.632 07:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:33.632 07:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:33.632 07:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:25:33.632 07:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.632 07:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:33.632 07:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:33.632 07:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:25:33.632 07:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:33.632 07:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:33.632 07:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:33.632 07:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:33.632 07:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OWQ1ZWU2MjgzNjlhMmMyMjJhZDI5MmYwYjU3MzhiYTMzNmViYmMwNDg2ZDVjMjhmZGMwN2U4NDk3NmVjNDk2MYcacz8=: 00:25:33.632 07:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:33.632 07:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:33.632 07:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:33.632 07:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OWQ1ZWU2MjgzNjlhMmMyMjJhZDI5MmYwYjU3MzhiYTMzNmViYmMwNDg2ZDVjMjhmZGMwN2U4NDk3NmVjNDk2MYcacz8=: 00:25:33.632 07:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:33.632 07:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:25:33.632 07:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:33.632 07:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:33.632 07:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- 
# dhgroup=ffdhe3072 00:25:33.632 07:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:33.632 07:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:33.632 07:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:33.632 07:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:33.632 07:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.632 07:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:33.632 07:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:33.632 07:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:33.632 07:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:33.632 07:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:33.632 07:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:33.632 07:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:33.632 07:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:33.632 07:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:33.632 07:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:33.632 07:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:33.632 07:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:33.632 07:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 
-t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:33.632 07:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:33.632 07:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.891 nvme0n1 00:25:33.891 07:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:33.891 07:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:33.891 07:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:33.892 07:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:33.892 07:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.892 07:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:33.892 07:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:33.892 07:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:33.892 07:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:33.892 07:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.892 07:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:33.892 07:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:33.892 07:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:33.892 07:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:25:33.892 07:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:33.892 07:21:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:33.892 07:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:33.892 07:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:33.892 07:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Mjk0NmE0NGU4NzRjNWM3ODA3NTEwMjM0YTIzNTRlYmR1lp2A: 00:25:33.892 07:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTlhMDc1ZGQ5OGMzYTRkZTlmMzAyYzM2ZTUxMjgzNmI4NmQ3ZDZlMjQ1NGYzODYyYWFlNTE0MWFiYjg3Yjk0NtdECCg=: 00:25:33.892 07:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:33.892 07:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:33.892 07:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Mjk0NmE0NGU4NzRjNWM3ODA3NTEwMjM0YTIzNTRlYmR1lp2A: 00:25:33.892 07:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTlhMDc1ZGQ5OGMzYTRkZTlmMzAyYzM2ZTUxMjgzNmI4NmQ3ZDZlMjQ1NGYzODYyYWFlNTE0MWFiYjg3Yjk0NtdECCg=: ]] 00:25:33.892 07:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTlhMDc1ZGQ5OGMzYTRkZTlmMzAyYzM2ZTUxMjgzNmI4NmQ3ZDZlMjQ1NGYzODYyYWFlNTE0MWFiYjg3Yjk0NtdECCg=: 00:25:33.892 07:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:25:33.892 07:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:33.892 07:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:33.892 07:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:33.892 07:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:33.892 07:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:33.892 07:21:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:33.892 07:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:33.892 07:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.892 07:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:33.892 07:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:33.892 07:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:33.892 07:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:33.892 07:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:33.892 07:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:33.892 07:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:33.892 07:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:33.892 07:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:33.892 07:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:33.892 07:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:33.892 07:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:33.892 07:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:33.892 07:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:33.892 07:21:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.151 nvme0n1 00:25:34.152 07:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:34.152 07:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:34.152 07:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:34.152 07:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:34.152 07:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.152 07:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:34.152 07:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:34.152 07:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:34.152 07:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:34.152 07:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.152 07:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:34.152 07:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:34.152 07:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:25:34.152 07:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:34.152 07:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:34.152 07:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:34.152 07:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:34.152 07:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NDNjY2UyZjViYTA3Yjg0MTc4NDAzZTk0YzgwNDkzMzVlZGZhZDBlNTY4NGI3YTJlfx3cbg==: 00:25:34.152 07:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmVkNzUxOTJhMjgzNjliZjUyNjM3YjRiZGY0ZWUxMTZhMGJiNmRkMzNlMjFkNmZmLaVF+g==: 00:25:34.152 07:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:34.152 07:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:34.152 07:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDNjY2UyZjViYTA3Yjg0MTc4NDAzZTk0YzgwNDkzMzVlZGZhZDBlNTY4NGI3YTJlfx3cbg==: 00:25:34.152 07:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmVkNzUxOTJhMjgzNjliZjUyNjM3YjRiZGY0ZWUxMTZhMGJiNmRkMzNlMjFkNmZmLaVF+g==: ]] 00:25:34.152 07:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmVkNzUxOTJhMjgzNjliZjUyNjM3YjRiZGY0ZWUxMTZhMGJiNmRkMzNlMjFkNmZmLaVF+g==: 00:25:34.152 07:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:25:34.152 07:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:34.152 07:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:34.152 07:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:34.152 07:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:34.152 07:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:34.152 07:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:34.152 07:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:34.152 07:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.152 
07:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:34.152 07:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:34.152 07:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:34.152 07:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:34.152 07:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:34.152 07:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:34.152 07:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:34.152 07:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:34.152 07:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:34.152 07:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:34.152 07:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:34.152 07:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:34.152 07:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:34.152 07:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:34.152 07:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.410 nvme0n1 00:25:34.410 07:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:34.410 07:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:34.410 07:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:34.410 07:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:34.410 07:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.410 07:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:34.410 07:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:34.411 07:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:34.411 07:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:34.411 07:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.669 07:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:34.669 07:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:34.669 07:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:25:34.669 07:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:34.669 07:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:34.669 07:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:34.669 07:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:34.669 07:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDYwMDY2YTFkNzY2ZGJjY2Y5NmNmYTllMzE5NzU3MDS3GpaZ: 00:25:34.669 07:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTY4NzQwYWRlMDRhMDhiYWFhMDRiMjhkZTY4MjBmN2MvBSOz: 00:25:34.669 07:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:34.669 07:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@49 -- # echo ffdhe4096 00:25:34.669 07:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDYwMDY2YTFkNzY2ZGJjY2Y5NmNmYTllMzE5NzU3MDS3GpaZ: 00:25:34.669 07:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTY4NzQwYWRlMDRhMDhiYWFhMDRiMjhkZTY4MjBmN2MvBSOz: ]] 00:25:34.669 07:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTY4NzQwYWRlMDRhMDhiYWFhMDRiMjhkZTY4MjBmN2MvBSOz: 00:25:34.669 07:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:25:34.669 07:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:34.669 07:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:34.669 07:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:34.669 07:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:34.669 07:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:34.669 07:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:34.669 07:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:34.669 07:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.669 07:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:34.669 07:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:34.669 07:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:34.669 07:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:34.669 07:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 
00:25:34.669 07:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:34.669 07:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:34.669 07:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:34.669 07:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:34.669 07:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:34.669 07:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:34.669 07:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:34.669 07:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:34.669 07:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:34.669 07:21:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.928 nvme0n1 00:25:34.928 07:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:34.928 07:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:34.928 07:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:34.928 07:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:34.928 07:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.928 07:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:34.928 07:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 
00:25:34.928 07:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:34.928 07:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:34.928 07:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.928 07:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:34.928 07:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:34.928 07:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:25:34.928 07:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:34.928 07:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:34.928 07:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:34.928 07:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:34.928 07:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Mjg5MjQzZmFiNmEwODY3ZGI5ZGQ3YTkyMjQyZTZkYWE0NTgwMTMwNTk1MDVhNjQ34z8nYg==: 00:25:34.928 07:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjQ1YmU1ZjgxNzRkNzQ0NGYwOTZmMWVhYWYzNTc1NTA63hmQ: 00:25:34.928 07:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:34.928 07:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:34.928 07:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Mjg5MjQzZmFiNmEwODY3ZGI5ZGQ3YTkyMjQyZTZkYWE0NTgwMTMwNTk1MDVhNjQ34z8nYg==: 00:25:34.928 07:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjQ1YmU1ZjgxNzRkNzQ0NGYwOTZmMWVhYWYzNTc1NTA63hmQ: ]] 00:25:34.928 07:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:00:NjQ1YmU1ZjgxNzRkNzQ0NGYwOTZmMWVhYWYzNTc1NTA63hmQ: 00:25:34.928 07:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:25:34.928 07:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:34.928 07:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:34.928 07:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:34.928 07:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:34.928 07:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:34.928 07:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:34.928 07:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:34.928 07:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.928 07:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:34.928 07:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:34.928 07:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:34.928 07:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:34.928 07:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:34.928 07:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:34.928 07:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:34.928 07:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:34.928 07:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:34.928 07:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:25:34.928 07:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:25:34.928 07:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:25:34.928 07:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:25:34.928 07:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:34.928 07:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:35.186 nvme0n1
00:25:35.186 07:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:35.186 07:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:35.186 07:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:35.186 07:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:35.186 07:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:35.186 07:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:35.186 07:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:35.186 07:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:35.186 07:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:35.186 07:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:35.186 07:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:35.186 07:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:35.186 07:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4
00:25:35.186 07:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:35.186 07:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:25:35.186 07:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:25:35.186 07:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:25:35.186 07:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OWQ1ZWU2MjgzNjlhMmMyMjJhZDI5MmYwYjU3MzhiYTMzNmViYmMwNDg2ZDVjMjhmZGMwN2U4NDk3NmVjNDk2MYcacz8=:
00:25:35.186 07:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:25:35.186 07:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:25:35.186 07:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:25:35.187 07:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OWQ1ZWU2MjgzNjlhMmMyMjJhZDI5MmYwYjU3MzhiYTMzNmViYmMwNDg2ZDVjMjhmZGMwN2U4NDk3NmVjNDk2MYcacz8=:
00:25:35.187 07:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:25:35.187 07:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4
00:25:35.187 07:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:35.187 07:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:25:35.187 07:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:25:35.187 07:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:25:35.187 07:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:35.187 07:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:25:35.187 07:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:35.187 07:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:35.187 07:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:35.187 07:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:35.187 07:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:25:35.187 07:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:25:35.187 07:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:25:35.187 07:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:35.187 07:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:35.187 07:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:25:35.187 07:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:35.187 07:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:25:35.187 07:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:25:35.187 07:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:25:35.187 07:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:25:35.187 07:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:35.187 07:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:35.446 nvme0n1
00:25:35.446 07:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:35.446 07:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:35.446 07:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:35.446 07:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:35.446 07:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:35.446 07:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:35.446 07:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:35.446 07:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:35.446 07:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:35.446 07:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:35.446 07:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:35.446 07:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:25:35.446 07:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:35.446 07:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0
00:25:35.446 07:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:35.446 07:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:25:35.446 07:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:25:35.446 07:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:25:35.446 07:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Mjk0NmE0NGU4NzRjNWM3ODA3NTEwMjM0YTIzNTRlYmR1lp2A:
00:25:35.446 07:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTlhMDc1ZGQ5OGMzYTRkZTlmMzAyYzM2ZTUxMjgzNmI4NmQ3ZDZlMjQ1NGYzODYyYWFlNTE0MWFiYjg3Yjk0NtdECCg=:
00:25:35.446 07:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:25:35.446 07:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:25:35.446 07:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Mjk0NmE0NGU4NzRjNWM3ODA3NTEwMjM0YTIzNTRlYmR1lp2A:
00:25:35.446 07:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTlhMDc1ZGQ5OGMzYTRkZTlmMzAyYzM2ZTUxMjgzNmI4NmQ3ZDZlMjQ1NGYzODYyYWFlNTE0MWFiYjg3Yjk0NtdECCg=: ]]
00:25:35.446 07:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTlhMDc1ZGQ5OGMzYTRkZTlmMzAyYzM2ZTUxMjgzNmI4NmQ3ZDZlMjQ1NGYzODYyYWFlNTE0MWFiYjg3Yjk0NtdECCg=:
00:25:35.446 07:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0
00:25:35.446 07:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:35.446 07:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:25:35.446 07:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:25:35.446 07:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:25:35.446 07:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:35.446 07:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:25:35.446 07:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:35.446 07:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:35.446 07:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:35.446 07:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:35.446 07:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:25:35.446 07:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:25:35.446 07:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:25:35.446 07:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:35.446 07:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:35.446 07:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:25:35.446 07:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:35.446 07:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:25:35.446 07:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:25:35.446 07:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:25:35.446 07:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:25:35.446 07:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:35.446 07:21:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:36.014 nvme0n1
00:25:36.014 07:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:36.014 07:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:36.014 07:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:36.014 07:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:36.014 07:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:36.014 07:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:36.014 07:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:36.014 07:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:36.014 07:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:36.014 07:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:36.014 07:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:36.014 07:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:36.014 07:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1
00:25:36.014 07:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:36.014 07:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:25:36.014 07:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:25:36.014 07:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:25:36.014 07:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDNjY2UyZjViYTA3Yjg0MTc4NDAzZTk0YzgwNDkzMzVlZGZhZDBlNTY4NGI3YTJlfx3cbg==:
00:25:36.014 07:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmVkNzUxOTJhMjgzNjliZjUyNjM3YjRiZGY0ZWUxMTZhMGJiNmRkMzNlMjFkNmZmLaVF+g==:
00:25:36.014 07:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:25:36.014 07:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:25:36.014 07:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDNjY2UyZjViYTA3Yjg0MTc4NDAzZTk0YzgwNDkzMzVlZGZhZDBlNTY4NGI3YTJlfx3cbg==:
00:25:36.014 07:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmVkNzUxOTJhMjgzNjliZjUyNjM3YjRiZGY0ZWUxMTZhMGJiNmRkMzNlMjFkNmZmLaVF+g==: ]]
00:25:36.014 07:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmVkNzUxOTJhMjgzNjliZjUyNjM3YjRiZGY0ZWUxMTZhMGJiNmRkMzNlMjFkNmZmLaVF+g==:
00:25:36.014 07:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1
00:25:36.014 07:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:36.014 07:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:25:36.014 07:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:25:36.014 07:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:25:36.014 07:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:36.014 07:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:25:36.014 07:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:36.014 07:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:36.014 07:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:36.014 07:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:36.014 07:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:25:36.014 07:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:25:36.014 07:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:25:36.014 07:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:36.014 07:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:36.014 07:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:25:36.014 07:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:36.014 07:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:25:36.014 07:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:25:36.014 07:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:25:36.014 07:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:25:36.014 07:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:36.014 07:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:36.273 nvme0n1
00:25:36.273 07:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:36.273 07:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:36.273 07:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:36.273 07:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:36.273 07:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:36.273 07:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:36.273 07:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:36.273 07:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:36.273 07:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:36.273 07:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:36.532 07:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:36.532 07:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:36.532 07:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2
00:25:36.532 07:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:36.532 07:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:25:36.532 07:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:25:36.532 07:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:25:36.532 07:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDYwMDY2YTFkNzY2ZGJjY2Y5NmNmYTllMzE5NzU3MDS3GpaZ:
00:25:36.532 07:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTY4NzQwYWRlMDRhMDhiYWFhMDRiMjhkZTY4MjBmN2MvBSOz:
00:25:36.532 07:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:25:36.532 07:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:25:36.532 07:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDYwMDY2YTFkNzY2ZGJjY2Y5NmNmYTllMzE5NzU3MDS3GpaZ:
00:25:36.532 07:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTY4NzQwYWRlMDRhMDhiYWFhMDRiMjhkZTY4MjBmN2MvBSOz: ]]
00:25:36.532 07:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTY4NzQwYWRlMDRhMDhiYWFhMDRiMjhkZTY4MjBmN2MvBSOz:
00:25:36.532 07:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2
00:25:36.532 07:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:36.532 07:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:25:36.532 07:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:25:36.532 07:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:25:36.532 07:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:36.532 07:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:25:36.532 07:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:36.532 07:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:36.532 07:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:36.532 07:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:36.532 07:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:25:36.532 07:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:25:36.532 07:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:25:36.532 07:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:36.532 07:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:36.532 07:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:25:36.532 07:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:36.532 07:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:25:36.532 07:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:25:36.532 07:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:25:36.532 07:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:25:36.532 07:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:36.532 07:21:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:36.792 nvme0n1
00:25:36.792 07:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:36.792 07:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:36.792 07:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:36.792 07:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:36.792 07:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:36.792 07:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:36.792 07:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:36.792 07:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:36.792 07:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:36.792 07:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:36.792 07:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:36.792 07:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:36.792 07:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3
00:25:36.792 07:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:36.792 07:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:25:36.792 07:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:25:36.792 07:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:25:36.792 07:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Mjg5MjQzZmFiNmEwODY3ZGI5ZGQ3YTkyMjQyZTZkYWE0NTgwMTMwNTk1MDVhNjQ34z8nYg==:
00:25:36.792 07:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjQ1YmU1ZjgxNzRkNzQ0NGYwOTZmMWVhYWYzNTc1NTA63hmQ:
00:25:36.792 07:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:25:36.792 07:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:25:36.792 07:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Mjg5MjQzZmFiNmEwODY3ZGI5ZGQ3YTkyMjQyZTZkYWE0NTgwMTMwNTk1MDVhNjQ34z8nYg==:
00:25:36.792 07:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjQ1YmU1ZjgxNzRkNzQ0NGYwOTZmMWVhYWYzNTc1NTA63hmQ: ]]
00:25:36.792 07:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjQ1YmU1ZjgxNzRkNzQ0NGYwOTZmMWVhYWYzNTc1NTA63hmQ:
00:25:36.792 07:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3
00:25:36.792 07:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:36.792 07:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:25:36.792 07:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:25:36.792 07:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:25:36.792 07:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:36.792 07:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:25:36.792 07:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:36.792 07:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:36.792 07:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:36.792 07:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:36.792 07:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:25:36.792 07:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:25:36.792 07:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:25:36.792 07:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:36.792 07:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:36.792 07:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:25:36.792 07:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:36.792 07:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:25:36.792 07:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:25:36.792 07:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:25:36.792 07:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:25:36.792 07:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:36.792 07:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:37.359 nvme0n1
00:25:37.359 07:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:37.359 07:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:37.359 07:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:37.359 07:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:37.359 07:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:37.359 07:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:37.359 07:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:37.359 07:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:37.359 07:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:37.359 07:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:37.359 07:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:37.359 07:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:37.359 07:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4
00:25:37.359 07:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:37.359 07:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:25:37.359 07:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:25:37.359 07:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:25:37.359 07:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OWQ1ZWU2MjgzNjlhMmMyMjJhZDI5MmYwYjU3MzhiYTMzNmViYmMwNDg2ZDVjMjhmZGMwN2U4NDk3NmVjNDk2MYcacz8=:
00:25:37.359 07:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:25:37.359 07:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:25:37.359 07:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:25:37.359 07:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OWQ1ZWU2MjgzNjlhMmMyMjJhZDI5MmYwYjU3MzhiYTMzNmViYmMwNDg2ZDVjMjhmZGMwN2U4NDk3NmVjNDk2MYcacz8=:
00:25:37.359 07:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:25:37.360 07:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4
00:25:37.360 07:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:37.360 07:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:25:37.360 07:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:25:37.360 07:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:25:37.360 07:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:37.360 07:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:25:37.360 07:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:37.360 07:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:37.360 07:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:37.360 07:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:37.360 07:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:25:37.360 07:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:25:37.360 07:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:25:37.360 07:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:37.360 07:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:37.360 07:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:25:37.360 07:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:37.360 07:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:25:37.360 07:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:25:37.360 07:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:25:37.360 07:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:25:37.360 07:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:37.360 07:21:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:37.619 nvme0n1
00:25:37.619 07:21:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:37.619 07:21:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:37.619 07:21:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:37.619 07:21:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:37.619 07:21:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:37.619 07:21:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:37.619 07:21:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:37.619 07:21:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:37.879 07:21:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:37.879 07:21:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:37.879 07:21:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:37.879 07:21:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:25:37.879 07:21:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:37.879 07:21:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0
00:25:37.879 07:21:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:37.879 07:21:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:25:37.879 07:21:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:25:37.879 07:21:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:25:37.879 07:21:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Mjk0NmE0NGU4NzRjNWM3ODA3NTEwMjM0YTIzNTRlYmR1lp2A:
00:25:37.879 07:21:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTlhMDc1ZGQ5OGMzYTRkZTlmMzAyYzM2ZTUxMjgzNmI4NmQ3ZDZlMjQ1NGYzODYyYWFlNTE0MWFiYjg3Yjk0NtdECCg=:
00:25:37.879 07:21:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:25:37.879 07:21:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:25:37.879 07:21:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Mjk0NmE0NGU4NzRjNWM3ODA3NTEwMjM0YTIzNTRlYmR1lp2A:
00:25:37.879 07:21:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTlhMDc1ZGQ5OGMzYTRkZTlmMzAyYzM2ZTUxMjgzNmI4NmQ3ZDZlMjQ1NGYzODYyYWFlNTE0MWFiYjg3Yjk0NtdECCg=: ]]
00:25:37.879 07:21:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTlhMDc1ZGQ5OGMzYTRkZTlmMzAyYzM2ZTUxMjgzNmI4NmQ3ZDZlMjQ1NGYzODYyYWFlNTE0MWFiYjg3Yjk0NtdECCg=:
00:25:37.879 07:21:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0
00:25:37.879 07:21:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:37.879 07:21:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:25:37.879 07:21:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:25:37.879 07:21:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:25:37.879 07:21:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:37.879 07:21:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:25:37.879 07:21:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:37.879 07:21:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:37.879 07:21:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:37.879 07:21:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:37.879 07:21:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:25:37.879 07:21:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:25:37.879 07:21:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:25:37.879 07:21:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:37.879 07:21:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:37.879 07:21:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:25:37.879 07:21:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:37.879 07:21:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:25:37.879 07:21:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:25:37.879 07:21:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:25:37.879 07:21:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:25:37.879 07:21:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:37.879 07:21:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:38.447 nvme0n1
00:25:38.447 07:21:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:38.447 07:21:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:38.447 07:21:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:38.447 07:21:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:38.447 07:21:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:38.447 07:21:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:38.447 07:21:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:38.447 07:21:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:38.447 07:21:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:38.447 07:21:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:38.447 07:21:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:38.447 07:21:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:38.447 07:21:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1
00:25:38.447 07:21:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:38.447 07:21:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:25:38.447 07:21:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:25:38.447 07:21:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:25:38.447 07:21:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDNjY2UyZjViYTA3Yjg0MTc4NDAzZTk0YzgwNDkzMzVlZGZhZDBlNTY4NGI3YTJlfx3cbg==:
00:25:38.447 07:21:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmVkNzUxOTJhMjgzNjliZjUyNjM3YjRiZGY0ZWUxMTZhMGJiNmRkMzNlMjFkNmZmLaVF+g==:
00:25:38.448 07:21:42
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:38.448 07:21:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:38.448 07:21:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDNjY2UyZjViYTA3Yjg0MTc4NDAzZTk0YzgwNDkzMzVlZGZhZDBlNTY4NGI3YTJlfx3cbg==: 00:25:38.448 07:21:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmVkNzUxOTJhMjgzNjliZjUyNjM3YjRiZGY0ZWUxMTZhMGJiNmRkMzNlMjFkNmZmLaVF+g==: ]] 00:25:38.448 07:21:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmVkNzUxOTJhMjgzNjliZjUyNjM3YjRiZGY0ZWUxMTZhMGJiNmRkMzNlMjFkNmZmLaVF+g==: 00:25:38.448 07:21:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:25:38.448 07:21:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:38.448 07:21:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:38.448 07:21:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:38.448 07:21:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:38.448 07:21:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:38.448 07:21:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:38.448 07:21:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:38.448 07:21:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.448 07:21:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:38.448 07:21:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:38.448 07:21:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 
00:25:38.448 07:21:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:38.448 07:21:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:38.448 07:21:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:38.448 07:21:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:38.448 07:21:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:38.448 07:21:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:38.448 07:21:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:38.448 07:21:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:38.448 07:21:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:38.448 07:21:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:38.448 07:21:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:38.448 07:21:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.016 nvme0n1 00:25:39.017 07:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:39.017 07:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:39.017 07:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:39.017 07:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:39.017 07:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.017 
07:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:39.017 07:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:39.017 07:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:39.017 07:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:39.017 07:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.017 07:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:39.017 07:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:39.017 07:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:25:39.017 07:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:39.017 07:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:39.017 07:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:39.017 07:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:39.017 07:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDYwMDY2YTFkNzY2ZGJjY2Y5NmNmYTllMzE5NzU3MDS3GpaZ: 00:25:39.017 07:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTY4NzQwYWRlMDRhMDhiYWFhMDRiMjhkZTY4MjBmN2MvBSOz: 00:25:39.017 07:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:39.017 07:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:39.017 07:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDYwMDY2YTFkNzY2ZGJjY2Y5NmNmYTllMzE5NzU3MDS3GpaZ: 00:25:39.017 07:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:01:ZTY4NzQwYWRlMDRhMDhiYWFhMDRiMjhkZTY4MjBmN2MvBSOz: ]] 00:25:39.017 07:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTY4NzQwYWRlMDRhMDhiYWFhMDRiMjhkZTY4MjBmN2MvBSOz: 00:25:39.017 07:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:25:39.017 07:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:39.017 07:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:39.017 07:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:39.017 07:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:39.017 07:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:39.017 07:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:39.017 07:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:39.017 07:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.017 07:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:39.017 07:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:39.017 07:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:39.017 07:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:39.017 07:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:39.017 07:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:39.017 07:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:39.017 07:21:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:39.017 07:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:39.017 07:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:39.017 07:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:39.017 07:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:39.017 07:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:39.017 07:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:39.017 07:21:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.585 nvme0n1 00:25:39.585 07:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:39.585 07:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:39.585 07:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:39.585 07:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:39.585 07:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.585 07:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:39.585 07:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:39.585 07:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:39.585 07:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:39.585 07:21:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.585 07:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:39.585 07:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:39.585 07:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:25:39.585 07:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:39.585 07:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:39.585 07:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:39.585 07:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:39.585 07:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Mjg5MjQzZmFiNmEwODY3ZGI5ZGQ3YTkyMjQyZTZkYWE0NTgwMTMwNTk1MDVhNjQ34z8nYg==: 00:25:39.586 07:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjQ1YmU1ZjgxNzRkNzQ0NGYwOTZmMWVhYWYzNTc1NTA63hmQ: 00:25:39.586 07:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:39.586 07:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:39.586 07:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Mjg5MjQzZmFiNmEwODY3ZGI5ZGQ3YTkyMjQyZTZkYWE0NTgwMTMwNTk1MDVhNjQ34z8nYg==: 00:25:39.586 07:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjQ1YmU1ZjgxNzRkNzQ0NGYwOTZmMWVhYWYzNTc1NTA63hmQ: ]] 00:25:39.586 07:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjQ1YmU1ZjgxNzRkNzQ0NGYwOTZmMWVhYWYzNTc1NTA63hmQ: 00:25:39.586 07:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:25:39.586 07:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest 
dhgroup keyid ckey 00:25:39.586 07:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:39.586 07:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:39.586 07:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:39.586 07:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:39.586 07:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:39.586 07:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:39.586 07:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.845 07:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:39.845 07:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:39.845 07:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:39.845 07:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:39.845 07:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:39.845 07:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:39.845 07:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:39.845 07:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:39.845 07:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:39.845 07:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:39.845 07:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:39.845 07:21:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:39.845 07:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:39.845 07:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:39.845 07:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.412 nvme0n1 00:25:40.412 07:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:40.412 07:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:40.412 07:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:40.412 07:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:40.412 07:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.413 07:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:40.413 07:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:40.413 07:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:40.413 07:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:40.413 07:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.413 07:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:40.413 07:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:40.413 07:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:25:40.413 07:21:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:40.413 07:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:40.413 07:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:40.413 07:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:40.413 07:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OWQ1ZWU2MjgzNjlhMmMyMjJhZDI5MmYwYjU3MzhiYTMzNmViYmMwNDg2ZDVjMjhmZGMwN2U4NDk3NmVjNDk2MYcacz8=: 00:25:40.413 07:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:40.413 07:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:40.413 07:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:40.413 07:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OWQ1ZWU2MjgzNjlhMmMyMjJhZDI5MmYwYjU3MzhiYTMzNmViYmMwNDg2ZDVjMjhmZGMwN2U4NDk3NmVjNDk2MYcacz8=: 00:25:40.413 07:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:40.413 07:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:25:40.413 07:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:40.413 07:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:40.413 07:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:40.413 07:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:40.413 07:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:40.413 07:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:40.413 07:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:25:40.413 07:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.413 07:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:40.413 07:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:40.413 07:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:40.413 07:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:40.413 07:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:40.413 07:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:40.413 07:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:40.413 07:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:40.413 07:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:40.413 07:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:40.413 07:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:40.413 07:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:40.413 07:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:40.413 07:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:40.413 07:21:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.981 nvme0n1 00:25:40.981 07:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:40.981 
07:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:40.981 07:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:40.981 07:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:40.981 07:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.981 07:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:40.981 07:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:40.981 07:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:40.981 07:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:40.981 07:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.981 07:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:40.981 07:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:25:40.981 07:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:40.981 07:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:40.981 07:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:25:40.981 07:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:40.981 07:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:40.981 07:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:40.981 07:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:40.981 07:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:Mjk0NmE0NGU4NzRjNWM3ODA3NTEwMjM0YTIzNTRlYmR1lp2A: 00:25:40.981 07:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTlhMDc1ZGQ5OGMzYTRkZTlmMzAyYzM2ZTUxMjgzNmI4NmQ3ZDZlMjQ1NGYzODYyYWFlNTE0MWFiYjg3Yjk0NtdECCg=: 00:25:40.981 07:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:40.981 07:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:40.981 07:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Mjk0NmE0NGU4NzRjNWM3ODA3NTEwMjM0YTIzNTRlYmR1lp2A: 00:25:40.981 07:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTlhMDc1ZGQ5OGMzYTRkZTlmMzAyYzM2ZTUxMjgzNmI4NmQ3ZDZlMjQ1NGYzODYyYWFlNTE0MWFiYjg3Yjk0NtdECCg=: ]] 00:25:40.981 07:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTlhMDc1ZGQ5OGMzYTRkZTlmMzAyYzM2ZTUxMjgzNmI4NmQ3ZDZlMjQ1NGYzODYyYWFlNTE0MWFiYjg3Yjk0NtdECCg=: 00:25:40.981 07:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:25:40.981 07:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:40.981 07:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:40.981 07:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:40.981 07:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:40.981 07:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:40.981 07:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:40.981 07:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:40.981 07:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:25:40.981 07:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:40.981 07:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:40.981 07:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:40.981 07:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:40.982 07:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:40.982 07:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:40.982 07:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:40.982 07:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:40.982 07:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:40.982 07:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:40.982 07:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:40.982 07:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:40.982 07:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:40.982 07:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:40.982 07:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.241 nvme0n1 00:25:41.241 07:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:41.241 07:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:41.241 07:21:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:41.241 07:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:41.241 07:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:41.241 07:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:41.241 07:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:41.241 07:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:41.241 07:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:41.241 07:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:41.241 07:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:41.241 07:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:41.241 07:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1
00:25:41.241 07:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:41.241 07:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:25:41.241 07:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:25:41.241 07:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:25:41.241 07:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDNjY2UyZjViYTA3Yjg0MTc4NDAzZTk0YzgwNDkzMzVlZGZhZDBlNTY4NGI3YTJlfx3cbg==:
00:25:41.241 07:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmVkNzUxOTJhMjgzNjliZjUyNjM3YjRiZGY0ZWUxMTZhMGJiNmRkMzNlMjFkNmZmLaVF+g==:
00:25:41.241 07:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:25:41.241 07:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:25:41.241 07:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDNjY2UyZjViYTA3Yjg0MTc4NDAzZTk0YzgwNDkzMzVlZGZhZDBlNTY4NGI3YTJlfx3cbg==:
00:25:41.241 07:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmVkNzUxOTJhMjgzNjliZjUyNjM3YjRiZGY0ZWUxMTZhMGJiNmRkMzNlMjFkNmZmLaVF+g==: ]]
00:25:41.241 07:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmVkNzUxOTJhMjgzNjliZjUyNjM3YjRiZGY0ZWUxMTZhMGJiNmRkMzNlMjFkNmZmLaVF+g==:
00:25:41.241 07:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1
00:25:41.241 07:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:41.241 07:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:25:41.241 07:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:25:41.241 07:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:25:41.241 07:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:41.241 07:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:25:41.241 07:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:41.241 07:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:41.241 07:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:41.241 07:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:41.241 07:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:25:41.241 07:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:25:41.241 07:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:25:41.241 07:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:41.241 07:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:41.241 07:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:25:41.241 07:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:41.241 07:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:25:41.241 07:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:25:41.241 07:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:25:41.241 07:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:25:41.241 07:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:41.241 07:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:41.501 nvme0n1
00:25:41.501 07:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:41.501 07:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:41.501 07:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:41.501 07:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:41.501 07:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:41.501 07:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:41.501 07:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:41.501 07:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:41.501 07:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:41.501 07:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:41.501 07:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:41.501 07:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:41.501 07:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2
00:25:41.501 07:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:41.501 07:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:25:41.501 07:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:25:41.501 07:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:25:41.501 07:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDYwMDY2YTFkNzY2ZGJjY2Y5NmNmYTllMzE5NzU3MDS3GpaZ:
00:25:41.501 07:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTY4NzQwYWRlMDRhMDhiYWFhMDRiMjhkZTY4MjBmN2MvBSOz:
00:25:41.501 07:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:25:41.501 07:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:25:41.501 07:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDYwMDY2YTFkNzY2ZGJjY2Y5NmNmYTllMzE5NzU3MDS3GpaZ:
00:25:41.501 07:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTY4NzQwYWRlMDRhMDhiYWFhMDRiMjhkZTY4MjBmN2MvBSOz: ]]
00:25:41.501 07:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTY4NzQwYWRlMDRhMDhiYWFhMDRiMjhkZTY4MjBmN2MvBSOz:
00:25:41.501 07:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2
00:25:41.501 07:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:41.501 07:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:25:41.501 07:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:25:41.501 07:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:25:41.501 07:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:41.501 07:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:25:41.501 07:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:41.501 07:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:41.501 07:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:41.501 07:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:41.501 07:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:25:41.501 07:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:25:41.501 07:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:25:41.501 07:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:41.501 07:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:41.501 07:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:25:41.501 07:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:41.501 07:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:25:41.501 07:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:25:41.501 07:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:25:41.501 07:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:25:41.501 07:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:41.501 07:21:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:41.501 nvme0n1
00:25:41.501 07:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:41.501 07:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:41.501 07:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:41.501 07:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:41.501 07:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:41.761 07:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:41.761 07:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:41.761 07:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:41.761 07:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:41.761 07:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:41.761 07:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:41.761 07:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:41.761 07:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3
00:25:41.761 07:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:41.761 07:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:25:41.761 07:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:25:41.761 07:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:25:41.761 07:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Mjg5MjQzZmFiNmEwODY3ZGI5ZGQ3YTkyMjQyZTZkYWE0NTgwMTMwNTk1MDVhNjQ34z8nYg==:
00:25:41.761 07:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjQ1YmU1ZjgxNzRkNzQ0NGYwOTZmMWVhYWYzNTc1NTA63hmQ:
00:25:41.761 07:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:25:41.761 07:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:25:41.761 07:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Mjg5MjQzZmFiNmEwODY3ZGI5ZGQ3YTkyMjQyZTZkYWE0NTgwMTMwNTk1MDVhNjQ34z8nYg==:
00:25:41.761 07:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjQ1YmU1ZjgxNzRkNzQ0NGYwOTZmMWVhYWYzNTc1NTA63hmQ: ]]
00:25:41.761 07:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjQ1YmU1ZjgxNzRkNzQ0NGYwOTZmMWVhYWYzNTc1NTA63hmQ:
00:25:41.761 07:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3
00:25:41.761 07:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:41.761 07:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:25:41.761 07:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:25:41.761 07:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:25:41.761 07:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:41.761 07:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:25:41.761 07:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:41.761 07:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:41.761 07:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:41.761 07:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:41.761 07:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:25:41.761 07:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:25:41.761 07:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:25:41.761 07:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:41.761 07:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:41.761 07:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:25:41.761 07:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:41.761 07:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:25:41.761 07:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:25:41.761 07:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:25:41.761 07:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:25:41.761 07:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:41.761 07:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:41.761 nvme0n1
00:25:41.761 07:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:41.761 07:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:41.761 07:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:41.761 07:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:41.761 07:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:41.761 07:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:41.761 07:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:41.761 07:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:41.761 07:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:42.020 07:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:42.021 07:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:42.021 07:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:42.021 07:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4
00:25:42.021 07:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:42.021 07:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:25:42.021 07:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:25:42.021 07:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:25:42.021 07:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OWQ1ZWU2MjgzNjlhMmMyMjJhZDI5MmYwYjU3MzhiYTMzNmViYmMwNDg2ZDVjMjhmZGMwN2U4NDk3NmVjNDk2MYcacz8=:
00:25:42.021 07:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:25:42.021 07:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:25:42.021 07:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:25:42.021 07:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OWQ1ZWU2MjgzNjlhMmMyMjJhZDI5MmYwYjU3MzhiYTMzNmViYmMwNDg2ZDVjMjhmZGMwN2U4NDk3NmVjNDk2MYcacz8=:
00:25:42.021 07:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:25:42.021 07:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4
00:25:42.021 07:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:42.021 07:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:25:42.021 07:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:25:42.021 07:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:25:42.021 07:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:42.021 07:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:25:42.021 07:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:42.021 07:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:42.021 07:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:42.021 07:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:42.021 07:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:25:42.021 07:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:25:42.021 07:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:25:42.021 07:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:42.021 07:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:42.021 07:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:25:42.021 07:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:42.021 07:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:25:42.021 07:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:25:42.021 07:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:25:42.021 07:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:25:42.021 07:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:42.021 07:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:42.021 nvme0n1
00:25:42.021 07:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:42.021 07:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:42.021 07:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:42.021 07:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:42.021 07:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:42.021 07:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:42.021 07:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:42.021 07:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:42.021 07:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:42.021 07:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:42.021 07:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:42.021 07:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:25:42.021 07:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:42.021 07:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0
00:25:42.021 07:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:42.021 07:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:25:42.021 07:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:25:42.021 07:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:25:42.021 07:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Mjk0NmE0NGU4NzRjNWM3ODA3NTEwMjM0YTIzNTRlYmR1lp2A:
00:25:42.021 07:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTlhMDc1ZGQ5OGMzYTRkZTlmMzAyYzM2ZTUxMjgzNmI4NmQ3ZDZlMjQ1NGYzODYyYWFlNTE0MWFiYjg3Yjk0NtdECCg=:
00:25:42.021 07:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:25:42.021 07:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:25:42.021 07:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Mjk0NmE0NGU4NzRjNWM3ODA3NTEwMjM0YTIzNTRlYmR1lp2A:
00:25:42.021 07:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTlhMDc1ZGQ5OGMzYTRkZTlmMzAyYzM2ZTUxMjgzNmI4NmQ3ZDZlMjQ1NGYzODYyYWFlNTE0MWFiYjg3Yjk0NtdECCg=: ]]
00:25:42.021 07:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTlhMDc1ZGQ5OGMzYTRkZTlmMzAyYzM2ZTUxMjgzNmI4NmQ3ZDZlMjQ1NGYzODYyYWFlNTE0MWFiYjg3Yjk0NtdECCg=:
00:25:42.021 07:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0
00:25:42.021 07:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:42.021 07:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:25:42.021 07:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:25:42.021 07:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:25:42.021 07:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:42.021 07:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:25:42.021 07:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:42.021 07:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:42.281 07:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:42.281 07:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:42.281 07:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:25:42.281 07:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:25:42.281 07:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:25:42.281 07:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:42.281 07:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:42.281 07:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:25:42.281 07:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:42.281 07:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:25:42.281 07:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:25:42.281 07:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:25:42.281 07:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:25:42.281 07:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:42.281 07:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:42.281 nvme0n1
00:25:42.281 07:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:42.281 07:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:42.281 07:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:42.281 07:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:42.281 07:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:42.281 07:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:42.281 07:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:42.281 07:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:42.281 07:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:42.281 07:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:42.281 07:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:42.281 07:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:42.281 07:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1
00:25:42.281 07:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:42.281 07:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:25:42.281 07:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:25:42.281 07:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:25:42.281 07:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDNjY2UyZjViYTA3Yjg0MTc4NDAzZTk0YzgwNDkzMzVlZGZhZDBlNTY4NGI3YTJlfx3cbg==:
00:25:42.281 07:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmVkNzUxOTJhMjgzNjliZjUyNjM3YjRiZGY0ZWUxMTZhMGJiNmRkMzNlMjFkNmZmLaVF+g==:
00:25:42.281 07:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:25:42.281 07:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:25:42.281 07:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDNjY2UyZjViYTA3Yjg0MTc4NDAzZTk0YzgwNDkzMzVlZGZhZDBlNTY4NGI3YTJlfx3cbg==:
00:25:42.281 07:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmVkNzUxOTJhMjgzNjliZjUyNjM3YjRiZGY0ZWUxMTZhMGJiNmRkMzNlMjFkNmZmLaVF+g==: ]]
00:25:42.281 07:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmVkNzUxOTJhMjgzNjliZjUyNjM3YjRiZGY0ZWUxMTZhMGJiNmRkMzNlMjFkNmZmLaVF+g==:
00:25:42.281 07:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1
00:25:42.281 07:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:42.281 07:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:25:42.281 07:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:25:42.281 07:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:25:42.281 07:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:42.281 07:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:25:42.281 07:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:42.281 07:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:42.540 07:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:42.540 07:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:42.540 07:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:25:42.540 07:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:25:42.540 07:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:25:42.540 07:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:42.540 07:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:42.540 07:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:25:42.540 07:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:42.540 07:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:25:42.540 07:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:25:42.540 07:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:25:42.540 07:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:25:42.540 07:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:42.540 07:21:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:42.540 nvme0n1
00:25:42.540 07:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:42.540 07:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:42.540 07:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:42.540 07:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:42.540 07:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:42.540 07:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:42.540 07:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:42.540 07:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:42.540 07:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:42.540 07:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:42.540 07:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:42.540 07:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:42.540 07:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2
00:25:42.541 07:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:42.541 07:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:25:42.541 07:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:25:42.541 07:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:25:42.541 07:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDYwMDY2YTFkNzY2ZGJjY2Y5NmNmYTllMzE5NzU3MDS3GpaZ:
00:25:42.541 07:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTY4NzQwYWRlMDRhMDhiYWFhMDRiMjhkZTY4MjBmN2MvBSOz:
00:25:42.541 07:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:25:42.541 07:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:25:42.541 07:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDYwMDY2YTFkNzY2ZGJjY2Y5NmNmYTllMzE5NzU3MDS3GpaZ:
00:25:42.541 07:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTY4NzQwYWRlMDRhMDhiYWFhMDRiMjhkZTY4MjBmN2MvBSOz: ]]
00:25:42.541 07:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTY4NzQwYWRlMDRhMDhiYWFhMDRiMjhkZTY4MjBmN2MvBSOz:
00:25:42.541 07:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2
00:25:42.541 07:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:42.541 07:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:25:42.541 07:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:25:42.541 07:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:25:42.541 07:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:42.541 07:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:25:42.541 07:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:42.541 07:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:42.541 07:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:42.541 07:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:42.541 07:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:25:42.541 07:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:25:42.541 07:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:25:42.541 07:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:42.541 07:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:42.541 07:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:25:42.541 07:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:42.541 07:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:25:42.800 07:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:25:42.800 07:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:25:42.800 07:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:25:42.800 07:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:42.800 07:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:42.800 nvme0n1
00:25:42.800 07:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:42.800 07:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:42.800 07:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:42.800 07:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:42.800 07:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:42.800 07:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:42.800 07:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:42.800 07:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:42.800 07:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:42.800 07:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:42.800 07:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:42.800 07:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:42.800 07:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3
00:25:42.800 07:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:42.800 07:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:25:42.800 07:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:25:42.801 07:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:25:42.801 07:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Mjg5MjQzZmFiNmEwODY3ZGI5ZGQ3YTkyMjQyZTZkYWE0NTgwMTMwNTk1MDVhNjQ34z8nYg==:
00:25:42.801 07:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjQ1YmU1ZjgxNzRkNzQ0NGYwOTZmMWVhYWYzNTc1NTA63hmQ:
00:25:42.801 07:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:25:42.801 07:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:25:42.801 07:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Mjg5MjQzZmFiNmEwODY3ZGI5ZGQ3YTkyMjQyZTZkYWE0NTgwMTMwNTk1MDVhNjQ34z8nYg==:
00:25:42.801 07:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjQ1YmU1ZjgxNzRkNzQ0NGYwOTZmMWVhYWYzNTc1NTA63hmQ: ]]
00:25:42.801 07:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjQ1YmU1ZjgxNzRkNzQ0NGYwOTZmMWVhYWYzNTc1NTA63hmQ:
00:25:42.801 07:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3
00:25:42.801 07:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:42.801 07:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:25:42.801 07:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:25:42.801 07:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:42.801 07:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:42.801 07:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:42.801 07:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:42.801 07:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.801 07:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:42.801 07:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:42.801 07:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:42.801 07:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:42.801 07:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:42.801 07:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:42.801 07:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:42.801 07:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:42.801 07:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:42.801 07:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:42.801 07:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:42.801 07:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:42.801 07:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:42.801 07:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:42.801 07:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.060 nvme0n1 00:25:43.060 07:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:43.060 07:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:43.060 07:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:43.060 07:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:43.060 07:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.060 07:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:43.060 07:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:43.060 07:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:43.060 07:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:43.060 07:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.060 07:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:43.060 07:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:43.060 07:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:25:43.060 07:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:43.060 07:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:43.060 07:21:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:43.060 07:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:43.060 07:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OWQ1ZWU2MjgzNjlhMmMyMjJhZDI5MmYwYjU3MzhiYTMzNmViYmMwNDg2ZDVjMjhmZGMwN2U4NDk3NmVjNDk2MYcacz8=: 00:25:43.060 07:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:43.060 07:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:43.060 07:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:43.060 07:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OWQ1ZWU2MjgzNjlhMmMyMjJhZDI5MmYwYjU3MzhiYTMzNmViYmMwNDg2ZDVjMjhmZGMwN2U4NDk3NmVjNDk2MYcacz8=: 00:25:43.060 07:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:43.060 07:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:25:43.060 07:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:43.060 07:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:43.060 07:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:43.060 07:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:43.060 07:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:43.060 07:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:43.060 07:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:43.060 07:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.060 07:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:43.060 07:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:43.060 07:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:43.060 07:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:43.060 07:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:43.060 07:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:43.060 07:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:43.060 07:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:43.060 07:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:43.060 07:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:43.060 07:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:43.060 07:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:43.060 07:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:43.060 07:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:43.060 07:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.320 nvme0n1 00:25:43.320 07:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:43.320 07:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:43.320 07:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:43.320 
07:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:43.320 07:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.320 07:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:43.320 07:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:43.320 07:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:43.320 07:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:43.320 07:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.320 07:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:43.320 07:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:43.320 07:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:43.320 07:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:25:43.320 07:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:43.320 07:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:43.320 07:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:43.320 07:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:43.320 07:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Mjk0NmE0NGU4NzRjNWM3ODA3NTEwMjM0YTIzNTRlYmR1lp2A: 00:25:43.320 07:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTlhMDc1ZGQ5OGMzYTRkZTlmMzAyYzM2ZTUxMjgzNmI4NmQ3ZDZlMjQ1NGYzODYyYWFlNTE0MWFiYjg3Yjk0NtdECCg=: 00:25:43.320 07:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # 
echo 'hmac(sha512)' 00:25:43.320 07:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:43.320 07:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Mjk0NmE0NGU4NzRjNWM3ODA3NTEwMjM0YTIzNTRlYmR1lp2A: 00:25:43.320 07:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTlhMDc1ZGQ5OGMzYTRkZTlmMzAyYzM2ZTUxMjgzNmI4NmQ3ZDZlMjQ1NGYzODYyYWFlNTE0MWFiYjg3Yjk0NtdECCg=: ]] 00:25:43.320 07:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTlhMDc1ZGQ5OGMzYTRkZTlmMzAyYzM2ZTUxMjgzNmI4NmQ3ZDZlMjQ1NGYzODYyYWFlNTE0MWFiYjg3Yjk0NtdECCg=: 00:25:43.320 07:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:25:43.320 07:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:43.320 07:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:43.320 07:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:43.320 07:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:43.320 07:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:43.320 07:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:43.320 07:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:43.320 07:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.320 07:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:43.320 07:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:43.320 07:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:43.320 07:21:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:43.320 07:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:43.320 07:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:43.320 07:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:43.320 07:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:43.320 07:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:43.320 07:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:43.320 07:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:43.320 07:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:43.320 07:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:43.320 07:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:43.320 07:21:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.579 nvme0n1 00:25:43.579 07:21:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:43.579 07:21:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:43.579 07:21:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:43.579 07:21:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:43.579 07:21:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.579 07:21:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:43.579 07:21:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:43.579 07:21:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:43.579 07:21:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:43.579 07:21:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.838 07:21:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:43.838 07:21:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:43.838 07:21:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:25:43.838 07:21:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:43.838 07:21:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:43.838 07:21:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:43.838 07:21:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:43.838 07:21:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDNjY2UyZjViYTA3Yjg0MTc4NDAzZTk0YzgwNDkzMzVlZGZhZDBlNTY4NGI3YTJlfx3cbg==: 00:25:43.838 07:21:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmVkNzUxOTJhMjgzNjliZjUyNjM3YjRiZGY0ZWUxMTZhMGJiNmRkMzNlMjFkNmZmLaVF+g==: 00:25:43.838 07:21:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:43.838 07:21:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:43.838 07:21:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDNjY2UyZjViYTA3Yjg0MTc4NDAzZTk0YzgwNDkzMzVlZGZhZDBlNTY4NGI3YTJlfx3cbg==: 00:25:43.838 07:21:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmVkNzUxOTJhMjgzNjliZjUyNjM3YjRiZGY0ZWUxMTZhMGJiNmRkMzNlMjFkNmZmLaVF+g==: ]] 00:25:43.838 07:21:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmVkNzUxOTJhMjgzNjliZjUyNjM3YjRiZGY0ZWUxMTZhMGJiNmRkMzNlMjFkNmZmLaVF+g==: 00:25:43.838 07:21:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:25:43.838 07:21:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:43.838 07:21:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:43.838 07:21:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:43.838 07:21:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:43.838 07:21:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:43.838 07:21:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:43.838 07:21:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:43.838 07:21:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.838 07:21:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:43.838 07:21:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:43.838 07:21:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:43.838 07:21:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:43.838 07:21:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:43.838 07:21:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:43.838 07:21:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:43.838 07:21:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:43.838 07:21:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:43.838 07:21:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:43.838 07:21:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:43.838 07:21:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:43.838 07:21:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:43.838 07:21:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:43.838 07:21:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.097 nvme0n1 00:25:44.097 07:21:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:44.097 07:21:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:44.097 07:21:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:44.097 07:21:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:44.097 07:21:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.097 07:21:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:44.097 07:21:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:44.097 07:21:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:44.097 07:21:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:44.097 07:21:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.097 07:21:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:44.097 07:21:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:44.097 07:21:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:25:44.097 07:21:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:44.097 07:21:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:44.097 07:21:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:44.097 07:21:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:44.098 07:21:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDYwMDY2YTFkNzY2ZGJjY2Y5NmNmYTllMzE5NzU3MDS3GpaZ: 00:25:44.098 07:21:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTY4NzQwYWRlMDRhMDhiYWFhMDRiMjhkZTY4MjBmN2MvBSOz: 00:25:44.098 07:21:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:44.098 07:21:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:44.098 07:21:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDYwMDY2YTFkNzY2ZGJjY2Y5NmNmYTllMzE5NzU3MDS3GpaZ: 00:25:44.098 07:21:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTY4NzQwYWRlMDRhMDhiYWFhMDRiMjhkZTY4MjBmN2MvBSOz: ]] 00:25:44.098 07:21:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTY4NzQwYWRlMDRhMDhiYWFhMDRiMjhkZTY4MjBmN2MvBSOz: 00:25:44.098 07:21:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:25:44.098 07:21:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:44.098 07:21:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:44.098 07:21:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:44.098 07:21:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:44.098 07:21:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:44.098 07:21:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:44.098 07:21:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:44.098 07:21:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.098 07:21:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:44.098 07:21:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:44.098 07:21:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:44.098 07:21:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:44.098 07:21:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:44.098 07:21:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:44.098 07:21:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:44.098 07:21:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:44.098 07:21:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:44.098 07:21:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:44.098 07:21:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:44.098 07:21:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:44.098 07:21:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:44.098 07:21:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:44.098 07:21:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.357 nvme0n1 00:25:44.357 07:21:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:44.357 07:21:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:44.357 07:21:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:44.357 07:21:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:44.357 07:21:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.357 07:21:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:44.357 07:21:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:44.357 07:21:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:44.357 07:21:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:44.357 07:21:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.357 07:21:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:44.357 07:21:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:44.357 07:21:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe4096 3 00:25:44.357 07:21:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:44.357 07:21:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:44.357 07:21:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:44.357 07:21:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:44.357 07:21:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Mjg5MjQzZmFiNmEwODY3ZGI5ZGQ3YTkyMjQyZTZkYWE0NTgwMTMwNTk1MDVhNjQ34z8nYg==: 00:25:44.357 07:21:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjQ1YmU1ZjgxNzRkNzQ0NGYwOTZmMWVhYWYzNTc1NTA63hmQ: 00:25:44.357 07:21:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:44.357 07:21:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:44.357 07:21:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Mjg5MjQzZmFiNmEwODY3ZGI5ZGQ3YTkyMjQyZTZkYWE0NTgwMTMwNTk1MDVhNjQ34z8nYg==: 00:25:44.357 07:21:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjQ1YmU1ZjgxNzRkNzQ0NGYwOTZmMWVhYWYzNTc1NTA63hmQ: ]] 00:25:44.357 07:21:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjQ1YmU1ZjgxNzRkNzQ0NGYwOTZmMWVhYWYzNTc1NTA63hmQ: 00:25:44.357 07:21:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:25:44.357 07:21:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:44.357 07:21:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:44.357 07:21:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:44.357 07:21:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:44.357 07:21:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:44.357 07:21:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:44.357 07:21:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:44.357 07:21:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.357 07:21:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:44.357 07:21:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:44.357 07:21:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:44.357 07:21:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:44.357 07:21:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:44.357 07:21:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:44.357 07:21:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:44.357 07:21:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:44.357 07:21:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:44.357 07:21:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:44.357 07:21:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:44.357 07:21:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:44.358 07:21:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:44.358 07:21:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:44.358 07:21:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.616 nvme0n1 00:25:44.616 07:21:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:44.616 07:21:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:44.616 07:21:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:44.616 07:21:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:44.616 07:21:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.616 07:21:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:44.616 07:21:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:44.616 07:21:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:44.616 07:21:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:44.616 07:21:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.616 07:21:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:44.616 07:21:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:44.616 07:21:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:25:44.616 07:21:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:44.616 07:21:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:44.616 07:21:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:44.616 07:21:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- 
# keyid=4 00:25:44.616 07:21:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OWQ1ZWU2MjgzNjlhMmMyMjJhZDI5MmYwYjU3MzhiYTMzNmViYmMwNDg2ZDVjMjhmZGMwN2U4NDk3NmVjNDk2MYcacz8=: 00:25:44.616 07:21:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:44.616 07:21:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:44.617 07:21:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:44.617 07:21:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OWQ1ZWU2MjgzNjlhMmMyMjJhZDI5MmYwYjU3MzhiYTMzNmViYmMwNDg2ZDVjMjhmZGMwN2U4NDk3NmVjNDk2MYcacz8=: 00:25:44.617 07:21:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:44.617 07:21:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:25:44.617 07:21:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:44.617 07:21:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:44.617 07:21:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:44.617 07:21:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:44.874 07:21:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:44.874 07:21:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:44.874 07:21:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:44.874 07:21:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.874 07:21:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:44.874 07:21:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:44.874 
07:21:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:44.874 07:21:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:44.874 07:21:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:44.874 07:21:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:44.874 07:21:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:44.874 07:21:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:44.874 07:21:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:44.874 07:21:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:44.874 07:21:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:44.874 07:21:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:44.874 07:21:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:44.874 07:21:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:44.874 07:21:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.132 nvme0n1 00:25:45.132 07:21:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:45.132 07:21:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:45.132 07:21:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:45.132 07:21:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:45.132 07:21:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:25:45.132 07:21:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:45.132 07:21:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:45.132 07:21:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:45.132 07:21:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:45.132 07:21:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.132 07:21:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:45.132 07:21:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:45.132 07:21:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:45.132 07:21:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:25:45.132 07:21:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:45.132 07:21:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:45.132 07:21:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:45.132 07:21:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:45.132 07:21:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Mjk0NmE0NGU4NzRjNWM3ODA3NTEwMjM0YTIzNTRlYmR1lp2A: 00:25:45.132 07:21:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTlhMDc1ZGQ5OGMzYTRkZTlmMzAyYzM2ZTUxMjgzNmI4NmQ3ZDZlMjQ1NGYzODYyYWFlNTE0MWFiYjg3Yjk0NtdECCg=: 00:25:45.132 07:21:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:45.132 07:21:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:45.132 07:21:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Mjk0NmE0NGU4NzRjNWM3ODA3NTEwMjM0YTIzNTRlYmR1lp2A: 00:25:45.132 07:21:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTlhMDc1ZGQ5OGMzYTRkZTlmMzAyYzM2ZTUxMjgzNmI4NmQ3ZDZlMjQ1NGYzODYyYWFlNTE0MWFiYjg3Yjk0NtdECCg=: ]] 00:25:45.132 07:21:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTlhMDc1ZGQ5OGMzYTRkZTlmMzAyYzM2ZTUxMjgzNmI4NmQ3ZDZlMjQ1NGYzODYyYWFlNTE0MWFiYjg3Yjk0NtdECCg=: 00:25:45.132 07:21:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:25:45.132 07:21:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:45.132 07:21:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:45.132 07:21:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:45.132 07:21:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:45.132 07:21:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:45.132 07:21:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:45.132 07:21:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:45.132 07:21:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.132 07:21:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:45.132 07:21:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:45.132 07:21:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:45.132 07:21:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:45.132 07:21:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 
-- # local -A ip_candidates 00:25:45.132 07:21:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:45.132 07:21:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:45.132 07:21:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:45.132 07:21:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:45.132 07:21:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:45.132 07:21:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:45.132 07:21:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:45.132 07:21:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:45.132 07:21:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:45.132 07:21:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.391 nvme0n1 00:25:45.391 07:21:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:45.391 07:21:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:45.391 07:21:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:45.391 07:21:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:45.391 07:21:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.391 07:21:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:45.391 07:21:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:25:45.391 07:21:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:45.391 07:21:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:45.391 07:21:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.649 07:21:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:45.649 07:21:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:45.649 07:21:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:25:45.649 07:21:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:45.649 07:21:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:45.649 07:21:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:45.649 07:21:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:45.649 07:21:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDNjY2UyZjViYTA3Yjg0MTc4NDAzZTk0YzgwNDkzMzVlZGZhZDBlNTY4NGI3YTJlfx3cbg==: 00:25:45.649 07:21:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmVkNzUxOTJhMjgzNjliZjUyNjM3YjRiZGY0ZWUxMTZhMGJiNmRkMzNlMjFkNmZmLaVF+g==: 00:25:45.649 07:21:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:45.649 07:21:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:45.649 07:21:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDNjY2UyZjViYTA3Yjg0MTc4NDAzZTk0YzgwNDkzMzVlZGZhZDBlNTY4NGI3YTJlfx3cbg==: 00:25:45.649 07:21:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmVkNzUxOTJhMjgzNjliZjUyNjM3YjRiZGY0ZWUxMTZhMGJiNmRkMzNlMjFkNmZmLaVF+g==: ]] 00:25:45.649 07:21:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmVkNzUxOTJhMjgzNjliZjUyNjM3YjRiZGY0ZWUxMTZhMGJiNmRkMzNlMjFkNmZmLaVF+g==: 00:25:45.649 07:21:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:25:45.649 07:21:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:45.649 07:21:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:45.649 07:21:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:45.649 07:21:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:45.650 07:21:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:45.650 07:21:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:45.650 07:21:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:45.650 07:21:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.650 07:21:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:45.650 07:21:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:45.650 07:21:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:45.650 07:21:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:45.650 07:21:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:45.650 07:21:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:45.650 07:21:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:45.650 07:21:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # 
[[ -z tcp ]] 00:25:45.650 07:21:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:45.650 07:21:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:45.650 07:21:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:45.650 07:21:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:45.650 07:21:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:45.650 07:21:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:45.650 07:21:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.909 nvme0n1 00:25:45.909 07:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:45.909 07:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:45.909 07:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:45.909 07:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:45.909 07:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.909 07:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:45.909 07:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:45.909 07:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:45.909 07:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:45.909 07:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:25:45.909 07:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:45.909 07:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:45.909 07:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:25:45.909 07:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:45.909 07:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:45.909 07:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:45.909 07:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:45.909 07:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDYwMDY2YTFkNzY2ZGJjY2Y5NmNmYTllMzE5NzU3MDS3GpaZ: 00:25:45.909 07:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTY4NzQwYWRlMDRhMDhiYWFhMDRiMjhkZTY4MjBmN2MvBSOz: 00:25:45.909 07:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:45.909 07:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:45.909 07:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDYwMDY2YTFkNzY2ZGJjY2Y5NmNmYTllMzE5NzU3MDS3GpaZ: 00:25:45.909 07:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTY4NzQwYWRlMDRhMDhiYWFhMDRiMjhkZTY4MjBmN2MvBSOz: ]] 00:25:45.909 07:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTY4NzQwYWRlMDRhMDhiYWFhMDRiMjhkZTY4MjBmN2MvBSOz: 00:25:45.909 07:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:25:45.909 07:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:45.909 07:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:45.909 
07:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:45.909 07:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:45.909 07:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:45.909 07:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:45.909 07:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:45.909 07:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.909 07:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:45.909 07:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:45.909 07:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:45.910 07:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:45.910 07:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:45.910 07:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:45.910 07:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:45.910 07:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:45.910 07:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:45.910 07:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:45.910 07:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:45.910 07:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:45.910 07:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:45.910 07:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:45.910 07:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.479 nvme0n1 00:25:46.479 07:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:46.479 07:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:46.479 07:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:46.479 07:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:46.479 07:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.479 07:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:46.479 07:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:46.479 07:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:46.479 07:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:46.479 07:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.479 07:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:46.479 07:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:46.479 07:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:25:46.479 07:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:46.479 07:21:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:46.479 07:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:46.479 07:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:46.479 07:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Mjg5MjQzZmFiNmEwODY3ZGI5ZGQ3YTkyMjQyZTZkYWE0NTgwMTMwNTk1MDVhNjQ34z8nYg==: 00:25:46.479 07:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjQ1YmU1ZjgxNzRkNzQ0NGYwOTZmMWVhYWYzNTc1NTA63hmQ: 00:25:46.479 07:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:46.479 07:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:46.479 07:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Mjg5MjQzZmFiNmEwODY3ZGI5ZGQ3YTkyMjQyZTZkYWE0NTgwMTMwNTk1MDVhNjQ34z8nYg==: 00:25:46.479 07:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjQ1YmU1ZjgxNzRkNzQ0NGYwOTZmMWVhYWYzNTc1NTA63hmQ: ]] 00:25:46.479 07:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjQ1YmU1ZjgxNzRkNzQ0NGYwOTZmMWVhYWYzNTc1NTA63hmQ: 00:25:46.479 07:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:25:46.479 07:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:46.479 07:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:46.479 07:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:46.479 07:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:46.479 07:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:46.479 07:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:46.479 07:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:46.479 07:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.479 07:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:46.479 07:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:46.479 07:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:46.479 07:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:46.479 07:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:46.479 07:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:46.479 07:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:46.479 07:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:46.479 07:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:46.480 07:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:46.480 07:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:46.480 07:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:46.480 07:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:46.480 07:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:46.480 07:21:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
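The `get_main_ns_ip` sequence traced above (`nvmf/common.sh@769`–`@783`) repeats before every `bdev_nvme_attach_controller` call. A standalone bash sketch of what that trace is doing, with the candidate table and indirect expansion taken from the trace but the empty-value fallback checks simplified (an assumption, not the full helper):

```shell
#!/usr/bin/env bash
# Sketch of get_main_ns_ip as seen in the xtrace: pick the env-var *name*
# for the active transport, then dereference it. TEST_TRANSPORT defaulting
# to tcp is an assumption for this sketch.
get_main_ns_ip() {
	local ip
	local -A ip_candidates=(
		[rdma]=NVMF_FIRST_TARGET_IP
		[tcp]=NVMF_INITIATOR_IP
	)
	ip=${ip_candidates[${TEST_TRANSPORT:-tcp}]}
	# ${!ip} is bash indirect expansion: resolve the variable whose name
	# is stored in $ip. In the log this yields 10.0.0.1.
	echo "${!ip}"
}

NVMF_INITIATOR_IP=10.0.0.1
get_main_ns_ip
```

In the real helper the `[[ -z ... ]]` checks visible in the trace guard against an unset transport or an empty resolved address before the final `echo`.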
00:25:46.738 nvme0n1 00:25:46.738 07:21:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:46.738 07:21:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:46.738 07:21:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:46.738 07:21:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:46.738 07:21:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.738 07:21:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:46.738 07:21:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:46.739 07:21:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:46.739 07:21:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:46.739 07:21:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.997 07:21:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:46.997 07:21:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:46.997 07:21:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:25:46.997 07:21:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:46.997 07:21:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:46.997 07:21:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:46.997 07:21:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:46.997 07:21:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:OWQ1ZWU2MjgzNjlhMmMyMjJhZDI5MmYwYjU3MzhiYTMzNmViYmMwNDg2ZDVjMjhmZGMwN2U4NDk3NmVjNDk2MYcacz8=: 00:25:46.997 07:21:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:46.997 07:21:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:46.997 07:21:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:46.997 07:21:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OWQ1ZWU2MjgzNjlhMmMyMjJhZDI5MmYwYjU3MzhiYTMzNmViYmMwNDg2ZDVjMjhmZGMwN2U4NDk3NmVjNDk2MYcacz8=: 00:25:46.997 07:21:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:46.997 07:21:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:25:46.997 07:21:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:46.998 07:21:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:46.998 07:21:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:46.998 07:21:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:46.998 07:21:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:46.998 07:21:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:46.998 07:21:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:46.998 07:21:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.998 07:21:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:46.998 07:21:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:46.998 07:21:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:46.998 
07:21:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:46.998 07:21:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:46.998 07:21:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:46.998 07:21:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:46.998 07:21:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:46.998 07:21:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:46.998 07:21:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:46.998 07:21:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:46.998 07:21:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:46.998 07:21:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:46.998 07:21:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:46.998 07:21:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.257 nvme0n1 00:25:47.257 07:21:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:47.257 07:21:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:47.257 07:21:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:47.257 07:21:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:47.257 07:21:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.257 07:21:51 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:47.257 07:21:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:47.257 07:21:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:47.257 07:21:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:47.257 07:21:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.257 07:21:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:47.257 07:21:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:47.257 07:21:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:47.257 07:21:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:25:47.257 07:21:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:47.257 07:21:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:47.257 07:21:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:47.257 07:21:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:47.257 07:21:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Mjk0NmE0NGU4NzRjNWM3ODA3NTEwMjM0YTIzNTRlYmR1lp2A: 00:25:47.257 07:21:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTlhMDc1ZGQ5OGMzYTRkZTlmMzAyYzM2ZTUxMjgzNmI4NmQ3ZDZlMjQ1NGYzODYyYWFlNTE0MWFiYjg3Yjk0NtdECCg=: 00:25:47.257 07:21:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:47.257 07:21:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:47.257 07:21:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:Mjk0NmE0NGU4NzRjNWM3ODA3NTEwMjM0YTIzNTRlYmR1lp2A: 00:25:47.257 07:21:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTlhMDc1ZGQ5OGMzYTRkZTlmMzAyYzM2ZTUxMjgzNmI4NmQ3ZDZlMjQ1NGYzODYyYWFlNTE0MWFiYjg3Yjk0NtdECCg=: ]] 00:25:47.257 07:21:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTlhMDc1ZGQ5OGMzYTRkZTlmMzAyYzM2ZTUxMjgzNmI4NmQ3ZDZlMjQ1NGYzODYyYWFlNTE0MWFiYjg3Yjk0NtdECCg=: 00:25:47.257 07:21:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:25:47.257 07:21:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:47.257 07:21:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:47.257 07:21:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:47.257 07:21:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:47.257 07:21:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:47.257 07:21:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:47.257 07:21:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:47.257 07:21:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.257 07:21:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:47.257 07:21:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:47.257 07:21:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:47.257 07:21:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:47.257 07:21:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:47.257 07:21:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:47.257 07:21:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:47.257 07:21:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:47.257 07:21:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:47.257 07:21:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:47.257 07:21:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:47.257 07:21:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:47.257 07:21:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:47.257 07:21:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:47.257 07:21:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.825 nvme0n1 00:25:47.825 07:21:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:47.825 07:21:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:47.825 07:21:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:47.825 07:21:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:47.825 07:21:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.825 07:21:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:48.085 07:21:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:48.085 07:21:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:48.085 07:21:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:48.085 07:21:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.085 07:21:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:48.085 07:21:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:48.085 07:21:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:25:48.085 07:21:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:48.085 07:21:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:48.085 07:21:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:48.085 07:21:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:48.085 07:21:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDNjY2UyZjViYTA3Yjg0MTc4NDAzZTk0YzgwNDkzMzVlZGZhZDBlNTY4NGI3YTJlfx3cbg==: 00:25:48.085 07:21:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmVkNzUxOTJhMjgzNjliZjUyNjM3YjRiZGY0ZWUxMTZhMGJiNmRkMzNlMjFkNmZmLaVF+g==: 00:25:48.085 07:21:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:48.085 07:21:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:48.085 07:21:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDNjY2UyZjViYTA3Yjg0MTc4NDAzZTk0YzgwNDkzMzVlZGZhZDBlNTY4NGI3YTJlfx3cbg==: 00:25:48.085 07:21:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmVkNzUxOTJhMjgzNjliZjUyNjM3YjRiZGY0ZWUxMTZhMGJiNmRkMzNlMjFkNmZmLaVF+g==: ]] 00:25:48.085 07:21:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:MmVkNzUxOTJhMjgzNjliZjUyNjM3YjRiZGY0ZWUxMTZhMGJiNmRkMzNlMjFkNmZmLaVF+g==: 00:25:48.085 07:21:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:25:48.085 07:21:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:48.086 07:21:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:48.086 07:21:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:48.086 07:21:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:48.086 07:21:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:48.086 07:21:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:48.086 07:21:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:48.086 07:21:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.086 07:21:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:48.086 07:21:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:48.086 07:21:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:48.086 07:21:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:48.086 07:21:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:48.086 07:21:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:48.086 07:21:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:48.086 07:21:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:48.086 07:21:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:48.086 07:21:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:48.086 07:21:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:48.086 07:21:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:48.086 07:21:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:48.086 07:21:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:48.086 07:21:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.653 nvme0n1 00:25:48.653 07:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:48.653 07:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:48.653 07:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:48.654 07:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:48.654 07:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.654 07:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:48.654 07:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:48.654 07:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:48.654 07:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:48.654 07:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.654 07:21:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:48.654 07:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:48.654 07:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:25:48.654 07:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:48.654 07:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:48.654 07:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:48.654 07:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:48.654 07:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDYwMDY2YTFkNzY2ZGJjY2Y5NmNmYTllMzE5NzU3MDS3GpaZ: 00:25:48.654 07:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTY4NzQwYWRlMDRhMDhiYWFhMDRiMjhkZTY4MjBmN2MvBSOz: 00:25:48.654 07:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:48.654 07:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:48.654 07:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDYwMDY2YTFkNzY2ZGJjY2Y5NmNmYTllMzE5NzU3MDS3GpaZ: 00:25:48.654 07:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTY4NzQwYWRlMDRhMDhiYWFhMDRiMjhkZTY4MjBmN2MvBSOz: ]] 00:25:48.654 07:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTY4NzQwYWRlMDRhMDhiYWFhMDRiMjhkZTY4MjBmN2MvBSOz: 00:25:48.654 07:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:25:48.654 07:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:48.654 07:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:48.654 07:21:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:48.654 07:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:48.654 07:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:48.654 07:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:48.654 07:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:48.654 07:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.654 07:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:48.654 07:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:48.654 07:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:48.654 07:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:48.654 07:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:48.654 07:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:48.654 07:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:48.654 07:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:48.654 07:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:48.654 07:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:48.654 07:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:48.654 07:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:48.654 07:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:48.654 07:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:48.654 07:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.223 nvme0n1 00:25:49.223 07:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:49.223 07:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:49.223 07:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:49.223 07:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:49.223 07:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.223 07:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:49.223 07:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:49.223 07:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:49.223 07:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:49.223 07:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.223 07:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:49.223 07:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:49.223 07:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:25:49.223 07:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:49.223 07:21:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:49.223 07:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:49.223 07:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:49.223 07:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Mjg5MjQzZmFiNmEwODY3ZGI5ZGQ3YTkyMjQyZTZkYWE0NTgwMTMwNTk1MDVhNjQ34z8nYg==: 00:25:49.223 07:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjQ1YmU1ZjgxNzRkNzQ0NGYwOTZmMWVhYWYzNTc1NTA63hmQ: 00:25:49.223 07:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:49.223 07:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:49.223 07:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Mjg5MjQzZmFiNmEwODY3ZGI5ZGQ3YTkyMjQyZTZkYWE0NTgwMTMwNTk1MDVhNjQ34z8nYg==: 00:25:49.223 07:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjQ1YmU1ZjgxNzRkNzQ0NGYwOTZmMWVhYWYzNTc1NTA63hmQ: ]] 00:25:49.223 07:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjQ1YmU1ZjgxNzRkNzQ0NGYwOTZmMWVhYWYzNTc1NTA63hmQ: 00:25:49.223 07:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:25:49.223 07:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:49.223 07:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:49.223 07:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:49.223 07:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:49.223 07:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:49.223 07:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:49.223 07:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:49.223 07:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.223 07:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:49.223 07:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:49.223 07:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:49.223 07:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:49.223 07:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:49.223 07:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:49.223 07:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:49.223 07:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:49.223 07:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:49.223 07:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:49.223 07:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:49.223 07:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:49.223 07:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:49.223 07:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:49.223 07:21:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
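The repeated `rpc_cmd bdev_nvme_attach_controller ... --dhchap-key keyN --dhchap-ctrlr-key ckeyN` calls above correspond to SPDK's `bdev_nvme_attach_controller` JSON-RPC method; the test later dumps one such request verbatim. As a hedged sketch (field names copied from that request dump; the `build_attach_request` helper and the `dhchap_key`/`dhchap_ctrlr_key` parameter names are assumptions, not taken from this log), the payload for one attach attempt could be assembled like this:

```python
import json

def build_attach_request(req_id, key=None, ctrlr_key=None):
    # Hypothetical helper: mirrors the fields shown in the JSON-RPC
    # request dump in this log (attach over TCP to the auth target).
    params = {
        "name": "nvme0",
        "trtype": "tcp",
        "traddr": "10.0.0.1",
        "adrfam": "ipv4",
        "trsvcid": "4420",
        "subnqn": "nqn.2024-02.io.spdk:cnode0",
        "hostnqn": "nqn.2024-02.io.spdk:host0",
    }
    if key:
        # Assumed JSON-RPC spelling of rpc.py's --dhchap-key flag.
        params["dhchap_key"] = key
    if ctrlr_key:
        # Assumed spelling of --dhchap-ctrlr-key.
        params["dhchap_ctrlr_key"] = ctrlr_key
    return {
        "jsonrpc": "2.0",
        "id": req_id,
        "method": "bdev_nvme_attach_controller",
        "params": params,
    }

req = build_attach_request(1, key="key3", ctrlr_key="ckey3")
print(json.dumps(req, indent=2))
```

When the negotiated digest/dhgroup or key does not match the target side, the log shows the call failing with JSON-RPC error code -5 ("Input/output error"), which is exactly what the `NOT rpc_cmd ...` negative-test wrappers further down expect.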
00:25:49.791 nvme0n1 00:25:49.791 07:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:49.791 07:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:49.791 07:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:49.791 07:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:49.791 07:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.791 07:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:50.051 07:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:50.051 07:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:50.051 07:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:50.051 07:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.051 07:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:50.051 07:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:50.051 07:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:25:50.051 07:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:50.051 07:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:50.051 07:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:50.051 07:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:50.051 07:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:OWQ1ZWU2MjgzNjlhMmMyMjJhZDI5MmYwYjU3MzhiYTMzNmViYmMwNDg2ZDVjMjhmZGMwN2U4NDk3NmVjNDk2MYcacz8=: 00:25:50.051 07:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:50.051 07:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:50.051 07:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:50.051 07:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OWQ1ZWU2MjgzNjlhMmMyMjJhZDI5MmYwYjU3MzhiYTMzNmViYmMwNDg2ZDVjMjhmZGMwN2U4NDk3NmVjNDk2MYcacz8=: 00:25:50.051 07:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:50.051 07:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:25:50.051 07:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:50.051 07:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:50.051 07:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:50.051 07:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:50.051 07:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:50.051 07:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:50.051 07:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:50.051 07:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.051 07:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:50.051 07:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:50.051 07:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:50.051 
07:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:50.051 07:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:50.051 07:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:50.051 07:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:50.051 07:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:50.051 07:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:50.051 07:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:50.051 07:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:50.051 07:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:50.051 07:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:50.051 07:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:50.051 07:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.619 nvme0n1 00:25:50.619 07:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:50.619 07:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:50.619 07:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:50.619 07:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:50.619 07:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.619 07:21:54 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:50.619 07:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:50.619 07:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:50.619 07:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:50.619 07:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.619 07:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:50.619 07:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:25:50.619 07:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:50.619 07:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:50.620 07:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:50.620 07:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:50.620 07:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDNjY2UyZjViYTA3Yjg0MTc4NDAzZTk0YzgwNDkzMzVlZGZhZDBlNTY4NGI3YTJlfx3cbg==: 00:25:50.620 07:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmVkNzUxOTJhMjgzNjliZjUyNjM3YjRiZGY0ZWUxMTZhMGJiNmRkMzNlMjFkNmZmLaVF+g==: 00:25:50.620 07:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:50.620 07:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:50.620 07:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDNjY2UyZjViYTA3Yjg0MTc4NDAzZTk0YzgwNDkzMzVlZGZhZDBlNTY4NGI3YTJlfx3cbg==: 00:25:50.620 07:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmVkNzUxOTJhMjgzNjliZjUyNjM3YjRiZGY0ZWUxMTZhMGJiNmRkMzNlMjFkNmZmLaVF+g==: ]] 00:25:50.620 
07:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmVkNzUxOTJhMjgzNjliZjUyNjM3YjRiZGY0ZWUxMTZhMGJiNmRkMzNlMjFkNmZmLaVF+g==: 00:25:50.620 07:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:50.620 07:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:50.620 07:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.620 07:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:50.620 07:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:25:50.620 07:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:50.620 07:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:50.620 07:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:50.620 07:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:50.620 07:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:50.620 07:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:50.620 07:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:50.620 07:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:50.620 07:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:50.620 07:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:50.620 07:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 00:25:50.620 07:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:25:50.620 07:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:25:50.620 07:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:25:50.620 07:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:50.620 07:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:25:50.620 07:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:50.620 07:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:25:50.620 07:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:50.620 07:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.620 request: 00:25:50.620 { 00:25:50.620 "name": "nvme0", 00:25:50.620 "trtype": "tcp", 00:25:50.620 "traddr": "10.0.0.1", 00:25:50.620 "adrfam": "ipv4", 00:25:50.620 "trsvcid": "4420", 00:25:50.620 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:25:50.620 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:25:50.620 "prchk_reftag": false, 00:25:50.620 "prchk_guard": false, 00:25:50.620 "hdgst": false, 00:25:50.620 "ddgst": false, 00:25:50.620 "allow_unrecognized_csi": false, 00:25:50.620 "method": "bdev_nvme_attach_controller", 00:25:50.620 "req_id": 1 00:25:50.620 } 00:25:50.620 Got JSON-RPC error response 00:25:50.620 response: 00:25:50.620 { 00:25:50.620 "code": -5, 00:25:50.620 "message": "Input/output 
error" 00:25:50.620 } 00:25:50.620 07:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:25:50.620 07:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:25:50.620 07:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:50.620 07:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:50.620 07:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:50.620 07:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:25:50.620 07:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:25:50.620 07:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:50.620 07:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.620 07:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:50.620 07:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:25:50.620 07:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:25:50.620 07:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:50.620 07:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:50.620 07:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:50.620 07:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:50.620 07:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:50.620 07:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:50.620 07:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z 
NVMF_INITIATOR_IP ]] 00:25:50.620 07:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:50.620 07:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:50.620 07:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:50.620 07:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:50.620 07:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:25:50.620 07:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:50.620 07:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:25:50.620 07:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:50.620 07:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:25:50.620 07:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:50.620 07:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:50.620 07:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:50.620 07:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.880 request: 00:25:50.880 { 00:25:50.880 "name": "nvme0", 00:25:50.880 "trtype": "tcp", 00:25:50.880 "traddr": "10.0.0.1", 
00:25:50.880 "adrfam": "ipv4", 00:25:50.880 "trsvcid": "4420", 00:25:50.880 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:25:50.880 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:25:50.880 "prchk_reftag": false, 00:25:50.880 "prchk_guard": false, 00:25:50.880 "hdgst": false, 00:25:50.880 "ddgst": false, 00:25:50.880 "dhchap_key": "key2", 00:25:50.880 "allow_unrecognized_csi": false, 00:25:50.880 "method": "bdev_nvme_attach_controller", 00:25:50.880 "req_id": 1 00:25:50.880 } 00:25:50.880 Got JSON-RPC error response 00:25:50.880 response: 00:25:50.880 { 00:25:50.880 "code": -5, 00:25:50.880 "message": "Input/output error" 00:25:50.880 } 00:25:50.880 07:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:25:50.880 07:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:25:50.880 07:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:50.880 07:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:50.880 07:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:50.880 07:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:25:50.880 07:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:25:50.880 07:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:50.880 07:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.880 07:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:50.880 07:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:25:50.880 07:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:25:50.880 07:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:50.880 07:21:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:50.880 07:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:50.880 07:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:50.880 07:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:50.880 07:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:50.880 07:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:50.880 07:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:50.880 07:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:50.880 07:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:50.880 07:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:50.880 07:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:25:50.880 07:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:50.880 07:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:25:50.880 07:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:50.880 07:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:25:50.880 07:21:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:50.880 07:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:50.880 07:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:50.880 07:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.880 request: 00:25:50.880 { 00:25:50.880 "name": "nvme0", 00:25:50.880 "trtype": "tcp", 00:25:50.881 "traddr": "10.0.0.1", 00:25:50.881 "adrfam": "ipv4", 00:25:50.881 "trsvcid": "4420", 00:25:50.881 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:25:50.881 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:25:50.881 "prchk_reftag": false, 00:25:50.881 "prchk_guard": false, 00:25:50.881 "hdgst": false, 00:25:50.881 "ddgst": false, 00:25:50.881 "dhchap_key": "key1", 00:25:50.881 "dhchap_ctrlr_key": "ckey2", 00:25:50.881 "allow_unrecognized_csi": false, 00:25:50.881 "method": "bdev_nvme_attach_controller", 00:25:50.881 "req_id": 1 00:25:50.881 } 00:25:50.881 Got JSON-RPC error response 00:25:50.881 response: 00:25:50.881 { 00:25:50.881 "code": -5, 00:25:50.881 "message": "Input/output error" 00:25:50.881 } 00:25:50.881 07:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:25:50.881 07:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:25:50.881 07:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:50.881 07:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:50.881 07:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:50.881 07:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@128 -- # get_main_ns_ip 00:25:50.881 07:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:50.881 07:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:50.881 07:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:50.881 07:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:50.881 07:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:50.881 07:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:50.881 07:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:50.881 07:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:50.881 07:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:50.881 07:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:50.881 07:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:25:50.881 07:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:50.881 07:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.141 nvme0n1 00:25:51.141 07:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:51.141 07:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:25:51.141 07:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:51.141 07:21:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:51.141 07:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:51.141 07:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:51.141 07:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDYwMDY2YTFkNzY2ZGJjY2Y5NmNmYTllMzE5NzU3MDS3GpaZ: 00:25:51.141 07:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTY4NzQwYWRlMDRhMDhiYWFhMDRiMjhkZTY4MjBmN2MvBSOz: 00:25:51.141 07:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:51.141 07:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:51.141 07:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDYwMDY2YTFkNzY2ZGJjY2Y5NmNmYTllMzE5NzU3MDS3GpaZ: 00:25:51.141 07:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTY4NzQwYWRlMDRhMDhiYWFhMDRiMjhkZTY4MjBmN2MvBSOz: ]] 00:25:51.141 07:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTY4NzQwYWRlMDRhMDhiYWFhMDRiMjhkZTY4MjBmN2MvBSOz: 00:25:51.141 07:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:51.141 07:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:51.141 07:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.141 07:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:51.141 07:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:25:51.141 07:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:25:51.141 07:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:51.141 07:21:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.141 07:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:51.141 07:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:51.141 07:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:51.141 07:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:25:51.141 07:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:51.141 07:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:25:51.141 07:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:51.141 07:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:25:51.141 07:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:51.141 07:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:51.141 07:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:51.141 07:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.141 request: 00:25:51.141 { 00:25:51.141 "name": "nvme0", 00:25:51.141 "dhchap_key": "key1", 00:25:51.141 "dhchap_ctrlr_key": "ckey2", 00:25:51.141 "method": "bdev_nvme_set_keys", 00:25:51.141 "req_id": 1 00:25:51.141 } 00:25:51.141 Got JSON-RPC error response 00:25:51.141 response: 00:25:51.141 { 00:25:51.141 "code": -13, 00:25:51.141 "message": "Permission denied" 00:25:51.141 } 00:25:51.141 
07:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:25:51.141 07:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:25:51.141 07:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:51.141 07:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:51.141 07:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:51.141 07:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:25:51.141 07:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:25:51.141 07:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:51.141 07:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.141 07:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:51.400 07:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:25:51.400 07:21:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:25:52.336 07:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:25:52.336 07:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:25:52.336 07:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:52.336 07:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.336 07:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:52.336 07:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:25:52.336 07:21:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:25:53.274 07:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:25:53.274 07:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:25:53.274 07:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:53.274 07:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.274 07:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:53.274 07:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:25:53.274 07:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:25:53.274 07:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:53.274 07:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:53.274 07:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:53.274 07:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:53.274 07:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDNjY2UyZjViYTA3Yjg0MTc4NDAzZTk0YzgwNDkzMzVlZGZhZDBlNTY4NGI3YTJlfx3cbg==: 00:25:53.274 07:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmVkNzUxOTJhMjgzNjliZjUyNjM3YjRiZGY0ZWUxMTZhMGJiNmRkMzNlMjFkNmZmLaVF+g==: 00:25:53.274 07:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:53.274 07:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:53.274 07:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDNjY2UyZjViYTA3Yjg0MTc4NDAzZTk0YzgwNDkzMzVlZGZhZDBlNTY4NGI3YTJlfx3cbg==: 00:25:53.274 07:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmVkNzUxOTJhMjgzNjliZjUyNjM3YjRiZGY0ZWUxMTZhMGJiNmRkMzNlMjFkNmZmLaVF+g==: ]] 00:25:53.274 07:21:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmVkNzUxOTJhMjgzNjliZjUyNjM3YjRiZGY0ZWUxMTZhMGJiNmRkMzNlMjFkNmZmLaVF+g==: 00:25:53.274 07:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:25:53.274 07:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:53.274 07:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:53.274 07:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:53.274 07:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:53.274 07:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:53.274 07:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:53.274 07:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:53.274 07:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:53.274 07:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:53.274 07:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:53.274 07:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:25:53.274 07:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:53.274 07:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.533 nvme0n1 00:25:53.533 07:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:53.533 07:21:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:25:53.533 07:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:53.533 07:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:53.533 07:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:53.533 07:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:53.533 07:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDYwMDY2YTFkNzY2ZGJjY2Y5NmNmYTllMzE5NzU3MDS3GpaZ: 00:25:53.533 07:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTY4NzQwYWRlMDRhMDhiYWFhMDRiMjhkZTY4MjBmN2MvBSOz: 00:25:53.533 07:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:53.534 07:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:53.534 07:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDYwMDY2YTFkNzY2ZGJjY2Y5NmNmYTllMzE5NzU3MDS3GpaZ: 00:25:53.534 07:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTY4NzQwYWRlMDRhMDhiYWFhMDRiMjhkZTY4MjBmN2MvBSOz: ]] 00:25:53.534 07:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTY4NzQwYWRlMDRhMDhiYWFhMDRiMjhkZTY4MjBmN2MvBSOz: 00:25:53.534 07:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:25:53.534 07:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:25:53.534 07:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:25:53.534 07:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:25:53.534 
07:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:53.534 07:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:25:53.534 07:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:53.534 07:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:25:53.534 07:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:53.534 07:21:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.534 request: 00:25:53.534 { 00:25:53.534 "name": "nvme0", 00:25:53.534 "dhchap_key": "key2", 00:25:53.534 "dhchap_ctrlr_key": "ckey1", 00:25:53.534 "method": "bdev_nvme_set_keys", 00:25:53.534 "req_id": 1 00:25:53.534 } 00:25:53.534 Got JSON-RPC error response 00:25:53.534 response: 00:25:53.534 { 00:25:53.534 "code": -13, 00:25:53.534 "message": "Permission denied" 00:25:53.534 } 00:25:53.534 07:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:25:53.534 07:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:25:53.534 07:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:53.534 07:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:53.534 07:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:53.534 07:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:25:53.534 07:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:25:53.534 07:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:53.534 07:21:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.534 07:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:53.534 07:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:25:53.534 07:21:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:25:54.912 07:21:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:25:54.912 07:21:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:25:54.912 07:21:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:54.912 07:21:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.912 07:21:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:54.912 07:21:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:25:54.912 07:21:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:25:54.912 07:21:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:25:54.912 07:21:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:25:54.912 07:21:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:54.912 07:21:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:25:54.912 07:21:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:54.912 07:21:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:25:54.912 07:21:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:54.912 07:21:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:54.912 rmmod nvme_tcp 00:25:54.912 rmmod nvme_fabrics 00:25:54.912 07:21:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # 
modprobe -v -r nvme-fabrics 00:25:54.912 07:21:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:25:54.912 07:21:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:25:54.912 07:21:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 1322531 ']' 00:25:54.912 07:21:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 1322531 00:25:54.912 07:21:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@952 -- # '[' -z 1322531 ']' 00:25:54.912 07:21:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # kill -0 1322531 00:25:54.912 07:21:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@957 -- # uname 00:25:54.912 07:21:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:25:54.912 07:21:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1322531 00:25:54.912 07:21:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:25:54.912 07:21:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:25:54.912 07:21:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1322531' 00:25:54.912 killing process with pid 1322531 00:25:54.912 07:21:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@971 -- # kill 1322531 00:25:54.912 07:21:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@976 -- # wait 1322531 00:25:54.912 07:21:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:54.912 07:21:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:54.912 07:21:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:54.912 07:21:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 
00:25:54.912 07:21:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:25:54.912 07:21:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:54.912 07:21:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:25:54.912 07:21:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:54.912 07:21:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:54.912 07:21:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:54.912 07:21:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:54.912 07:21:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:57.449 07:22:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:57.449 07:22:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:25:57.449 07:22:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:25:57.449 07:22:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:25:57.449 07:22:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:25:57.449 07:22:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:25:57.449 07:22:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:57.449 07:22:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:25:57.449 07:22:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:25:57.449 07:22:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:57.449 07:22:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:25:57.449 07:22:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:25:57.449 07:22:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:25:59.984 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:25:59.984 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:25:59.984 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:25:59.984 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:25:59.984 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:25:59.984 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:25:59.984 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:25:59.984 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:25:59.984 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:25:59.984 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:25:59.984 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:25:59.984 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:25:59.984 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:25:59.984 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:25:59.984 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:25:59.984 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:26:00.922 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:26:00.922 07:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.Skf /tmp/spdk.key-null.y2a /tmp/spdk.key-sha256.aSz /tmp/spdk.key-sha384.CN7 /tmp/spdk.key-sha512.GMF 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:26:00.922 07:22:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:26:04.323 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:26:04.323 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:26:04.323 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:26:04.323 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:26:04.323 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:26:04.323 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:26:04.323 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:26:04.323 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:26:04.323 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:26:04.323 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:26:04.323 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:26:04.323 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:26:04.323 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:26:04.323 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:26:04.323 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:26:04.323 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:26:04.323 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:26:04.323 00:26:04.323 real 0m54.200s 00:26:04.323 user 0m48.734s 00:26:04.323 sys 0m12.743s 00:26:04.323 07:22:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1128 -- # xtrace_disable 00:26:04.323 07:22:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.323 ************************************ 00:26:04.323 END TEST nvmf_auth_host 00:26:04.323 ************************************ 00:26:04.323 07:22:08 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 
00:26:04.323 07:22:08 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:26:04.323 07:22:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:26:04.323 07:22:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:26:04.323 07:22:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.323 ************************************ 00:26:04.323 START TEST nvmf_digest 00:26:04.323 ************************************ 00:26:04.323 07:22:08 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:26:04.323 * Looking for test storage... 00:26:04.323 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:04.323 07:22:08 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:26:04.323 07:22:08 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1691 -- # lcov --version 00:26:04.323 07:22:08 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:26:04.323 07:22:08 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:26:04.323 07:22:08 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:04.323 07:22:08 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:04.323 07:22:08 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:04.323 07:22:08 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:26:04.323 07:22:08 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:26:04.324 07:22:08 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:26:04.324 07:22:08 nvmf_tcp.nvmf_host.nvmf_digest -- 
scripts/common.sh@337 -- # read -ra ver2 00:26:04.324 07:22:08 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:26:04.324 07:22:08 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:26:04.324 07:22:08 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:26:04.324 07:22:08 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:04.324 07:22:08 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:26:04.324 07:22:08 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:26:04.324 07:22:08 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:04.324 07:22:08 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:04.324 07:22:08 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:26:04.324 07:22:08 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:26:04.324 07:22:08 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:04.324 07:22:08 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:26:04.324 07:22:08 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:26:04.324 07:22:08 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:26:04.324 07:22:08 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:26:04.324 07:22:08 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:04.324 07:22:08 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:26:04.324 07:22:08 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:26:04.324 07:22:08 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:04.324 07:22:08 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 
00:26:04.324 07:22:08 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:26:04.324 07:22:08 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:04.324 07:22:08 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:26:04.324 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:04.324 --rc genhtml_branch_coverage=1 00:26:04.324 --rc genhtml_function_coverage=1 00:26:04.324 --rc genhtml_legend=1 00:26:04.324 --rc geninfo_all_blocks=1 00:26:04.324 --rc geninfo_unexecuted_blocks=1 00:26:04.324 00:26:04.324 ' 00:26:04.324 07:22:08 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:26:04.324 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:04.324 --rc genhtml_branch_coverage=1 00:26:04.324 --rc genhtml_function_coverage=1 00:26:04.324 --rc genhtml_legend=1 00:26:04.324 --rc geninfo_all_blocks=1 00:26:04.324 --rc geninfo_unexecuted_blocks=1 00:26:04.324 00:26:04.324 ' 00:26:04.324 07:22:08 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:26:04.324 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:04.324 --rc genhtml_branch_coverage=1 00:26:04.324 --rc genhtml_function_coverage=1 00:26:04.324 --rc genhtml_legend=1 00:26:04.324 --rc geninfo_all_blocks=1 00:26:04.324 --rc geninfo_unexecuted_blocks=1 00:26:04.324 00:26:04.324 ' 00:26:04.324 07:22:08 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:26:04.324 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:04.324 --rc genhtml_branch_coverage=1 00:26:04.324 --rc genhtml_function_coverage=1 00:26:04.324 --rc genhtml_legend=1 00:26:04.324 --rc geninfo_all_blocks=1 00:26:04.324 --rc geninfo_unexecuted_blocks=1 00:26:04.324 00:26:04.324 ' 00:26:04.324 07:22:08 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # 
source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:04.324 07:22:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:26:04.324 07:22:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:04.324 07:22:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:04.324 07:22:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:04.324 07:22:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:04.324 07:22:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:04.324 07:22:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:04.324 07:22:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:04.324 07:22:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:04.324 07:22:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:04.324 07:22:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:04.324 07:22:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:26:04.324 07:22:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:26:04.324 07:22:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:04.324 07:22:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:04.324 07:22:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:04.324 07:22:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:04.324 07:22:08 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:04.324 07:22:08 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:26:04.324 07:22:08 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:04.324 07:22:08 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:04.324 07:22:08 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:04.324 07:22:08 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:04.324 07:22:08 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:04.324 07:22:08 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:04.324 07:22:08 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:26:04.324 07:22:08 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:04.324 07:22:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:26:04.324 07:22:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:04.324 07:22:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:04.324 07:22:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:04.324 07:22:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:04.324 07:22:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:26:04.324 07:22:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:04.324 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:04.324 07:22:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:04.324 07:22:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:04.325 07:22:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:04.325 07:22:08 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:26:04.325 07:22:08 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:26:04.325 07:22:08 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:26:04.325 07:22:08 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:26:04.325 07:22:08 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:26:04.325 07:22:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:04.325 07:22:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:04.325 07:22:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:04.325 07:22:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:04.325 07:22:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:04.325 07:22:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:04.325 07:22:08 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:04.325 07:22:08 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:04.325 07:22:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:04.325 07:22:08 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:04.325 07:22:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:26:04.325 07:22:08 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:26:10.895 07:22:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:10.895 07:22:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:26:10.895 07:22:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:10.895 07:22:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:10.895 07:22:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:10.895 07:22:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:10.895 07:22:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:10.895 07:22:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:26:10.895 07:22:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:10.895 07:22:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:26:10.895 07:22:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:26:10.895 07:22:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:26:10.895 07:22:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:26:10.895 07:22:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:26:10.895 07:22:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:26:10.896 07:22:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:10.896 07:22:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:10.896 07:22:14 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:10.896 07:22:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:10.896 07:22:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:10.896 07:22:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:10.896 07:22:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:10.896 07:22:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:10.896 07:22:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:10.896 07:22:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:10.896 07:22:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:10.896 07:22:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:10.896 07:22:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:10.896 07:22:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:10.896 07:22:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:10.896 07:22:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:10.896 07:22:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:10.896 07:22:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:10.896 07:22:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:10.896 07:22:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- 
# echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:26:10.896 Found 0000:86:00.0 (0x8086 - 0x159b) 00:26:10.896 07:22:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:10.896 07:22:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:10.896 07:22:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:10.896 07:22:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:10.896 07:22:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:10.896 07:22:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:10.896 07:22:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:26:10.896 Found 0000:86:00.1 (0x8086 - 0x159b) 00:26:10.896 07:22:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:10.896 07:22:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:10.896 07:22:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:10.896 07:22:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:10.896 07:22:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:10.896 07:22:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:10.896 07:22:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:10.896 07:22:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:10.896 07:22:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:10.896 07:22:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:10.896 07:22:14 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:10.896 07:22:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:10.896 07:22:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:10.896 07:22:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:10.896 07:22:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:10.896 07:22:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:26:10.896 Found net devices under 0000:86:00.0: cvl_0_0 00:26:10.896 07:22:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:10.896 07:22:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:10.896 07:22:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:10.896 07:22:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:10.896 07:22:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:10.896 07:22:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:10.896 07:22:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:10.896 07:22:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:10.896 07:22:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:26:10.896 Found net devices under 0000:86:00.1: cvl_0_1 00:26:10.896 07:22:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:10.896 07:22:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:10.896 07:22:14 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@442 -- # is_hw=yes 00:26:10.896 07:22:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:10.896 07:22:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:10.896 07:22:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:10.896 07:22:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:10.896 07:22:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:10.896 07:22:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:10.896 07:22:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:10.896 07:22:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:10.896 07:22:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:10.896 07:22:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:10.896 07:22:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:10.896 07:22:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:10.896 07:22:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:10.896 07:22:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:10.896 07:22:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:10.896 07:22:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:10.896 07:22:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:10.896 07:22:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 
00:26:10.896 07:22:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:10.896 07:22:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:10.896 07:22:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:10.896 07:22:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:10.896 07:22:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:10.896 07:22:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:10.896 07:22:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:10.896 07:22:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:10.896 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:10.896 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.423 ms 00:26:10.896 00:26:10.896 --- 10.0.0.2 ping statistics --- 00:26:10.896 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:10.896 rtt min/avg/max/mdev = 0.423/0.423/0.423/0.000 ms 00:26:10.896 07:22:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:10.896 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:10.896 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.128 ms 00:26:10.896 00:26:10.896 --- 10.0.0.1 ping statistics --- 00:26:10.896 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:10.896 rtt min/avg/max/mdev = 0.128/0.128/0.128/0.000 ms 00:26:10.896 07:22:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:10.896 07:22:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # return 0 00:26:10.896 07:22:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:10.896 07:22:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:10.897 07:22:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:10.897 07:22:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:10.897 07:22:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:10.897 07:22:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:10.897 07:22:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:10.897 07:22:14 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:26:10.897 07:22:14 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:26:10.897 07:22:14 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:26:10.897 07:22:14 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:26:10.897 07:22:14 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1109 -- # xtrace_disable 00:26:10.897 07:22:14 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:26:10.897 ************************************ 00:26:10.897 START TEST nvmf_digest_clean 00:26:10.897 ************************************ 00:26:10.897 
07:22:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1127 -- # run_digest 00:26:10.897 07:22:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:26:10.897 07:22:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:26:10.897 07:22:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:26:10.897 07:22:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:26:10.897 07:22:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:26:10.897 07:22:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:10.897 07:22:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:10.897 07:22:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:10.897 07:22:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=1336484 00:26:10.897 07:22:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 1336484 00:26:10.897 07:22:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:26:10.897 07:22:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 1336484 ']' 00:26:10.897 07:22:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:10.897 07:22:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:10.897 07:22:14 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:10.897 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:10.897 07:22:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:10.897 07:22:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:10.897 [2024-11-20 07:22:14.623773] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 00:26:10.897 [2024-11-20 07:22:14.623813] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:10.897 [2024-11-20 07:22:14.701147] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:10.897 [2024-11-20 07:22:14.742965] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:10.897 [2024-11-20 07:22:14.743001] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:10.897 [2024-11-20 07:22:14.743010] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:10.897 [2024-11-20 07:22:14.743016] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:10.897 [2024-11-20 07:22:14.743021] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:26:10.897 [2024-11-20 07:22:14.743583] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:11.156 07:22:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:11.156 07:22:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:26:11.156 07:22:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:11.156 07:22:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:11.156 07:22:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:11.156 07:22:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:11.156 07:22:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:26:11.157 07:22:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:26:11.157 07:22:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:26:11.157 07:22:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:11.157 07:22:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:11.157 null0 00:26:11.157 [2024-11-20 07:22:15.575363] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:11.157 [2024-11-20 07:22:15.599546] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:11.157 07:22:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:11.157 07:22:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 
00:26:11.157 07:22:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:11.157 07:22:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:11.157 07:22:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:26:11.157 07:22:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:26:11.157 07:22:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:26:11.157 07:22:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:11.157 07:22:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1336551 00:26:11.157 07:22:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1336551 /var/tmp/bperf.sock 00:26:11.157 07:22:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:26:11.157 07:22:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 1336551 ']' 00:26:11.157 07:22:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:11.157 07:22:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:11.157 07:22:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:11.157 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:26:11.157 07:22:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:11.157 07:22:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:11.157 [2024-11-20 07:22:15.654136] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 00:26:11.157 [2024-11-20 07:22:15.654178] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1336551 ] 00:26:11.416 [2024-11-20 07:22:15.731021] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:11.416 [2024-11-20 07:22:15.774563] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:11.416 07:22:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:11.416 07:22:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:26:11.416 07:22:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:11.416 07:22:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:11.416 07:22:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:11.675 07:22:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:11.675 07:22:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 
-s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:11.934 nvme0n1 00:26:11.934 07:22:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:11.934 07:22:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:11.934 Running I/O for 2 seconds... 00:26:14.248 25620.00 IOPS, 100.08 MiB/s [2024-11-20T06:22:18.804Z] 24900.00 IOPS, 97.27 MiB/s 00:26:14.248 Latency(us) 00:26:14.248 [2024-11-20T06:22:18.804Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:14.248 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:26:14.248 nvme0n1 : 2.01 24892.45 97.24 0.00 0.00 5137.22 2578.70 11910.46 00:26:14.248 [2024-11-20T06:22:18.804Z] =================================================================================================================== 00:26:14.248 [2024-11-20T06:22:18.804Z] Total : 24892.45 97.24 0.00 0.00 5137.22 2578.70 11910.46 00:26:14.248 { 00:26:14.248 "results": [ 00:26:14.248 { 00:26:14.248 "job": "nvme0n1", 00:26:14.248 "core_mask": "0x2", 00:26:14.248 "workload": "randread", 00:26:14.248 "status": "finished", 00:26:14.248 "queue_depth": 128, 00:26:14.248 "io_size": 4096, 00:26:14.248 "runtime": 2.007115, 00:26:14.248 "iops": 24892.445126462608, 00:26:14.248 "mibps": 97.23611377524456, 00:26:14.248 "io_failed": 0, 00:26:14.248 "io_timeout": 0, 00:26:14.248 "avg_latency_us": 5137.219064645652, 00:26:14.248 "min_latency_us": 2578.6991304347825, 00:26:14.248 "max_latency_us": 11910.455652173912 00:26:14.248 } 00:26:14.248 ], 00:26:14.248 "core_count": 1 00:26:14.248 } 00:26:14.248 07:22:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:14.248 07:22:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # 
get_accel_stats 00:26:14.248 07:22:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:14.248 07:22:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:14.248 | select(.opcode=="crc32c") 00:26:14.248 | "\(.module_name) \(.executed)"' 00:26:14.248 07:22:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:14.248 07:22:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:14.248 07:22:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:14.248 07:22:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:14.248 07:22:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:14.248 07:22:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1336551 00:26:14.248 07:22:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 1336551 ']' 00:26:14.248 07:22:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 1336551 00:26:14.248 07:22:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname 00:26:14.248 07:22:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:26:14.248 07:22:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1336551 00:26:14.248 07:22:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:26:14.248 07:22:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:26:14.248 07:22:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1336551' 00:26:14.248 killing process with pid 1336551 00:26:14.248 07:22:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 1336551 00:26:14.248 Received shutdown signal, test time was about 2.000000 seconds 00:26:14.248 00:26:14.248 Latency(us) 00:26:14.248 [2024-11-20T06:22:18.805Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:14.249 [2024-11-20T06:22:18.805Z] =================================================================================================================== 00:26:14.249 [2024-11-20T06:22:18.805Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:14.249 07:22:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 1336551 00:26:14.508 07:22:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:26:14.508 07:22:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:14.508 07:22:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:14.508 07:22:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:26:14.508 07:22:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:26:14.508 07:22:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:26:14.508 07:22:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:14.509 07:22:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1337201 00:26:14.509 07:22:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # 
waitforlisten 1337201 /var/tmp/bperf.sock 00:26:14.509 07:22:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:26:14.509 07:22:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 1337201 ']' 00:26:14.509 07:22:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:14.509 07:22:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:14.509 07:22:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:14.509 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:14.509 07:22:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:14.509 07:22:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:14.509 [2024-11-20 07:22:18.914054] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 00:26:14.509 [2024-11-20 07:22:18.914102] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1337201 ] 00:26:14.509 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:14.509 Zero copy mechanism will not be used. 
00:26:14.509 [2024-11-20 07:22:18.988053] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:14.509 [2024-11-20 07:22:19.031147] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:14.770 07:22:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:14.770 07:22:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:26:14.770 07:22:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:14.770 07:22:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:14.770 07:22:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:15.028 07:22:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:15.028 07:22:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:15.287 nvme0n1 00:26:15.287 07:22:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:15.287 07:22:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:15.287 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:15.287 Zero copy mechanism will not be used. 00:26:15.287 Running I/O for 2 seconds... 
00:26:17.601 5293.00 IOPS, 661.62 MiB/s [2024-11-20T06:22:22.157Z] 5627.00 IOPS, 703.38 MiB/s 00:26:17.601 Latency(us) 00:26:17.601 [2024-11-20T06:22:22.157Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:17.601 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:26:17.601 nvme0n1 : 2.00 5625.28 703.16 0.00 0.00 2841.42 669.61 11169.61 00:26:17.601 [2024-11-20T06:22:22.157Z] =================================================================================================================== 00:26:17.601 [2024-11-20T06:22:22.157Z] Total : 5625.28 703.16 0.00 0.00 2841.42 669.61 11169.61 00:26:17.601 { 00:26:17.601 "results": [ 00:26:17.601 { 00:26:17.601 "job": "nvme0n1", 00:26:17.601 "core_mask": "0x2", 00:26:17.601 "workload": "randread", 00:26:17.601 "status": "finished", 00:26:17.601 "queue_depth": 16, 00:26:17.601 "io_size": 131072, 00:26:17.601 "runtime": 2.003456, 00:26:17.601 "iops": 5625.279516994633, 00:26:17.601 "mibps": 703.1599396243291, 00:26:17.601 "io_failed": 0, 00:26:17.601 "io_timeout": 0, 00:26:17.601 "avg_latency_us": 2841.415947841518, 00:26:17.601 "min_latency_us": 669.6069565217391, 00:26:17.601 "max_latency_us": 11169.613913043479 00:26:17.601 } 00:26:17.601 ], 00:26:17.601 "core_count": 1 00:26:17.601 } 00:26:17.601 07:22:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:17.601 07:22:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:26:17.601 07:22:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:17.601 07:22:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:17.601 | select(.opcode=="crc32c") 00:26:17.601 | "\(.module_name) \(.executed)"' 00:26:17.601 07:22:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:17.601 07:22:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:17.601 07:22:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:17.601 07:22:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:17.601 07:22:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:17.601 07:22:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1337201 00:26:17.601 07:22:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 1337201 ']' 00:26:17.601 07:22:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 1337201 00:26:17.601 07:22:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname 00:26:17.601 07:22:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:26:17.601 07:22:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1337201 00:26:17.601 07:22:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:26:17.601 07:22:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:26:17.601 07:22:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1337201' 00:26:17.601 killing process with pid 1337201 00:26:17.601 07:22:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 1337201 00:26:17.601 Received shutdown signal, test time was about 2.000000 seconds 
00:26:17.601 00:26:17.601 Latency(us) 00:26:17.601 [2024-11-20T06:22:22.157Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:17.601 [2024-11-20T06:22:22.157Z] =================================================================================================================== 00:26:17.601 [2024-11-20T06:22:22.157Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:17.601 07:22:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 1337201 00:26:17.861 07:22:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:26:17.861 07:22:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:17.861 07:22:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:17.861 07:22:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:26:17.861 07:22:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:26:17.861 07:22:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:26:17.861 07:22:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:17.861 07:22:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1337672 00:26:17.861 07:22:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1337672 /var/tmp/bperf.sock 00:26:17.861 07:22:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:26:17.861 07:22:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 1337672 ']' 00:26:17.861 07:22:22 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:17.861 07:22:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:17.861 07:22:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:17.861 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:17.861 07:22:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:17.861 07:22:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:17.861 [2024-11-20 07:22:22.213439] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 00:26:17.861 [2024-11-20 07:22:22.213486] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1337672 ] 00:26:17.861 [2024-11-20 07:22:22.286349] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:17.861 [2024-11-20 07:22:22.327889] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:17.861 07:22:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:17.861 07:22:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:26:17.861 07:22:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:17.861 07:22:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:17.861 07:22:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:18.120 07:22:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:18.120 07:22:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:18.379 nvme0n1 00:26:18.379 07:22:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:18.379 07:22:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:18.638 Running I/O for 2 seconds... 
00:26:20.510 27340.00 IOPS, 106.80 MiB/s [2024-11-20T06:22:25.066Z] 27600.00 IOPS, 107.81 MiB/s 00:26:20.510 Latency(us) 00:26:20.510 [2024-11-20T06:22:25.066Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:20.510 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:26:20.510 nvme0n1 : 2.01 27615.27 107.87 0.00 0.00 4628.75 1837.86 15044.79 00:26:20.510 [2024-11-20T06:22:25.066Z] =================================================================================================================== 00:26:20.510 [2024-11-20T06:22:25.066Z] Total : 27615.27 107.87 0.00 0.00 4628.75 1837.86 15044.79 00:26:20.510 { 00:26:20.510 "results": [ 00:26:20.510 { 00:26:20.510 "job": "nvme0n1", 00:26:20.510 "core_mask": "0x2", 00:26:20.510 "workload": "randwrite", 00:26:20.510 "status": "finished", 00:26:20.510 "queue_depth": 128, 00:26:20.510 "io_size": 4096, 00:26:20.510 "runtime": 2.005883, 00:26:20.510 "iops": 27615.269684223855, 00:26:20.510 "mibps": 107.87214720399943, 00:26:20.510 "io_failed": 0, 00:26:20.510 "io_timeout": 0, 00:26:20.510 "avg_latency_us": 4628.753947783388, 00:26:20.510 "min_latency_us": 1837.8573913043479, 00:26:20.510 "max_latency_us": 15044.786086956521 00:26:20.510 } 00:26:20.510 ], 00:26:20.510 "core_count": 1 00:26:20.510 } 00:26:20.511 07:22:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:20.511 07:22:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:26:20.511 07:22:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:20.511 07:22:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:20.511 | select(.opcode=="crc32c") 00:26:20.511 | "\(.module_name) \(.executed)"' 00:26:20.511 07:22:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:20.771 07:22:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:20.771 07:22:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:20.771 07:22:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:20.771 07:22:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:20.771 07:22:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1337672 00:26:20.771 07:22:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 1337672 ']' 00:26:20.771 07:22:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 1337672 00:26:20.771 07:22:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname 00:26:20.771 07:22:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:26:20.771 07:22:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1337672 00:26:20.771 07:22:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:26:20.771 07:22:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:26:20.771 07:22:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1337672' 00:26:20.771 killing process with pid 1337672 00:26:20.771 07:22:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 1337672 00:26:20.771 Received shutdown signal, test time was about 2.000000 seconds 
00:26:20.771 00:26:20.771 Latency(us) 00:26:20.771 [2024-11-20T06:22:25.327Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:20.771 [2024-11-20T06:22:25.327Z] =================================================================================================================== 00:26:20.771 [2024-11-20T06:22:25.327Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:20.771 07:22:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 1337672 00:26:21.030 07:22:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:26:21.030 07:22:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:21.030 07:22:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:21.030 07:22:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:26:21.030 07:22:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:26:21.030 07:22:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:26:21.030 07:22:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:21.030 07:22:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1338146 00:26:21.030 07:22:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1338146 /var/tmp/bperf.sock 00:26:21.030 07:22:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:26:21.030 07:22:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 1338146 ']' 00:26:21.030 07:22:25 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:21.030 07:22:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:21.030 07:22:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:21.030 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:21.030 07:22:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:21.030 07:22:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:21.030 [2024-11-20 07:22:25.485465] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 00:26:21.030 [2024-11-20 07:22:25.485520] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1338146 ] 00:26:21.030 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:21.030 Zero copy mechanism will not be used. 
00:26:21.030 [2024-11-20 07:22:25.559470] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:21.290 [2024-11-20 07:22:25.598924] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:21.290 07:22:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:21.290 07:22:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:26:21.290 07:22:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:21.290 07:22:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:21.290 07:22:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:21.548 07:22:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:21.548 07:22:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:21.807 nvme0n1 00:26:21.807 07:22:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:21.807 07:22:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:21.807 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:21.807 Zero copy mechanism will not be used. 00:26:21.807 Running I/O for 2 seconds... 
00:26:24.122 6449.00 IOPS, 806.12 MiB/s [2024-11-20T06:22:28.678Z] 6753.00 IOPS, 844.12 MiB/s
00:26:24.122 Latency(us)
00:26:24.122 [2024-11-20T06:22:28.678Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:24.122 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:26:24.122 nvme0n1 : 2.00 6749.06 843.63 0.00 0.00 2366.25 1688.26 5869.75
00:26:24.122 [2024-11-20T06:22:28.678Z] ===================================================================================================================
00:26:24.122 [2024-11-20T06:22:28.678Z] Total : 6749.06 843.63 0.00 0.00 2366.25 1688.26 5869.75
00:26:24.122 {
00:26:24.122 "results": [
00:26:24.122 {
00:26:24.122 "job": "nvme0n1",
00:26:24.122 "core_mask": "0x2",
00:26:24.122 "workload": "randwrite",
00:26:24.122 "status": "finished",
00:26:24.122 "queue_depth": 16,
00:26:24.122 "io_size": 131072,
00:26:24.122 "runtime": 2.00339,
00:26:24.122 "iops": 6749.060342719091,
00:26:24.122 "mibps": 843.6325428398864,
00:26:24.122 "io_failed": 0,
00:26:24.122 "io_timeout": 0,
00:26:24.122 "avg_latency_us": 2366.2528897077977,
00:26:24.122 "min_latency_us": 1688.264347826087,
00:26:24.122 "max_latency_us": 5869.746086956522
00:26:24.122 }
00:26:24.122 ],
00:26:24.122 "core_count": 1
00:26:24.122 }
00:26:24.122 07:22:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:26:24.122 07:22:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats
00:26:24.122 07:22:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:26:24.122 07:22:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[]
00:26:24.122 | select(.opcode=="crc32c")
00:26:24.122 | "\(.module_name) \(.executed)"'
00:26:24.122 07:22:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- #
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:24.122 07:22:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:24.122 07:22:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:24.122 07:22:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:24.122 07:22:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:24.122 07:22:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1338146 00:26:24.122 07:22:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 1338146 ']' 00:26:24.122 07:22:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 1338146 00:26:24.122 07:22:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname 00:26:24.122 07:22:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:26:24.122 07:22:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1338146 00:26:24.122 07:22:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:26:24.122 07:22:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:26:24.122 07:22:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1338146' 00:26:24.122 killing process with pid 1338146 00:26:24.122 07:22:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 1338146 00:26:24.122 Received shutdown signal, test time was about 2.000000 seconds 
00:26:24.122 00:26:24.122 Latency(us) 00:26:24.123 [2024-11-20T06:22:28.679Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:24.123 [2024-11-20T06:22:28.679Z] =================================================================================================================== 00:26:24.123 [2024-11-20T06:22:28.679Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:24.123 07:22:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 1338146 00:26:24.382 07:22:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 1336484 00:26:24.382 07:22:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 1336484 ']' 00:26:24.382 07:22:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 1336484 00:26:24.382 07:22:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname 00:26:24.382 07:22:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:26:24.382 07:22:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1336484 00:26:24.382 07:22:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:26:24.382 07:22:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:26:24.382 07:22:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1336484' 00:26:24.382 killing process with pid 1336484 00:26:24.382 07:22:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 1336484 00:26:24.382 07:22:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 1336484 00:26:24.641 00:26:24.641 
real 0m14.414s
00:26:24.641 user 0m27.104s
00:26:24.641 sys 0m4.615s
00:26:24.641 07:22:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1128 -- # xtrace_disable
00:26:24.641 07:22:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x
00:26:24.641 ************************************
00:26:24.641 END TEST nvmf_digest_clean
00:26:24.641 ************************************
00:26:24.641 07:22:29 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error
00:26:24.641 07:22:29 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:26:24.641 07:22:29 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1109 -- # xtrace_disable
00:26:24.641 07:22:29 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x
00:26:24.641 ************************************
00:26:24.641 START TEST nvmf_digest_error
00:26:24.641 ************************************
00:26:24.641 07:22:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1127 -- # run_digest_error
00:26:24.641 07:22:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc
00:26:24.641 07:22:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:26:24.641 07:22:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@724 -- # xtrace_disable
00:26:24.641 07:22:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:26:24.641 07:22:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=1338861
00:26:24.641 07:22:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 1338861
00:26:24.641 07:22:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:26:24.641 07:22:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 1338861 ']' 00:26:24.641 07:22:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:24.641 07:22:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:24.641 07:22:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:24.641 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:24.641 07:22:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:24.641 07:22:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:24.641 [2024-11-20 07:22:29.109086] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 00:26:24.641 [2024-11-20 07:22:29.109130] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:24.641 [2024-11-20 07:22:29.171067] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:24.899 [2024-11-20 07:22:29.214175] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:24.899 [2024-11-20 07:22:29.214207] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:26:24.899 [2024-11-20 07:22:29.214213] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:24.899 [2024-11-20 07:22:29.214219] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:24.899 [2024-11-20 07:22:29.214224] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:24.899 [2024-11-20 07:22:29.214789] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:24.899 07:22:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:24.899 07:22:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0 00:26:24.899 07:22:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:24.899 07:22:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:24.899 07:22:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:24.899 07:22:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:24.900 07:22:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:26:24.900 07:22:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.900 07:22:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:24.900 [2024-11-20 07:22:29.323325] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:26:24.900 07:22:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.900 07:22:29 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:26:24.900 07:22:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:26:24.900 07:22:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.900 07:22:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:24.900 null0 00:26:24.900 [2024-11-20 07:22:29.415286] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:24.900 [2024-11-20 07:22:29.439491] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:24.900 07:22:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.900 07:22:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:26:24.900 07:22:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:26:24.900 07:22:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:26:24.900 07:22:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:26:24.900 07:22:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:26:24.900 07:22:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1338885 00:26:24.900 07:22:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1338885 /var/tmp/bperf.sock 00:26:24.900 07:22:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:26:24.900 07:22:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 1338885 ']' 
00:26:24.900 07:22:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:24.900 07:22:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:24.900 07:22:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:24.900 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:24.900 07:22:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:24.900 07:22:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:25.159 [2024-11-20 07:22:29.492842] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 00:26:25.159 [2024-11-20 07:22:29.492889] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1338885 ] 00:26:25.159 [2024-11-20 07:22:29.568888] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:25.159 [2024-11-20 07:22:29.612436] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:25.159 07:22:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:25.159 07:22:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0 00:26:25.159 07:22:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:25.159 07:22:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:25.417 07:22:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:26:25.417 07:22:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.417 07:22:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:25.417 07:22:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.417 07:22:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:25.417 07:22:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:25.676 nvme0n1 00:26:25.676 07:22:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:26:25.676 07:22:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.676 07:22:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:25.676 07:22:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.676 07:22:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:26:25.676 07:22:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:25.934 Running I/O for 2 seconds... 00:26:25.934 [2024-11-20 07:22:30.302466] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1883880) 00:26:25.934 [2024-11-20 07:22:30.302500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8691 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.934 [2024-11-20 07:22:30.302511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:25.934 [2024-11-20 07:22:30.312516] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1883880) 00:26:25.934 [2024-11-20 07:22:30.312541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16778 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.934 [2024-11-20 07:22:30.312556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:25.934 [2024-11-20 07:22:30.324104] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1883880) 00:26:25.934 [2024-11-20 07:22:30.324126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:25142 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.934 [2024-11-20 07:22:30.324135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:25.934 [2024-11-20 07:22:30.332753] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1883880) 00:26:25.935 [2024-11-20 07:22:30.332778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:24615 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.935 [2024-11-20 07:22:30.332787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:25.935 [2024-11-20 07:22:30.344278] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1883880) 00:26:25.935 [2024-11-20 07:22:30.344299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21753 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.935 [2024-11-20 07:22:30.344307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:25.935 [2024-11-20 07:22:30.356207] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1883880) 00:26:25.935 [2024-11-20 07:22:30.356228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:2290 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.935 [2024-11-20 07:22:30.356236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:25.935 [2024-11-20 07:22:30.364912] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1883880) 00:26:25.935 [2024-11-20 07:22:30.364934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9058 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.935 [2024-11-20 07:22:30.364943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:25.935 [2024-11-20 07:22:30.375667] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1883880) 00:26:25.935 [2024-11-20 07:22:30.375689] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20123 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.935 [2024-11-20 07:22:30.375699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:25.935 [2024-11-20 07:22:30.387836] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1883880) 00:26:25.935 [2024-11-20 07:22:30.387856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:15048 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.935 [2024-11-20 07:22:30.387865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:25.935 [2024-11-20 07:22:30.399195] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1883880) 00:26:25.935 [2024-11-20 07:22:30.399216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:22040 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.935 [2024-11-20 07:22:30.399225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:25.935 [2024-11-20 07:22:30.408181] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1883880) 00:26:25.935 [2024-11-20 07:22:30.408202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:23447 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.935 [2024-11-20 07:22:30.408210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:25.935 [2024-11-20 07:22:30.420688] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1883880) 00:26:25.935 [2024-11-20 
07:22:30.420709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:20750 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.935 [2024-11-20 07:22:30.420719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:25.935 [2024-11-20 07:22:30.429189] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1883880) 00:26:25.935 [2024-11-20 07:22:30.429209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:13133 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.935 [2024-11-20 07:22:30.429217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:25.935 [2024-11-20 07:22:30.441408] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1883880) 00:26:25.935 [2024-11-20 07:22:30.441429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:12104 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.935 [2024-11-20 07:22:30.441437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:25.935 [2024-11-20 07:22:30.454229] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1883880) 00:26:25.935 [2024-11-20 07:22:30.454250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:22876 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.935 [2024-11-20 07:22:30.454258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:25.935 [2024-11-20 07:22:30.466217] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x1883880) 00:26:25.935 [2024-11-20 07:22:30.466238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:11042 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.935 [2024-11-20 07:22:30.466246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:25.935 [2024-11-20 07:22:30.479018] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1883880) 00:26:25.935 [2024-11-20 07:22:30.479038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:21289 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.935 [2024-11-20 07:22:30.479046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.194 [2024-11-20 07:22:30.488454] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1883880) 00:26:26.194 [2024-11-20 07:22:30.488475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:4350 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.194 [2024-11-20 07:22:30.488483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.194 [2024-11-20 07:22:30.500902] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1883880) 00:26:26.194 [2024-11-20 07:22:30.500922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:3818 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.194 [2024-11-20 07:22:30.500931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.194 [2024-11-20 07:22:30.510441] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1883880) 00:26:26.194 [2024-11-20 07:22:30.510462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:3502 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.194 [2024-11-20 07:22:30.510470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.194 [2024-11-20 07:22:30.519648] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1883880) 00:26:26.194 [2024-11-20 07:22:30.519669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:9643 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.194 [2024-11-20 07:22:30.519680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.194 [2024-11-20 07:22:30.532014] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1883880) 00:26:26.194 [2024-11-20 07:22:30.532034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:9190 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.194 [2024-11-20 07:22:30.532042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.194 [2024-11-20 07:22:30.543795] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1883880) 00:26:26.194 [2024-11-20 07:22:30.543815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19497 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.194 [2024-11-20 07:22:30.543823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:26:26.194 [2024-11-20 07:22:30.556422] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1883880) 00:26:26.194 [2024-11-20 07:22:30.556442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:23729 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.194 [2024-11-20 07:22:30.556450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.194 [2024-11-20 07:22:30.566768] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1883880) 00:26:26.194 [2024-11-20 07:22:30.566788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:2769 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.194 [2024-11-20 07:22:30.566797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.194 [2024-11-20 07:22:30.578197] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1883880) 00:26:26.194 [2024-11-20 07:22:30.578218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:8922 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.194 [2024-11-20 07:22:30.578227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.194 [2024-11-20 07:22:30.586407] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1883880) 00:26:26.194 [2024-11-20 07:22:30.586427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:1197 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.194 [2024-11-20 07:22:30.586436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.194 [2024-11-20 07:22:30.596760] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1883880) 00:26:26.194 [2024-11-20 07:22:30.596780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:11617 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.194 [2024-11-20 07:22:30.596789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.194 [2024-11-20 07:22:30.606011] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1883880) 00:26:26.194 [2024-11-20 07:22:30.606031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11598 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.194 [2024-11-20 07:22:30.606040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.194 [2024-11-20 07:22:30.616021] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1883880) 00:26:26.194 [2024-11-20 07:22:30.616041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:18664 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.194 [2024-11-20 07:22:30.616049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.194 [2024-11-20 07:22:30.625830] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1883880) 00:26:26.194 [2024-11-20 07:22:30.625850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:22977 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.194 [2024-11-20 
07:22:30.625858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.194 [2024-11-20 07:22:30.635934] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1883880) 00:26:26.194 [2024-11-20 07:22:30.635957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:24610 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.194 [2024-11-20 07:22:30.635966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.194 [2024-11-20 07:22:30.646137] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1883880) 00:26:26.194 [2024-11-20 07:22:30.646157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:17535 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.194 [2024-11-20 07:22:30.646165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.194 [2024-11-20 07:22:30.655404] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1883880) 00:26:26.194 [2024-11-20 07:22:30.655424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:6985 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.194 [2024-11-20 07:22:30.655432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.194 [2024-11-20 07:22:30.664506] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1883880) 00:26:26.195 [2024-11-20 07:22:30.664526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:22620 len:1 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.195 [2024-11-20 07:22:30.664534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.195 [2024-11-20 07:22:30.673748] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1883880) 00:26:26.195 [2024-11-20 07:22:30.673768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:646 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.195 [2024-11-20 07:22:30.673776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.195 [2024-11-20 07:22:30.686249] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1883880) 00:26:26.195 [2024-11-20 07:22:30.686269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:2676 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.195 [2024-11-20 07:22:30.686278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.195 [2024-11-20 07:22:30.694786] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1883880) 00:26:26.195 [2024-11-20 07:22:30.694806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:8041 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.195 [2024-11-20 07:22:30.694821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.195 [2024-11-20 07:22:30.705690] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1883880) 00:26:26.195 [2024-11-20 07:22:30.705711] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:1828 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.195 [2024-11-20 07:22:30.705719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.195 [2024-11-20 07:22:30.718313] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1883880) 00:26:26.195 [2024-11-20 07:22:30.718334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:14674 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.195 [2024-11-20 07:22:30.718343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.195 [2024-11-20 07:22:30.729742] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1883880) 00:26:26.195 [2024-11-20 07:22:30.729762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7661 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.195 [2024-11-20 07:22:30.729770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.195 [2024-11-20 07:22:30.738480] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1883880) 00:26:26.195 [2024-11-20 07:22:30.738500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:11493 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.195 [2024-11-20 07:22:30.738508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.453 [2024-11-20 07:22:30.751618] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1883880) 00:26:26.453 [2024-11-20 07:22:30.751638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10353 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.453 [2024-11-20 07:22:30.751646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.453 [2024-11-20 07:22:30.763326] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1883880) 00:26:26.453 [2024-11-20 07:22:30.763346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:15992 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.453 [2024-11-20 07:22:30.763354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.453 [2024-11-20 07:22:30.775854] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1883880) 00:26:26.453 [2024-11-20 07:22:30.775875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:18520 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.453 [2024-11-20 07:22:30.775884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.453 [2024-11-20 07:22:30.785311] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1883880) 00:26:26.453 [2024-11-20 07:22:30.785332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:13190 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.453 [2024-11-20 07:22:30.785340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.454 [2024-11-20 07:22:30.797358] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1883880) 00:26:26.454 [2024-11-20 07:22:30.797382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:12362 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.454 [2024-11-20 07:22:30.797391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.454 [2024-11-20 07:22:30.810261] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1883880) 00:26:26.454 [2024-11-20 07:22:30.810280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:796 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.454 [2024-11-20 07:22:30.810288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.454 [2024-11-20 07:22:30.821990] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1883880) 00:26:26.454 [2024-11-20 07:22:30.822010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16344 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.454 [2024-11-20 07:22:30.822018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.454 [2024-11-20 07:22:30.831050] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1883880) 00:26:26.454 [2024-11-20 07:22:30.831070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:1354 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.454 [2024-11-20 07:22:30.831078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:26:26.454 [2024-11-20 07:22:30.843363] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1883880) 00:26:26.454 [2024-11-20 07:22:30.843383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:21097 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.454 [2024-11-20 07:22:30.843391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.454 [2024-11-20 07:22:30.855634] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1883880) 00:26:26.454 [2024-11-20 07:22:30.855654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:13714 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.454 [2024-11-20 07:22:30.855662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.454 [2024-11-20 07:22:30.865380] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1883880) 00:26:26.454 [2024-11-20 07:22:30.865399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:16695 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.454 [2024-11-20 07:22:30.865407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.454 [2024-11-20 07:22:30.875283] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1883880) 00:26:26.454 [2024-11-20 07:22:30.875303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:23643 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.454 [2024-11-20 07:22:30.875311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.454 [2024-11-20 07:22:30.885181] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1883880) 00:26:26.454 [2024-11-20 07:22:30.885201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:8243 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.454 [2024-11-20 07:22:30.885209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.454 [2024-11-20 07:22:30.895345] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1883880) 00:26:26.454 [2024-11-20 07:22:30.895365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:17245 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.454 [2024-11-20 07:22:30.895373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.454 [2024-11-20 07:22:30.903989] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1883880) 00:26:26.454 [2024-11-20 07:22:30.904010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:17913 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.454 [2024-11-20 07:22:30.904018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.454 [2024-11-20 07:22:30.913182] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1883880) 00:26:26.454 [2024-11-20 07:22:30.913202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:7445 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.454 [2024-11-20 
07:22:30.913210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.454 [2024-11-20 07:22:30.923564] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1883880) 00:26:26.454 [2024-11-20 07:22:30.923585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:23233 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.454 [2024-11-20 07:22:30.923593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.454 [2024-11-20 07:22:30.932656] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1883880) 00:26:26.454 [2024-11-20 07:22:30.932677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:15153 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.454 [2024-11-20 07:22:30.932685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.454 [2024-11-20 07:22:30.941904] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1883880) 00:26:26.454 [2024-11-20 07:22:30.941924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:1612 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.454 [2024-11-20 07:22:30.941932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.454 [2024-11-20 07:22:30.952051] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1883880) 00:26:26.454 [2024-11-20 07:22:30.952070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:23555 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.454 [2024-11-20 07:22:30.952078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.454 [2024-11-20 07:22:30.960846] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1883880) 00:26:26.454 [2024-11-20 07:22:30.960865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:22744 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.454 [2024-11-20 07:22:30.960873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.454 [2024-11-20 07:22:30.970501] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1883880) 00:26:26.454 [2024-11-20 07:22:30.970522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18161 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.454 [2024-11-20 07:22:30.970534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.454 [2024-11-20 07:22:30.981307] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1883880) 00:26:26.454 [2024-11-20 07:22:30.981327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:21264 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.454 [2024-11-20 07:22:30.981335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.455 [2024-11-20 07:22:30.989745] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1883880) 00:26:26.455 [2024-11-20 07:22:30.989765] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25570 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.455 [2024-11-20 07:22:30.989773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.455 [2024-11-20 07:22:31.000696] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1883880) 00:26:26.455 [2024-11-20 07:22:31.000717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:3315 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.455 [2024-11-20 07:22:31.000725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.713 [2024-11-20 07:22:31.012322] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1883880) 00:26:26.713 [2024-11-20 07:22:31.012342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:8625 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.713 [2024-11-20 07:22:31.012351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.713 [2024-11-20 07:22:31.021267] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1883880) 00:26:26.713 [2024-11-20 07:22:31.021287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:10195 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.713 [2024-11-20 07:22:31.021295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.713 [2024-11-20 07:22:31.034060] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1883880) 00:26:26.713 [2024-11-20 07:22:31.034081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:5020 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.713 [2024-11-20 07:22:31.034089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.713 [2024-11-20 07:22:31.046572] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1883880) 00:26:26.713 [2024-11-20 07:22:31.046592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:23352 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.713 [2024-11-20 07:22:31.046600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.713 [2024-11-20 07:22:31.059463] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1883880) 00:26:26.713 [2024-11-20 07:22:31.059484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:12724 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.713 [2024-11-20 07:22:31.059492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.713 [2024-11-20 07:22:31.072004] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1883880) 00:26:26.713 [2024-11-20 07:22:31.072024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:3286 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.713 [2024-11-20 07:22:31.072032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.713 [2024-11-20 07:22:31.080882] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1883880) 00:26:26.713 [2024-11-20 07:22:31.080902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:9292 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.713 [2024-11-20 07:22:31.080911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.713 [2024-11-20 07:22:31.092855] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1883880) 00:26:26.713 [2024-11-20 07:22:31.092876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:2941 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.713 [2024-11-20 07:22:31.092885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.713 [2024-11-20 07:22:31.105504] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1883880) 00:26:26.713 [2024-11-20 07:22:31.105525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:21101 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.713 [2024-11-20 07:22:31.105533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.713 [2024-11-20 07:22:31.115269] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1883880) 00:26:26.713 [2024-11-20 07:22:31.115289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:13576 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.713 [2024-11-20 07:22:31.115297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:26:26.713 [2024-11-20 07:22:31.123779] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1883880) 00:26:26.713 [2024-11-20 07:22:31.123799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:5952 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.713 [2024-11-20 07:22:31.123807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.713 [2024-11-20 07:22:31.134445] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1883880) 00:26:26.713 [2024-11-20 07:22:31.134465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:1868 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.713 [2024-11-20 07:22:31.134473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.713 [2024-11-20 07:22:31.144198] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1883880) 00:26:26.713 [2024-11-20 07:22:31.144218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:10880 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.713 [2024-11-20 07:22:31.144226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.713 [2024-11-20 07:22:31.154077] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1883880) 00:26:26.713 [2024-11-20 07:22:31.154097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:6152 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.713 [2024-11-20 07:22:31.154109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.713 [2024-11-20 07:22:31.163371] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1883880) 00:26:26.713 [2024-11-20 07:22:31.163390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:21489 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.713 [2024-11-20 07:22:31.163399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.713 [2024-11-20 07:22:31.172063] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1883880) 00:26:26.713 [2024-11-20 07:22:31.172083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:19419 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.713 [2024-11-20 07:22:31.172091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.713 [2024-11-20 07:22:31.182692] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1883880) 00:26:26.713 [2024-11-20 07:22:31.182712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:11714 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.713 [2024-11-20 07:22:31.182721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.713 [2024-11-20 07:22:31.194634] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1883880) 00:26:26.713 [2024-11-20 07:22:31.194654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:13351 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.714 [2024-11-20 
07:22:31.194663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.714 [2024-11-20 07:22:31.204984] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1883880) 00:26:26.714 [2024-11-20 07:22:31.205004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:19978 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.714 [2024-11-20 07:22:31.205012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.714 [2024-11-20 07:22:31.213322] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1883880) 00:26:26.714 [2024-11-20 07:22:31.213342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:6161 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.714 [2024-11-20 07:22:31.213350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.714 [2024-11-20 07:22:31.225768] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1883880) 00:26:26.714 [2024-11-20 07:22:31.225789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:7989 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.714 [2024-11-20 07:22:31.225797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.714 [2024-11-20 07:22:31.238251] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1883880) 00:26:26.714 [2024-11-20 07:22:31.238271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:22628 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.714 [2024-11-20 07:22:31.238279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.714 [2024-11-20 07:22:31.251384] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1883880) 00:26:26.714 [2024-11-20 07:22:31.251407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:12138 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.714 [2024-11-20 07:22:31.251415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.714 [2024-11-20 07:22:31.259681] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1883880) 00:26:26.714 [2024-11-20 07:22:31.259700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4743 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.714 [2024-11-20 07:22:31.259709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.971 [2024-11-20 07:22:31.271789] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1883880) 00:26:26.971 [2024-11-20 07:22:31.271810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:3852 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.971 [2024-11-20 07:22:31.271818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.971 [2024-11-20 07:22:31.283387] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1883880) 00:26:26.971 [2024-11-20 07:22:31.283407] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:114 nsid:1 lba:21101 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.971 [2024-11-20 07:22:31.283416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.971 23932.00 IOPS, 93.48 MiB/s [2024-11-20T06:22:31.527Z] [2024-11-20 07:22:31.293324] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1883880) 00:26:26.971 [2024-11-20 07:22:31.293344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:25114 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.971 [2024-11-20 07:22:31.293353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.971 [2024-11-20 07:22:31.303181] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1883880) 00:26:26.971 [2024-11-20 07:22:31.303201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:22404 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.971 [2024-11-20 07:22:31.303210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.971 [2024-11-20 07:22:31.312796] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1883880) 00:26:26.971 [2024-11-20 07:22:31.312816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:171 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.971 [2024-11-20 07:22:31.312824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.971 [2024-11-20 07:22:31.322806] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x1883880) 00:26:26.971 [2024-11-20 07:22:31.322827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:12451 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.971 [2024-11-20 07:22:31.322836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.971 [2024-11-20 07:22:31.332285] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1883880) 00:26:26.971 [2024-11-20 07:22:31.332306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:5718 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.971 [2024-11-20 07:22:31.332314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.971 [2024-11-20 07:22:31.344473] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1883880) 00:26:26.971 [2024-11-20 07:22:31.344494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:24318 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.971 [2024-11-20 07:22:31.344503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.971 [2024-11-20 07:22:31.355258] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1883880) 00:26:26.971 [2024-11-20 07:22:31.355278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18598 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.971 [2024-11-20 07:22:31.355287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.971 [2024-11-20 07:22:31.363534] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1883880) 00:26:26.972 [2024-11-20 07:22:31.363554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:17428 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.972 [2024-11-20 07:22:31.363562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.972 [2024-11-20 07:22:31.374728] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1883880) 00:26:26.972 [2024-11-20 07:22:31.374748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16521 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.972 [2024-11-20 07:22:31.374756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.972 [2024-11-20 07:22:31.383988] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1883880) 00:26:26.972 [2024-11-20 07:22:31.384008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:4767 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.972 [2024-11-20 07:22:31.384016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.972 [2024-11-20 07:22:31.393030] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1883880) 00:26:26.972 [2024-11-20 07:22:31.393050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:6849 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.972 [2024-11-20 07:22:31.393058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:26:26.972 [2024-11-20 07:22:31.402067] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1883880) 00:26:26.972 [2024-11-20 07:22:31.402088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:10882 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.972 [2024-11-20 07:22:31.402097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.972 [2024-11-20 07:22:31.413962] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1883880) 00:26:26.972 [2024-11-20 07:22:31.413991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:16208 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.972 [2024-11-20 07:22:31.414001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.972 [2024-11-20 07:22:31.423580] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1883880) 00:26:26.972 [2024-11-20 07:22:31.423600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:23485 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.972 [2024-11-20 07:22:31.423611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.972 [2024-11-20 07:22:31.433323] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1883880) 00:26:26.972 [2024-11-20 07:22:31.433343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:19437 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.972 [2024-11-20 07:22:31.433352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.972 [2024-11-20 07:22:31.443337] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1883880) 00:26:26.972 [2024-11-20 07:22:31.443358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:12378 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.972 [2024-11-20 07:22:31.443366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.972 [2024-11-20 07:22:31.452594] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1883880) 00:26:26.972 [2024-11-20 07:22:31.452614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:24947 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.972 [2024-11-20 07:22:31.452622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.972 [2024-11-20 07:22:31.462595] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1883880) 00:26:26.972 [2024-11-20 07:22:31.462617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:18784 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.972 [2024-11-20 07:22:31.462625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.972 [2024-11-20 07:22:31.473063] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1883880) 00:26:26.972 [2024-11-20 07:22:31.473083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:8272 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.972 [2024-11-20 07:22:31.473091] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.972 [2024-11-20 07:22:31.481403] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1883880) 00:26:26.972 [2024-11-20 07:22:31.481424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:17085 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.972 [2024-11-20 07:22:31.481432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.972 [2024-11-20 07:22:31.492211] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1883880) 00:26:26.972 [2024-11-20 07:22:31.492232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:4082 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.972 [2024-11-20 07:22:31.492241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.972 [2024-11-20 07:22:31.500400] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1883880) 00:26:26.972 [2024-11-20 07:22:31.500421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:14911 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.972 [2024-11-20 07:22:31.500429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.972 [2024-11-20 07:22:31.510878] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1883880) 00:26:26.972 [2024-11-20 07:22:31.510899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:3388 len:1 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:26:26.972 [2024-11-20 07:22:31.510907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.972 [2024-11-20 07:22:31.520389] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1883880) 00:26:26.972 [2024-11-20 07:22:31.520410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14714 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.972 [2024-11-20 07:22:31.520418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.230 [2024-11-20 07:22:31.529719] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1883880) 00:26:27.230 [2024-11-20 07:22:31.529739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:7158 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.230 [2024-11-20 07:22:31.529747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.230 [2024-11-20 07:22:31.539590] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1883880) 00:26:27.230 [2024-11-20 07:22:31.539611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:7874 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.231 [2024-11-20 07:22:31.539619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.231 [2024-11-20 07:22:31.548571] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1883880) 00:26:27.231 [2024-11-20 07:22:31.548592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:117 nsid:1 lba:823 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.231 [2024-11-20 07:22:31.548600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.231 [2024-11-20 07:22:31.559140] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1883880) 00:26:27.231 [2024-11-20 07:22:31.559160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:5393 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.231 [2024-11-20 07:22:31.559168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.231 [2024-11-20 07:22:31.570253] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1883880) 00:26:27.231 [2024-11-20 07:22:31.570273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:16603 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.231 [2024-11-20 07:22:31.570281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.231 [2024-11-20 07:22:31.578352] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1883880) 00:26:27.231 [2024-11-20 07:22:31.578372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:13056 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.231 [2024-11-20 07:22:31.578380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.231 [2024-11-20 07:22:31.589010] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1883880) 00:26:27.231 [2024-11-20 
07:22:31.589032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18418 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.231 [2024-11-20 07:22:31.589044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.231 [2024-11-20 07:22:31.601075] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1883880) 00:26:27.231 [2024-11-20 07:22:31.601096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:7345 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.231 [2024-11-20 07:22:31.601104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.231 [2024-11-20 07:22:31.610914] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1883880) 00:26:27.231 [2024-11-20 07:22:31.610935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1580 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.231 [2024-11-20 07:22:31.610943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.231 [2024-11-20 07:22:31.619540] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1883880) 00:26:27.231 [2024-11-20 07:22:31.619560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:17600 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.231 [2024-11-20 07:22:31.619568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.231 [2024-11-20 07:22:31.628985] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x1883880) 00:26:27.231 [2024-11-20 07:22:31.629006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:24656 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.231 [2024-11-20 07:22:31.629015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.231 [2024-11-20 07:22:31.640362] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1883880) 00:26:27.231 [2024-11-20 07:22:31.640382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:19513 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.231 [2024-11-20 07:22:31.640390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.231 [2024-11-20 07:22:31.651639] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1883880) 00:26:27.231 [2024-11-20 07:22:31.651666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:11685 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.231 [2024-11-20 07:22:31.651674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.231 [2024-11-20 07:22:31.660138] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1883880) 00:26:27.231 [2024-11-20 07:22:31.660158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:11644 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.231 [2024-11-20 07:22:31.660166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.231 [2024-11-20 07:22:31.672853] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1883880) 00:26:27.231 [2024-11-20 07:22:31.672874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:5734 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.231 [2024-11-20 07:22:31.672882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.231 [2024-11-20 07:22:31.683674] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1883880) 00:26:27.231 [2024-11-20 07:22:31.683703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:14905 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.231 [2024-11-20 07:22:31.683712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.231 [2024-11-20 07:22:31.694069] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1883880) 00:26:27.231 [2024-11-20 07:22:31.694090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:305 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.231 [2024-11-20 07:22:31.694098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.231 [2024-11-20 07:22:31.703191] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1883880) 00:26:27.231 [2024-11-20 07:22:31.703222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:18521 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.231 [2024-11-20 07:22:31.703230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:26:27.231 [2024-11-20 07:22:31.714792] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1883880) 00:26:27.231 [2024-11-20 07:22:31.714813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:15224 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.231 [2024-11-20 07:22:31.714821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.231 [2024-11-20 07:22:31.726579] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1883880) 00:26:27.231 [2024-11-20 07:22:31.726601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:25073 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.231 [2024-11-20 07:22:31.726609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.231 [2024-11-20 07:22:31.737775] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1883880) 00:26:27.231 [2024-11-20 07:22:31.737795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:2285 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.232 [2024-11-20 07:22:31.737803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.232 [2024-11-20 07:22:31.746531] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1883880) 00:26:27.232 [2024-11-20 07:22:31.746551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:1071 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.232 [2024-11-20 07:22:31.746559] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.232 [2024-11-20 07:22:31.759004] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1883880) 00:26:27.232 [2024-11-20 07:22:31.759024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8696 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.232 [2024-11-20 07:22:31.759032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.232 [2024-11-20 07:22:31.771730] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1883880) 00:26:27.232 [2024-11-20 07:22:31.771750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:5456 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.232 [2024-11-20 07:22:31.771758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.489 [2024-11-20 07:22:31.784300] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1883880) 00:26:27.489 [2024-11-20 07:22:31.784321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:21795 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.489 [2024-11-20 07:22:31.784330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.489 [2024-11-20 07:22:31.794669] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1883880) 00:26:27.489 [2024-11-20 07:22:31.794689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5511 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.489 [2024-11-20 
07:22:31.794697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.489 [2024-11-20 07:22:31.803131] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1883880) 00:26:27.489 [2024-11-20 07:22:31.803151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:906 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.489 [2024-11-20 07:22:31.803159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.489 [2024-11-20 07:22:31.813787] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1883880) 00:26:27.489 [2024-11-20 07:22:31.813808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:15093 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.489 [2024-11-20 07:22:31.813816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.490 [2024-11-20 07:22:31.825236] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1883880) 00:26:27.490 [2024-11-20 07:22:31.825257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6222 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.490 [2024-11-20 07:22:31.825265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.490 [2024-11-20 07:22:31.835488] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1883880) 00:26:27.490 [2024-11-20 07:22:31.835506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:16534 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.490 [2024-11-20 07:22:31.835515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.490 [2024-11-20 07:22:31.844167] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1883880) 00:26:27.490 [2024-11-20 07:22:31.844186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:13059 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.490 [2024-11-20 07:22:31.844194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.490 [2024-11-20 07:22:31.856592] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1883880) 00:26:27.490 [2024-11-20 07:22:31.856612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:20529 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.490 [2024-11-20 07:22:31.856620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.490 [2024-11-20 07:22:31.867856] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1883880) 00:26:27.490 [2024-11-20 07:22:31.867875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:21788 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.490 [2024-11-20 07:22:31.867887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.490 [2024-11-20 07:22:31.876334] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1883880) 00:26:27.490 [2024-11-20 07:22:31.876354] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:22341 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.490 [2024-11-20 07:22:31.876362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.490 [2024-11-20 07:22:31.888760] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1883880) 00:26:27.490 [2024-11-20 07:22:31.888780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22116 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.490 [2024-11-20 07:22:31.888788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.490 [2024-11-20 07:22:31.898484] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1883880) 00:26:27.490 [2024-11-20 07:22:31.898504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:17154 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.490 [2024-11-20 07:22:31.898512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.490 [2024-11-20 07:22:31.906493] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1883880) 00:26:27.490 [2024-11-20 07:22:31.906513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24678 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.490 [2024-11-20 07:22:31.906521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.490 [2024-11-20 07:22:31.915868] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1883880) 00:26:27.490 [2024-11-20 07:22:31.915887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:12076 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.490 [2024-11-20 07:22:31.915896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.490 [2024-11-20 07:22:31.926253] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1883880) 00:26:27.490 [2024-11-20 07:22:31.926274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:6369 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.490 [2024-11-20 07:22:31.926282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.490 [2024-11-20 07:22:31.934958] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1883880) 00:26:27.490 [2024-11-20 07:22:31.934979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:8215 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.490 [2024-11-20 07:22:31.934987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.490 [2024-11-20 07:22:31.945757] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1883880) 00:26:27.490 [2024-11-20 07:22:31.945779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:14136 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.490 [2024-11-20 07:22:31.945787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.490 [2024-11-20 07:22:31.954046] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1883880) 00:26:27.490 [2024-11-20 07:22:31.954066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:20369 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.490 [2024-11-20 07:22:31.954076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.490 [2024-11-20 07:22:31.966616] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1883880) 00:26:27.490 [2024-11-20 07:22:31.966636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:12512 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.490 [2024-11-20 07:22:31.966644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.490 [2024-11-20 07:22:31.979407] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1883880) 00:26:27.490 [2024-11-20 07:22:31.979427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:5330 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.490 [2024-11-20 07:22:31.979435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.490 [2024-11-20 07:22:31.989190] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1883880) 00:26:27.490 [2024-11-20 07:22:31.989210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:12495 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.490 [2024-11-20 07:22:31.989218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:26:27.490 [2024-11-20 07:22:31.998281] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1883880) 00:26:27.490 [2024-11-20 07:22:31.998301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22381 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.490 [2024-11-20 07:22:31.998309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.490 [2024-11-20 07:22:32.007771] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1883880) 00:26:27.490 [2024-11-20 07:22:32.007791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:14834 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.490 [2024-11-20 07:22:32.007799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.490 [2024-11-20 07:22:32.016906] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1883880) 00:26:27.490 [2024-11-20 07:22:32.016925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:11585 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.490 [2024-11-20 07:22:32.016932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.490 [2024-11-20 07:22:32.026513] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1883880) 00:26:27.490 [2024-11-20 07:22:32.026533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:21653 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.490 [2024-11-20 07:22:32.026540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.490 [2024-11-20 07:22:32.036125] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1883880) 00:26:27.490 [2024-11-20 07:22:32.036144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18278 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.490 [2024-11-20 07:22:32.036159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.748 [2024-11-20 07:22:32.046912] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1883880) 00:26:27.748 [2024-11-20 07:22:32.046932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:7899 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.748 [2024-11-20 07:22:32.046940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.748 [2024-11-20 07:22:32.055556] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1883880) 00:26:27.748 [2024-11-20 07:22:32.055576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:23075 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.748 [2024-11-20 07:22:32.055584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.748 [2024-11-20 07:22:32.065939] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1883880) 00:26:27.748 [2024-11-20 07:22:32.065965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:7205 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.748 [2024-11-20 
07:22:32.065974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.748 [2024-11-20 07:22:32.077606] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1883880) 00:26:27.748 [2024-11-20 07:22:32.077626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24533 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.748 [2024-11-20 07:22:32.077634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.748 [2024-11-20 07:22:32.087608] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1883880) 00:26:27.748 [2024-11-20 07:22:32.087628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:7054 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.748 [2024-11-20 07:22:32.087636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.748 [2024-11-20 07:22:32.097232] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1883880) 00:26:27.748 [2024-11-20 07:22:32.097252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:8936 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.748 [2024-11-20 07:22:32.097260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.748 [2024-11-20 07:22:32.107232] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1883880) 00:26:27.748 [2024-11-20 07:22:32.107251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23885 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.748 [2024-11-20 07:22:32.107260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.748 [2024-11-20 07:22:32.116054] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1883880) 00:26:27.748 [2024-11-20 07:22:32.116075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:5825 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.748 [2024-11-20 07:22:32.116084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.748 [2024-11-20 07:22:32.128767] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1883880) 00:26:27.748 [2024-11-20 07:22:32.128791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:16176 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.748 [2024-11-20 07:22:32.128799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.748 [2024-11-20 07:22:32.138609] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1883880) 00:26:27.748 [2024-11-20 07:22:32.138630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:19375 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.748 [2024-11-20 07:22:32.138638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.748 [2024-11-20 07:22:32.149628] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1883880) 00:26:27.748 [2024-11-20 07:22:32.149649] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:4102 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.748 [2024-11-20 07:22:32.149657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.748 [2024-11-20 07:22:32.157835] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1883880) 00:26:27.748 [2024-11-20 07:22:32.157855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4576 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.748 [2024-11-20 07:22:32.157862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.748 [2024-11-20 07:22:32.167400] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1883880) 00:26:27.748 [2024-11-20 07:22:32.167420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:3603 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.748 [2024-11-20 07:22:32.167428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.748 [2024-11-20 07:22:32.177534] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1883880) 00:26:27.748 [2024-11-20 07:22:32.177554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6985 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.748 [2024-11-20 07:22:32.177562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.748 [2024-11-20 07:22:32.186984] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1883880) 
00:26:27.748 [2024-11-20 07:22:32.187005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24700 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.748 [2024-11-20 07:22:32.187013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.748 [2024-11-20 07:22:32.195893] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1883880) 00:26:27.748 [2024-11-20 07:22:32.195912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:19612 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.748 [2024-11-20 07:22:32.195920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.748 [2024-11-20 07:22:32.205152] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1883880) 00:26:27.748 [2024-11-20 07:22:32.205172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:13244 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.748 [2024-11-20 07:22:32.205180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.748 [2024-11-20 07:22:32.215142] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1883880) 00:26:27.749 [2024-11-20 07:22:32.215162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:3381 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.749 [2024-11-20 07:22:32.215170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.749 [2024-11-20 07:22:32.227082] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1883880) 00:26:27.749 [2024-11-20 07:22:32.227102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:21773 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.749 [2024-11-20 07:22:32.227110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.749 [2024-11-20 07:22:32.235049] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1883880) 00:26:27.749 [2024-11-20 07:22:32.235068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:23051 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.749 [2024-11-20 07:22:32.235076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.749 [2024-11-20 07:22:32.246794] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1883880) 00:26:27.749 [2024-11-20 07:22:32.246814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:3294 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.749 [2024-11-20 07:22:32.246822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.749 [2024-11-20 07:22:32.257034] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1883880) 00:26:27.749 [2024-11-20 07:22:32.257054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:9121 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.749 [2024-11-20 07:22:32.257062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:26:27.749 [2024-11-20 07:22:32.266657] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1883880) 00:26:27.749 [2024-11-20 07:22:32.266677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:2112 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.749 [2024-11-20 07:22:32.266685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.749 [2024-11-20 07:22:32.275818] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1883880) 00:26:27.749 [2024-11-20 07:22:32.275838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:20213 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.749 [2024-11-20 07:22:32.275847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.749 [2024-11-20 07:22:32.285104] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1883880) 00:26:27.749 [2024-11-20 07:22:32.285124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:6491 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.749 [2024-11-20 07:22:32.285132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.749 24574.50 IOPS, 95.99 MiB/s [2024-11-20T06:22:32.305Z] [2024-11-20 07:22:32.296556] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1883880) 00:26:27.749 [2024-11-20 07:22:32.296578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:24942 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.749 [2024-11-20 07:22:32.296589] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:28.006 00:26:28.006 Latency(us) 00:26:28.007 [2024-11-20T06:22:32.563Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:28.007 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:26:28.007 nvme0n1 : 2.00 24592.24 96.06 0.00 0.00 5199.67 2478.97 17324.30 00:26:28.007 [2024-11-20T06:22:32.563Z] =================================================================================================================== 00:26:28.007 [2024-11-20T06:22:32.563Z] Total : 24592.24 96.06 0.00 0.00 5199.67 2478.97 17324.30 00:26:28.007 { 00:26:28.007 "results": [ 00:26:28.007 { 00:26:28.007 "job": "nvme0n1", 00:26:28.007 "core_mask": "0x2", 00:26:28.007 "workload": "randread", 00:26:28.007 "status": "finished", 00:26:28.007 "queue_depth": 128, 00:26:28.007 "io_size": 4096, 00:26:28.007 "runtime": 2.00425, 00:26:28.007 "iops": 24592.241486840463, 00:26:28.007 "mibps": 96.06344330797056, 00:26:28.007 "io_failed": 0, 00:26:28.007 "io_timeout": 0, 00:26:28.007 "avg_latency_us": 5199.668847039687, 00:26:28.007 "min_latency_us": 2478.9704347826087, 00:26:28.007 "max_latency_us": 17324.29913043478 00:26:28.007 } 00:26:28.007 ], 00:26:28.007 "core_count": 1 00:26:28.007 } 00:26:28.007 07:22:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:26:28.007 07:22:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:26:28.007 07:22:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:26:28.007 | .driver_specific 00:26:28.007 | .nvme_error 00:26:28.007 | .status_code 00:26:28.007 | .command_transient_transport_error' 00:26:28.007 07:22:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:26:28.007 07:22:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 193 > 0 )) 00:26:28.007 07:22:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1338885 00:26:28.007 07:22:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 1338885 ']' 00:26:28.007 07:22:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 1338885 00:26:28.007 07:22:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname 00:26:28.007 07:22:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:26:28.007 07:22:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1338885 00:26:28.265 07:22:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:26:28.265 07:22:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:26:28.265 07:22:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1338885' 00:26:28.265 killing process with pid 1338885 00:26:28.265 07:22:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 1338885 00:26:28.265 Received shutdown signal, test time was about 2.000000 seconds 00:26:28.265 00:26:28.265 Latency(us) 00:26:28.265 [2024-11-20T06:22:32.821Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:28.265 [2024-11-20T06:22:32.821Z] =================================================================================================================== 00:26:28.265 [2024-11-20T06:22:32.821Z] Total : 
0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:28.265 07:22:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 1338885 00:26:28.265 07:22:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:26:28.265 07:22:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:26:28.265 07:22:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:26:28.265 07:22:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:26:28.265 07:22:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:26:28.265 07:22:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1339363 00:26:28.265 07:22:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1339363 /var/tmp/bperf.sock 00:26:28.265 07:22:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:26:28.265 07:22:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 1339363 ']' 00:26:28.265 07:22:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:28.265 07:22:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:28.265 07:22:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:28.265 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
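The run above printed its results both as a table and as a JSON block, and the digest script's jq pipeline pulls the transient-transport-error count out of `bdev_get_iostat`. The JSON result block can also be read programmatically; a minimal sketch, assuming the field names shown in the log output above (`results`, `job`, `iops`, `io_failed`):

```python
import json

# Result block shaped like the one bdevperf printed above (values copied from the log).
raw = '''{"results": [{"job": "nvme0n1", "core_mask": "0x2", "workload": "randread",
           "iops": 24592.241486840463, "mibps": 96.06344330797056,
           "io_failed": 0, "io_timeout": 0}], "core_count": 1}'''

results = json.loads(raw)["results"]

# The test only fails I/O at the transport layer (transient errors are retried),
# so io_failed stays 0 even though every READ hit a data digest error.
for job in results:
    print(f'{job["job"]}: {job["iops"]:.0f} IOPS, {job["io_failed"]} failed')
```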
00:26:28.265 07:22:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:28.265 07:22:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:28.265 [2024-11-20 07:22:32.780914] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 00:26:28.265 [2024-11-20 07:22:32.780973] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1339363 ] 00:26:28.265 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:28.265 Zero copy mechanism will not be used. 00:26:28.523 [2024-11-20 07:22:32.856632] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:28.523 [2024-11-20 07:22:32.900058] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:28.523 07:22:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:28.523 07:22:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0 00:26:28.523 07:22:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:28.523 07:22:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:28.781 07:22:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:26:28.781 07:22:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:28.781 07:22:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@10 -- # set +x 00:26:28.781 07:22:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:28.781 07:22:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:28.781 07:22:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:29.039 nvme0n1 00:26:29.039 07:22:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:26:29.039 07:22:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.039 07:22:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:29.039 07:22:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.039 07:22:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:26:29.039 07:22:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:29.298 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:29.298 Zero copy mechanism will not be used. 00:26:29.298 Running I/O for 2 seconds... 
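The `data digest error` lines filling this log come from the NVMe/TCP receive path comparing a CRC-32C data digest against each PDU payload; the test deliberately corrupts the CRC via `accel_error_inject_error -o crc32c -t corrupt`, so every READ completes with a transient transport error. A minimal, self-contained sketch of that check (bitwise CRC-32C in pure Python for illustration only; SPDK itself uses hardware-accelerated CRC paths):

```python
def crc32c(data: bytes) -> int:
    """CRC-32C (Castagnoli), reflected polynomial 0x82F63B78 - the NVMe/TCP digest."""
    crc = 0xFFFFFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ (0x82F63B78 & -(crc & 1))
    return crc ^ 0xFFFFFFFF

# Sender computes the digest over the payload it transmits.
payload = b"example PDU payload"
sent_digest = crc32c(payload)

# If the payload (or the digest, as this test injects) is corrupted in flight,
# the receiver's recompute no longer matches and it reports a data digest error.
corrupted = b"Example PDU payload"
assert crc32c(corrupted) != sent_digest
```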
00:26:29.298 [2024-11-20 07:22:33.679694] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:29.298 [2024-11-20 07:22:33.679729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.298 [2024-11-20 07:22:33.679740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:29.298 [2024-11-20 07:22:33.685151] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:29.298 [2024-11-20 07:22:33.685175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.298 [2024-11-20 07:22:33.685184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:29.298 [2024-11-20 07:22:33.690450] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:29.298 [2024-11-20 07:22:33.690472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.298 [2024-11-20 07:22:33.690480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:29.298 [2024-11-20 07:22:33.695795] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:29.298 [2024-11-20 07:22:33.695817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.298 [2024-11-20 07:22:33.695826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:29.298 [2024-11-20 07:22:33.701209] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:29.299 [2024-11-20 07:22:33.701232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.299 [2024-11-20 07:22:33.701240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:29.299 [2024-11-20 07:22:33.706513] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:29.299 [2024-11-20 07:22:33.706535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.299 [2024-11-20 07:22:33.706543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:29.299 [2024-11-20 07:22:33.711974] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:29.299 [2024-11-20 07:22:33.711996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.299 [2024-11-20 07:22:33.712005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:29.299 [2024-11-20 07:22:33.717246] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:29.299 [2024-11-20 07:22:33.717268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.299 [2024-11-20 07:22:33.717276] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:29.299 [2024-11-20 07:22:33.722585] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:29.299 [2024-11-20 07:22:33.722607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.299 [2024-11-20 07:22:33.722617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:29.299 [2024-11-20 07:22:33.727974] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:29.299 [2024-11-20 07:22:33.727996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.299 [2024-11-20 07:22:33.728004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:29.299 [2024-11-20 07:22:33.733841] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:29.299 [2024-11-20 07:22:33.733863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.299 [2024-11-20 07:22:33.733872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:29.299 [2024-11-20 07:22:33.739183] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:29.299 [2024-11-20 07:22:33.739204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:26:29.299 [2024-11-20 07:22:33.739212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:29.299 [2024-11-20 07:22:33.744872] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:29.299 [2024-11-20 07:22:33.744894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.299 [2024-11-20 07:22:33.744903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:29.299 [2024-11-20 07:22:33.751712] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:29.299 [2024-11-20 07:22:33.751734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.299 [2024-11-20 07:22:33.751743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:29.299 [2024-11-20 07:22:33.758499] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:29.299 [2024-11-20 07:22:33.758521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.299 [2024-11-20 07:22:33.758529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:29.299 [2024-11-20 07:22:33.764802] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:29.299 [2024-11-20 07:22:33.764824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:0 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.299 [2024-11-20 07:22:33.764837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:29.299 [2024-11-20 07:22:33.771165] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:29.299 [2024-11-20 07:22:33.771187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.299 [2024-11-20 07:22:33.771195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:29.299 [2024-11-20 07:22:33.777447] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:29.299 [2024-11-20 07:22:33.777470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.299 [2024-11-20 07:22:33.777478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:29.299 [2024-11-20 07:22:33.783570] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:29.299 [2024-11-20 07:22:33.783591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.299 [2024-11-20 07:22:33.783600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:29.299 [2024-11-20 07:22:33.789336] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:29.299 [2024-11-20 07:22:33.789359] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.299 [2024-11-20 07:22:33.789368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:29.299 [2024-11-20 07:22:33.794856] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:29.299 [2024-11-20 07:22:33.794878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.299 [2024-11-20 07:22:33.794886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:29.299 [2024-11-20 07:22:33.800079] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:29.299 [2024-11-20 07:22:33.800100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.299 [2024-11-20 07:22:33.800108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:29.299 [2024-11-20 07:22:33.805323] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:29.299 [2024-11-20 07:22:33.805345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.299 [2024-11-20 07:22:33.805352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:29.299 [2024-11-20 07:22:33.810585] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 
00:26:29.299 [2024-11-20 07:22:33.810607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.299 [2024-11-20 07:22:33.810615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:29.299 [2024-11-20 07:22:33.815868] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:29.299 [2024-11-20 07:22:33.815892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.299 [2024-11-20 07:22:33.815900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:29.299 [2024-11-20 07:22:33.821654] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:29.299 [2024-11-20 07:22:33.821676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.299 [2024-11-20 07:22:33.821685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:29.299 [2024-11-20 07:22:33.828174] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:29.300 [2024-11-20 07:22:33.828196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.300 [2024-11-20 07:22:33.828205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:29.300 [2024-11-20 07:22:33.836268] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:29.300 [2024-11-20 07:22:33.836291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.300 [2024-11-20 07:22:33.836300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:29.300 [2024-11-20 07:22:33.842774] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:29.300 [2024-11-20 07:22:33.842796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.300 [2024-11-20 07:22:33.842804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:29.560 [2024-11-20 07:22:33.848810] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:29.560 [2024-11-20 07:22:33.848832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.560 [2024-11-20 07:22:33.848841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:29.560 [2024-11-20 07:22:33.854320] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:29.560 [2024-11-20 07:22:33.854340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.560 [2024-11-20 07:22:33.854348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0042 p:0 m:0 dnr:0 00:26:29.560 [2024-11-20 07:22:33.861356] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:29.560 [2024-11-20 07:22:33.861377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.560 [2024-11-20 07:22:33.861386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:29.560 [2024-11-20 07:22:33.868245] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:29.560 [2024-11-20 07:22:33.868266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.560 [2024-11-20 07:22:33.868274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:29.560 [2024-11-20 07:22:33.874619] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:29.560 [2024-11-20 07:22:33.874641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.560 [2024-11-20 07:22:33.874649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:29.560 [2024-11-20 07:22:33.881209] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:29.560 [2024-11-20 07:22:33.881231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.560 [2024-11-20 07:22:33.881239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:29.560 [2024-11-20 07:22:33.888457] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:29.560 [2024-11-20 07:22:33.888480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.560 [2024-11-20 07:22:33.888488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:29.560 [2024-11-20 07:22:33.894491] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:29.560 [2024-11-20 07:22:33.894511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.560 [2024-11-20 07:22:33.894519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:29.560 [2024-11-20 07:22:33.901059] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:29.560 [2024-11-20 07:22:33.901081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.560 [2024-11-20 07:22:33.901090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:29.560 [2024-11-20 07:22:33.907157] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:29.560 [2024-11-20 07:22:33.907179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.560 [2024-11-20 07:22:33.907188] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:29.560 [2024-11-20 07:22:33.912553] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:29.560 [2024-11-20 07:22:33.912575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.560 [2024-11-20 07:22:33.912583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:29.560 [2024-11-20 07:22:33.917914] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:29.560 [2024-11-20 07:22:33.917936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.560 [2024-11-20 07:22:33.917944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:29.560 [2024-11-20 07:22:33.923113] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:29.560 [2024-11-20 07:22:33.923134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.560 [2024-11-20 07:22:33.923150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:29.560 [2024-11-20 07:22:33.925924] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:29.560 [2024-11-20 07:22:33.925946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:672 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:26:29.561 [2024-11-20 07:22:33.925960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:29.561 [2024-11-20 07:22:33.931499] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:29.561 [2024-11-20 07:22:33.931520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.561 [2024-11-20 07:22:33.931528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:29.561 [2024-11-20 07:22:33.937009] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:29.561 [2024-11-20 07:22:33.937031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.561 [2024-11-20 07:22:33.937039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:29.561 [2024-11-20 07:22:33.942908] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:29.561 [2024-11-20 07:22:33.942929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.561 [2024-11-20 07:22:33.942938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:29.561 [2024-11-20 07:22:33.948964] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:29.561 [2024-11-20 07:22:33.948985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:11 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.561 [2024-11-20 07:22:33.948993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:29.561 [2024-11-20 07:22:33.954606] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:29.561 [2024-11-20 07:22:33.954627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.561 [2024-11-20 07:22:33.954635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:29.561 [2024-11-20 07:22:33.960385] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:29.561 [2024-11-20 07:22:33.960406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.561 [2024-11-20 07:22:33.960415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:29.561 [2024-11-20 07:22:33.965827] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:29.561 [2024-11-20 07:22:33.965849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.561 [2024-11-20 07:22:33.965857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:29.561 [2024-11-20 07:22:33.971213] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:29.561 [2024-11-20 
07:22:33.971235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.561 [2024-11-20 07:22:33.971244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:29.561 [2024-11-20 07:22:33.976320] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:29.561 [2024-11-20 07:22:33.976341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.561 [2024-11-20 07:22:33.976350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:29.561 [2024-11-20 07:22:33.981504] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:29.561 [2024-11-20 07:22:33.981527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.561 [2024-11-20 07:22:33.981535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:29.561 [2024-11-20 07:22:33.986567] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:29.561 [2024-11-20 07:22:33.986590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.561 [2024-11-20 07:22:33.986598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:29.561 [2024-11-20 07:22:33.991732] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x109b600) 00:26:29.561 [2024-11-20 07:22:33.991754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.561 [2024-11-20 07:22:33.991762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:29.561 [2024-11-20 07:22:33.997021] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:29.561 [2024-11-20 07:22:33.997043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.561 [2024-11-20 07:22:33.997051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:29.561 [2024-11-20 07:22:34.002292] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:29.561 [2024-11-20 07:22:34.002315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.561 [2024-11-20 07:22:34.002323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:29.561 [2024-11-20 07:22:34.007570] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:29.561 [2024-11-20 07:22:34.007591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.561 [2024-11-20 07:22:34.007599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:29.561 [2024-11-20 07:22:34.012751] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:29.561 [2024-11-20 07:22:34.012772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.561 [2024-11-20 07:22:34.012784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:29.561 [2024-11-20 07:22:34.018051] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:29.561 [2024-11-20 07:22:34.018073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.561 [2024-11-20 07:22:34.018080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:29.561 [2024-11-20 07:22:34.023361] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:29.561 [2024-11-20 07:22:34.023381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.561 [2024-11-20 07:22:34.023389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:29.561 [2024-11-20 07:22:34.028635] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:29.561 [2024-11-20 07:22:34.028656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.561 [2024-11-20 07:22:34.028664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 
p:0 m:0 dnr:0 00:26:29.561 [2024-11-20 07:22:34.033936] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:29.561 [2024-11-20 07:22:34.033964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.561 [2024-11-20 07:22:34.033972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:29.561 [2024-11-20 07:22:34.039654] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:29.561 [2024-11-20 07:22:34.039675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.561 [2024-11-20 07:22:34.039683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:29.561 [2024-11-20 07:22:34.046944] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:29.561 [2024-11-20 07:22:34.046972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.562 [2024-11-20 07:22:34.046981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:29.562 [2024-11-20 07:22:34.053262] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:29.562 [2024-11-20 07:22:34.053284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.562 [2024-11-20 07:22:34.053292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:29.562 [2024-11-20 07:22:34.059588] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:29.562 [2024-11-20 07:22:34.059608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.562 [2024-11-20 07:22:34.059616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:29.562 [2024-11-20 07:22:34.066861] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:29.562 [2024-11-20 07:22:34.066887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.562 [2024-11-20 07:22:34.066895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:29.562 [2024-11-20 07:22:34.074790] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:29.562 [2024-11-20 07:22:34.074812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.562 [2024-11-20 07:22:34.074820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:29.562 [2024-11-20 07:22:34.082053] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:29.562 [2024-11-20 07:22:34.082075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.562 [2024-11-20 07:22:34.082083] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:29.562 [2024-11-20 07:22:34.088010] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:29.562 [2024-11-20 07:22:34.088033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.562 [2024-11-20 07:22:34.088041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:29.562 [2024-11-20 07:22:34.093323] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:29.562 [2024-11-20 07:22:34.093346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.562 [2024-11-20 07:22:34.093355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:29.562 [2024-11-20 07:22:34.098602] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:29.562 [2024-11-20 07:22:34.098624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.562 [2024-11-20 07:22:34.098632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:29.562 [2024-11-20 07:22:34.104094] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:29.562 [2024-11-20 07:22:34.104116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:26:29.562 [2024-11-20 07:22:34.104124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:29.822 [2024-11-20 07:22:34.109578] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:29.822 [2024-11-20 07:22:34.109600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.822 [2024-11-20 07:22:34.109608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:29.822 [2024-11-20 07:22:34.115126] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:29.822 [2024-11-20 07:22:34.115148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.822 [2024-11-20 07:22:34.115157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:29.822 [2024-11-20 07:22:34.120776] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:29.822 [2024-11-20 07:22:34.120797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.822 [2024-11-20 07:22:34.120805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:29.822 [2024-11-20 07:22:34.124523] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:29.823 [2024-11-20 07:22:34.124545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:8 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.823 [2024-11-20 07:22:34.124554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:29.823 [2024-11-20 07:22:34.129137] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:29.823 [2024-11-20 07:22:34.129157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.823 [2024-11-20 07:22:34.129166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:29.823 [2024-11-20 07:22:34.134741] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:29.823 [2024-11-20 07:22:34.134763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.823 [2024-11-20 07:22:34.134771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:29.823 [2024-11-20 07:22:34.140284] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:29.823 [2024-11-20 07:22:34.140305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.823 [2024-11-20 07:22:34.140313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:29.823 [2024-11-20 07:22:34.145839] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:29.823 [2024-11-20 07:22:34.145861] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.823 [2024-11-20 07:22:34.145869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:29.823 [2024-11-20 07:22:34.151454] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:29.823 [2024-11-20 07:22:34.151476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.823 [2024-11-20 07:22:34.151484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:29.823 [2024-11-20 07:22:34.157132] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:29.823 [2024-11-20 07:22:34.157154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.823 [2024-11-20 07:22:34.157162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:29.823 [2024-11-20 07:22:34.162576] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:29.823 [2024-11-20 07:22:34.162599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.823 [2024-11-20 07:22:34.162612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:29.823 [2024-11-20 07:22:34.167987] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x109b600) 00:26:29.823 [2024-11-20 07:22:34.168008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.823 [2024-11-20 07:22:34.168016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:29.823 [2024-11-20 07:22:34.173071] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:29.823 [2024-11-20 07:22:34.173092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.823 [2024-11-20 07:22:34.173101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:29.823 [2024-11-20 07:22:34.178532] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:29.823 [2024-11-20 07:22:34.178554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.823 [2024-11-20 07:22:34.178562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:29.823 [2024-11-20 07:22:34.183857] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:29.823 [2024-11-20 07:22:34.183880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.823 [2024-11-20 07:22:34.183889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:29.823 [2024-11-20 07:22:34.189256] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:29.823 [2024-11-20 07:22:34.189279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.823 [2024-11-20 07:22:34.189288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:29.823 [2024-11-20 07:22:34.194602] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:29.823 [2024-11-20 07:22:34.194625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.823 [2024-11-20 07:22:34.194633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:29.823 [2024-11-20 07:22:34.199373] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:29.823 [2024-11-20 07:22:34.199395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.823 [2024-11-20 07:22:34.199403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:29.823 [2024-11-20 07:22:34.204718] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:29.823 [2024-11-20 07:22:34.204740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.823 [2024-11-20 07:22:34.204747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 
sqhd:0002 p:0 m:0 dnr:0 00:26:29.823 [2024-11-20 07:22:34.210044] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:29.823 [2024-11-20 07:22:34.210069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.823 [2024-11-20 07:22:34.210078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:29.823 [2024-11-20 07:22:34.215394] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:29.823 [2024-11-20 07:22:34.215416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.823 [2024-11-20 07:22:34.215425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:29.823 [2024-11-20 07:22:34.220659] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:29.823 [2024-11-20 07:22:34.220680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.823 [2024-11-20 07:22:34.220689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:29.823 [2024-11-20 07:22:34.225964] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:29.823 [2024-11-20 07:22:34.225985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.823 [2024-11-20 07:22:34.225993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:29.823 [2024-11-20 07:22:34.231252] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:29.823 [2024-11-20 07:22:34.231273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.823 [2024-11-20 07:22:34.231281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:29.823 [2024-11-20 07:22:34.236561] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:29.823 [2024-11-20 07:22:34.236583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.823 [2024-11-20 07:22:34.236591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:29.824 [2024-11-20 07:22:34.241885] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:29.824 [2024-11-20 07:22:34.241906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.824 [2024-11-20 07:22:34.241915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:29.824 [2024-11-20 07:22:34.247207] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:29.824 [2024-11-20 07:22:34.247229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.824 [2024-11-20 07:22:34.247237] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:29.824 [2024-11-20 07:22:34.252477] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:29.824 [2024-11-20 07:22:34.252499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.824 [2024-11-20 07:22:34.252507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:29.824 [2024-11-20 07:22:34.257835] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:29.824 [2024-11-20 07:22:34.257856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.824 [2024-11-20 07:22:34.257864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:29.824 [2024-11-20 07:22:34.263769] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:29.824 [2024-11-20 07:22:34.263791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.824 [2024-11-20 07:22:34.263800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:29.824 [2024-11-20 07:22:34.269029] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:29.824 [2024-11-20 07:22:34.269052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:26:29.824 [2024-11-20 07:22:34.269061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:29.824 [2024-11-20 07:22:34.274406] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:29.824 [2024-11-20 07:22:34.274427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.824 [2024-11-20 07:22:34.274435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:29.824 [2024-11-20 07:22:34.279665] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:29.824 [2024-11-20 07:22:34.279687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.824 [2024-11-20 07:22:34.279695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:29.824 [2024-11-20 07:22:34.284966] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:29.824 [2024-11-20 07:22:34.284987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.824 [2024-11-20 07:22:34.284995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:29.824 [2024-11-20 07:22:34.290226] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:29.824 [2024-11-20 07:22:34.290248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:4 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.824 [2024-11-20 07:22:34.290256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:29.824 [2024-11-20 07:22:34.295519] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:29.824 [2024-11-20 07:22:34.295540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.824 [2024-11-20 07:22:34.295548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:29.824 [2024-11-20 07:22:34.300851] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:29.824 [2024-11-20 07:22:34.300872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.824 [2024-11-20 07:22:34.300884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:29.824 [2024-11-20 07:22:34.306205] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:29.824 [2024-11-20 07:22:34.306228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.824 [2024-11-20 07:22:34.306236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:29.824 [2024-11-20 07:22:34.311581] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:29.824 [2024-11-20 07:22:34.311602] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.824 [2024-11-20 07:22:34.311610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:29.824 [2024-11-20 07:22:34.316967] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:29.824 [2024-11-20 07:22:34.316989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.824 [2024-11-20 07:22:34.316997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:29.824 [2024-11-20 07:22:34.322255] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:29.824 [2024-11-20 07:22:34.322276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.824 [2024-11-20 07:22:34.322284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:29.824 [2024-11-20 07:22:34.327726] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:29.824 [2024-11-20 07:22:34.327747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.824 [2024-11-20 07:22:34.327755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:29.824 [2024-11-20 07:22:34.332994] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x109b600) 00:26:29.824 [2024-11-20 07:22:34.333015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.824 [2024-11-20 07:22:34.333024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:29.824 [2024-11-20 07:22:34.338657] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:29.824 [2024-11-20 07:22:34.338679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.824 [2024-11-20 07:22:34.338687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:29.824 [2024-11-20 07:22:34.346233] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:29.824 [2024-11-20 07:22:34.346255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.824 [2024-11-20 07:22:34.346263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:29.824 [2024-11-20 07:22:34.349727] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:29.824 [2024-11-20 07:22:34.349747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.824 [2024-11-20 07:22:34.349756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:29.824 [2024-11-20 07:22:34.355641] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:29.824 [2024-11-20 07:22:34.355662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.824 [2024-11-20 07:22:34.355670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:29.824 [2024-11-20 07:22:34.361502] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:29.825 [2024-11-20 07:22:34.361523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.825 [2024-11-20 07:22:34.361531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:29.825 [2024-11-20 07:22:34.366825] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:29.825 [2024-11-20 07:22:34.366846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.825 [2024-11-20 07:22:34.366854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:30.084 [2024-11-20 07:22:34.372095] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:30.084 [2024-11-20 07:22:34.372116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.084 [2024-11-20 07:22:34.372125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 
sqhd:0062 p:0 m:0 dnr:0 00:26:30.084 [2024-11-20 07:22:34.378621] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:30.084 [2024-11-20 07:22:34.378647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.085 [2024-11-20 07:22:34.378655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:30.085 [2024-11-20 07:22:34.385332] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:30.085 [2024-11-20 07:22:34.385355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.085 [2024-11-20 07:22:34.385363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:30.085 [2024-11-20 07:22:34.392240] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:30.085 [2024-11-20 07:22:34.392261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.085 [2024-11-20 07:22:34.392269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:30.085 [2024-11-20 07:22:34.398097] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:30.085 [2024-11-20 07:22:34.398117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.085 [2024-11-20 07:22:34.398129] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:30.085 [2024-11-20 07:22:34.403383] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:30.085 [2024-11-20 07:22:34.403404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.085 [2024-11-20 07:22:34.403413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:30.085 [2024-11-20 07:22:34.408606] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:30.085 [2024-11-20 07:22:34.408627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.085 [2024-11-20 07:22:34.408636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:30.085 [2024-11-20 07:22:34.414092] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:30.085 [2024-11-20 07:22:34.414113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.085 [2024-11-20 07:22:34.414121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:30.085 [2024-11-20 07:22:34.419313] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:30.085 [2024-11-20 07:22:34.419333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.085 [2024-11-20 
07:22:34.419341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:30.085 [2024-11-20 07:22:34.424662] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:30.085 [2024-11-20 07:22:34.424683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.085 [2024-11-20 07:22:34.424691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:30.085 [2024-11-20 07:22:34.429977] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:30.085 [2024-11-20 07:22:34.430007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.085 [2024-11-20 07:22:34.430016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:30.085 [2024-11-20 07:22:34.435262] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:30.085 [2024-11-20 07:22:34.435283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.085 [2024-11-20 07:22:34.435291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:30.085 [2024-11-20 07:22:34.440616] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:30.085 [2024-11-20 07:22:34.440637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25088 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.085 [2024-11-20 07:22:34.440646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:30.085 [2024-11-20 07:22:34.446768] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:30.085 [2024-11-20 07:22:34.446793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.085 [2024-11-20 07:22:34.446802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:30.085 [2024-11-20 07:22:34.452111] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:30.085 [2024-11-20 07:22:34.452132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.085 [2024-11-20 07:22:34.452141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:30.085 [2024-11-20 07:22:34.457337] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:30.085 [2024-11-20 07:22:34.457358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.085 [2024-11-20 07:22:34.457366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:30.085 [2024-11-20 07:22:34.462644] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:30.085 [2024-11-20 07:22:34.462664] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.085 [2024-11-20 07:22:34.462672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:30.085 [2024-11-20 07:22:34.467910] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:30.085 [2024-11-20 07:22:34.467931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.085 [2024-11-20 07:22:34.467939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:30.085 [2024-11-20 07:22:34.473183] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:30.085 [2024-11-20 07:22:34.473202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.085 [2024-11-20 07:22:34.473210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:30.085 [2024-11-20 07:22:34.478393] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:30.085 [2024-11-20 07:22:34.478414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.085 [2024-11-20 07:22:34.478422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:30.085 [2024-11-20 07:22:34.483650] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x109b600) 00:26:30.085 [2024-11-20 07:22:34.483670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.085 [2024-11-20 07:22:34.483678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:30.085 [2024-11-20 07:22:34.488884] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:30.085 [2024-11-20 07:22:34.488905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.085 [2024-11-20 07:22:34.488914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:30.085 [2024-11-20 07:22:34.494162] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:30.085 [2024-11-20 07:22:34.494183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.085 [2024-11-20 07:22:34.494191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:30.085 [2024-11-20 07:22:34.499442] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:30.086 [2024-11-20 07:22:34.499463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.086 [2024-11-20 07:22:34.499471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:30.086 [2024-11-20 07:22:34.505105] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:30.086 [2024-11-20 07:22:34.505126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.086 [2024-11-20 07:22:34.505134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:30.086 [2024-11-20 07:22:34.511198] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:30.086 [2024-11-20 07:22:34.511224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.086 [2024-11-20 07:22:34.511232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:30.086 [2024-11-20 07:22:34.518268] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:30.086 [2024-11-20 07:22:34.518289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.086 [2024-11-20 07:22:34.518297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:30.086 [2024-11-20 07:22:34.524694] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:30.086 [2024-11-20 07:22:34.524715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.086 [2024-11-20 07:22:34.524723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 
sqhd:0042 p:0 m:0 dnr:0 00:26:30.086 [2024-11-20 07:22:34.530034] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:30.086 [2024-11-20 07:22:34.530055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.086 [2024-11-20 07:22:34.530063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:30.086 [2024-11-20 07:22:34.535743] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:30.086 [2024-11-20 07:22:34.535764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.086 [2024-11-20 07:22:34.535771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:30.086 [2024-11-20 07:22:34.542491] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:30.086 [2024-11-20 07:22:34.542516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.086 [2024-11-20 07:22:34.542524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:30.086 [2024-11-20 07:22:34.549194] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:30.086 [2024-11-20 07:22:34.549215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.086 [2024-11-20 07:22:34.549224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:30.086 [2024-11-20 07:22:34.555215] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:30.086 [2024-11-20 07:22:34.555236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.086 [2024-11-20 07:22:34.555245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:30.086 [2024-11-20 07:22:34.560804] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:30.086 [2024-11-20 07:22:34.560826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.086 [2024-11-20 07:22:34.560835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:30.086 [2024-11-20 07:22:34.566610] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:30.086 [2024-11-20 07:22:34.566631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.086 [2024-11-20 07:22:34.566639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:30.086 [2024-11-20 07:22:34.571871] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:30.086 [2024-11-20 07:22:34.571891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.086 [2024-11-20 
07:22:34.571899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:30.086 [2024-11-20 07:22:34.577259] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:30.086 [2024-11-20 07:22:34.577279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.086 [2024-11-20 07:22:34.577287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:30.086 [2024-11-20 07:22:34.583866] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:30.086 [2024-11-20 07:22:34.583888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.086 [2024-11-20 07:22:34.583896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:30.086 [2024-11-20 07:22:34.590718] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:30.086 [2024-11-20 07:22:34.590742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.086 [2024-11-20 07:22:34.590751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:30.086 [2024-11-20 07:22:34.597057] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:30.086 [2024-11-20 07:22:34.597080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13312 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.086 [2024-11-20 07:22:34.597088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:30.086 [2024-11-20 07:22:34.603921] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:30.086 [2024-11-20 07:22:34.603943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.086 [2024-11-20 07:22:34.603958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:30.086 [2024-11-20 07:22:34.609384] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:30.086 [2024-11-20 07:22:34.609405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.086 [2024-11-20 07:22:34.609413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:30.086 [2024-11-20 07:22:34.614736] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:30.086 [2024-11-20 07:22:34.614758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.086 [2024-11-20 07:22:34.614766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:30.086 [2024-11-20 07:22:34.620049] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:30.086 [2024-11-20 07:22:34.620070] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.086 [2024-11-20 07:22:34.620080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:30.086 [2024-11-20 07:22:34.625881] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:30.086 [2024-11-20 07:22:34.625902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.086 [2024-11-20 07:22:34.625910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:30.086 [2024-11-20 07:22:34.630913] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:30.087 [2024-11-20 07:22:34.630935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.087 [2024-11-20 07:22:34.630944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:30.347 [2024-11-20 07:22:34.636167] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:30.347 [2024-11-20 07:22:34.636188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.347 [2024-11-20 07:22:34.636196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:30.347 [2024-11-20 07:22:34.641482] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 
00:26:30.347 [2024-11-20 07:22:34.641503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.347 [2024-11-20 07:22:34.641517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:30.347 [2024-11-20 07:22:34.646751] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:30.347 [2024-11-20 07:22:34.646772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.347 [2024-11-20 07:22:34.646780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:30.347 [2024-11-20 07:22:34.652085] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:30.347 [2024-11-20 07:22:34.652106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.347 [2024-11-20 07:22:34.652114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:30.347 [2024-11-20 07:22:34.657401] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:30.347 [2024-11-20 07:22:34.657423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.347 [2024-11-20 07:22:34.657431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:30.347 [2024-11-20 07:22:34.662680] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:30.347 [2024-11-20 07:22:34.662702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.347 [2024-11-20 07:22:34.662710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:30.347 [2024-11-20 07:22:34.668013] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:30.347 [2024-11-20 07:22:34.668034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.348 [2024-11-20 07:22:34.668041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:30.348 [2024-11-20 07:22:34.673317] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:30.348 [2024-11-20 07:22:34.673339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.348 [2024-11-20 07:22:34.673358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:30.348 [2024-11-20 07:22:34.678596] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:30.348 [2024-11-20 07:22:34.678617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.348 [2024-11-20 07:22:34.678625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 
sqhd:0022 p:0 m:0 dnr:0 00:26:30.348 5495.00 IOPS, 686.88 MiB/s [2024-11-20T06:22:34.904Z] [2024-11-20 07:22:34.684404] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:30.348 [2024-11-20 07:22:34.684425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.348 [2024-11-20 07:22:34.684437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:30.348 [2024-11-20 07:22:34.689670] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:30.348 [2024-11-20 07:22:34.689695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.348 [2024-11-20 07:22:34.689703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:30.348 [2024-11-20 07:22:34.695018] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:30.348 [2024-11-20 07:22:34.695040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.348 [2024-11-20 07:22:34.695048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:30.348 [2024-11-20 07:22:34.700364] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:30.348 [2024-11-20 07:22:34.700385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.348 [2024-11-20 07:22:34.700393] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:30.348 [2024-11-20 07:22:34.706243] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:30.348 [2024-11-20 07:22:34.706265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.348 [2024-11-20 07:22:34.706273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:30.348 [2024-11-20 07:22:34.712359] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:30.348 [2024-11-20 07:22:34.712381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.348 [2024-11-20 07:22:34.712389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:30.348 [2024-11-20 07:22:34.719321] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:30.348 [2024-11-20 07:22:34.719343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.348 [2024-11-20 07:22:34.719351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:30.348 [2024-11-20 07:22:34.725881] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:30.348 [2024-11-20 07:22:34.725903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:26:30.348 [2024-11-20 07:22:34.725911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:30.348 [2024-11-20 07:22:34.733039] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:30.348 [2024-11-20 07:22:34.733061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.348 [2024-11-20 07:22:34.733069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:30.348 [2024-11-20 07:22:34.740090] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:30.348 [2024-11-20 07:22:34.740113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.348 [2024-11-20 07:22:34.740121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:30.348 [2024-11-20 07:22:34.745612] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:30.348 [2024-11-20 07:22:34.745634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.348 [2024-11-20 07:22:34.745642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:30.348 [2024-11-20 07:22:34.751058] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:30.348 [2024-11-20 07:22:34.751080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:13 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.348 [2024-11-20 07:22:34.751088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:30.348 [2024-11-20 07:22:34.756521] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:30.348 [2024-11-20 07:22:34.756543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.348 [2024-11-20 07:22:34.756551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:30.348 [2024-11-20 07:22:34.761933] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:30.348 [2024-11-20 07:22:34.761962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.348 [2024-11-20 07:22:34.761970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:30.348 [2024-11-20 07:22:34.767388] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:30.348 [2024-11-20 07:22:34.767409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.348 [2024-11-20 07:22:34.767417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:30.348 [2024-11-20 07:22:34.772810] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:30.348 [2024-11-20 07:22:34.772831] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.348 [2024-11-20 07:22:34.772839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:30.348 [2024-11-20 07:22:34.778210] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:30.348 [2024-11-20 07:22:34.778230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.348 [2024-11-20 07:22:34.778238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:30.348 [2024-11-20 07:22:34.783653] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:30.348 [2024-11-20 07:22:34.783675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.348 [2024-11-20 07:22:34.783683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:30.348 [2024-11-20 07:22:34.789288] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:30.348 [2024-11-20 07:22:34.789312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.348 [2024-11-20 07:22:34.789321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:30.348 [2024-11-20 07:22:34.794878] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x109b600) 00:26:30.349 [2024-11-20 07:22:34.794900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.349 [2024-11-20 07:22:34.794908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:30.349 [2024-11-20 07:22:34.800590] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:30.349 [2024-11-20 07:22:34.800612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.349 [2024-11-20 07:22:34.800620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:30.349 [2024-11-20 07:22:34.806065] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:30.349 [2024-11-20 07:22:34.806085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.349 [2024-11-20 07:22:34.806093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:30.349 [2024-11-20 07:22:34.811551] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:30.349 [2024-11-20 07:22:34.811572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.349 [2024-11-20 07:22:34.811580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:30.349 [2024-11-20 07:22:34.817066] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:30.349 [2024-11-20 07:22:34.817088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.349 [2024-11-20 07:22:34.817096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:30.349 [2024-11-20 07:22:34.822571] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:30.349 [2024-11-20 07:22:34.822593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.349 [2024-11-20 07:22:34.822601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:30.349 [2024-11-20 07:22:34.828076] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:30.349 [2024-11-20 07:22:34.828097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.349 [2024-11-20 07:22:34.828105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:30.349 [2024-11-20 07:22:34.833759] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:30.349 [2024-11-20 07:22:34.833780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.349 [2024-11-20 07:22:34.833789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 
p:0 m:0 dnr:0
00:26:30.349 [2024-11-20 07:22:34.839509] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600)
00:26:30.349 [2024-11-20 07:22:34.839530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:30.349 [2024-11-20 07:22:34.839539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:30.349 [2024-11-20 07:22:34.845134] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600)
00:26:30.349 [2024-11-20 07:22:34.845156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:30.349 [2024-11-20 07:22:34.845164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:30.349 [2024-11-20 07:22:34.850622] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600)
00:26:30.349 [2024-11-20 07:22:34.850644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:30.349 [2024-11-20 07:22:34.850651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:30.349 [2024-11-20 07:22:34.856027] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600)
00:26:30.349 [2024-11-20 07:22:34.856048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:30.349 [2024-11-20 07:22:34.856056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:30.349 [2024-11-20 07:22:34.861524] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600)
00:26:30.349 [2024-11-20 07:22:34.861545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:30.349 [2024-11-20 07:22:34.861553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:30.349 [2024-11-20 07:22:34.866902] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600)
00:26:30.349 [2024-11-20 07:22:34.866923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:30.349 [2024-11-20 07:22:34.866931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:30.349 [2024-11-20 07:22:34.872178] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600)
00:26:30.349 [2024-11-20 07:22:34.872199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:30.349 [2024-11-20 07:22:34.872207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:30.349 [2024-11-20 07:22:34.877674] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600)
00:26:30.349 [2024-11-20 07:22:34.877695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:30.349 [2024-11-20 07:22:34.877703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:30.349 [2024-11-20 07:22:34.883573] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600)
00:26:30.349 [2024-11-20 07:22:34.883595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:30.349 [2024-11-20 07:22:34.883606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:30.349 [2024-11-20 07:22:34.889483] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600)
00:26:30.349 [2024-11-20 07:22:34.889505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:30.349 [2024-11-20 07:22:34.889513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:30.609 [2024-11-20 07:22:34.896718] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600)
00:26:30.609 [2024-11-20 07:22:34.896741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:30.609 [2024-11-20 07:22:34.896749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:30.609 [2024-11-20 07:22:34.904705] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600)
00:26:30.609 [2024-11-20 07:22:34.904727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:30.609 [2024-11-20 07:22:34.904736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:30.609 [2024-11-20 07:22:34.912710] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600)
00:26:30.609 [2024-11-20 07:22:34.912733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:30.609 [2024-11-20 07:22:34.912742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:30.609 [2024-11-20 07:22:34.920354] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600)
00:26:30.609 [2024-11-20 07:22:34.920376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:30.609 [2024-11-20 07:22:34.920384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:30.609 [2024-11-20 07:22:34.927973] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600)
00:26:30.609 [2024-11-20 07:22:34.927995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:30.610 [2024-11-20 07:22:34.928003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:30.610 [2024-11-20 07:22:34.936165] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600)
00:26:30.610 [2024-11-20 07:22:34.936186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:30.610 [2024-11-20 07:22:34.936195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:30.610 [2024-11-20 07:22:34.943602] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600)
00:26:30.610 [2024-11-20 07:22:34.943624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:30.610 [2024-11-20 07:22:34.943633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:30.610 [2024-11-20 07:22:34.950754] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600)
00:26:30.610 [2024-11-20 07:22:34.950781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:30.610 [2024-11-20 07:22:34.950790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:30.610 [2024-11-20 07:22:34.958999] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600)
00:26:30.610 [2024-11-20 07:22:34.959022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:30.610 [2024-11-20 07:22:34.959031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:30.610 [2024-11-20 07:22:34.966822] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600)
00:26:30.610 [2024-11-20 07:22:34.966844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:30.610 [2024-11-20 07:22:34.966852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:30.610 [2024-11-20 07:22:34.973310] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600)
00:26:30.610 [2024-11-20 07:22:34.973332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:30.610 [2024-11-20 07:22:34.973340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:30.610 [2024-11-20 07:22:34.979738] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600)
00:26:30.610 [2024-11-20 07:22:34.979760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:30.610 [2024-11-20 07:22:34.979769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:30.610 [2024-11-20 07:22:34.985234] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600)
00:26:30.610 [2024-11-20 07:22:34.985256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:30.610 [2024-11-20 07:22:34.985264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:30.610 [2024-11-20 07:22:34.990649] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600)
00:26:30.610 [2024-11-20 07:22:34.990669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:30.610 [2024-11-20 07:22:34.990678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:30.610 [2024-11-20 07:22:34.996104] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600)
00:26:30.610 [2024-11-20 07:22:34.996124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:30.610 [2024-11-20 07:22:34.996133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:30.610 [2024-11-20 07:22:35.001663] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600)
00:26:30.610 [2024-11-20 07:22:35.001684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:30.610 [2024-11-20 07:22:35.001692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:30.610 [2024-11-20 07:22:35.007180] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600)
00:26:30.610 [2024-11-20 07:22:35.007201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:30.610 [2024-11-20 07:22:35.007209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:30.610 [2024-11-20 07:22:35.012556] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600)
00:26:30.610 [2024-11-20 07:22:35.012579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:30.610 [2024-11-20 07:22:35.012587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:30.610 [2024-11-20 07:22:35.017925] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600)
00:26:30.610 [2024-11-20 07:22:35.017952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:30.610 [2024-11-20 07:22:35.017962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:30.610 [2024-11-20 07:22:35.023332] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600)
00:26:30.610 [2024-11-20 07:22:35.023353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:30.610 [2024-11-20 07:22:35.023361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:30.610 [2024-11-20 07:22:35.029671] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600)
00:26:30.610 [2024-11-20 07:22:35.029694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:30.610 [2024-11-20 07:22:35.029702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:30.610 [2024-11-20 07:22:35.035860] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600)
00:26:30.610 [2024-11-20 07:22:35.035881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:30.610 [2024-11-20 07:22:35.035889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:30.610 [2024-11-20 07:22:35.041479] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600)
00:26:30.610 [2024-11-20 07:22:35.041500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:30.610 [2024-11-20 07:22:35.041508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:30.610 [2024-11-20 07:22:35.047632] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600)
00:26:30.610 [2024-11-20 07:22:35.047654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:30.610 [2024-11-20 07:22:35.047662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:30.610 [2024-11-20 07:22:35.054407] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600)
00:26:30.610 [2024-11-20 07:22:35.054429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:30.610 [2024-11-20 07:22:35.054442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:30.610 [2024-11-20 07:22:35.060739] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600)
00:26:30.610 [2024-11-20 07:22:35.060761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:30.610 [2024-11-20 07:22:35.060770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:30.610 [2024-11-20 07:22:35.066958] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600)
00:26:30.610 [2024-11-20 07:22:35.066980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:30.611 [2024-11-20 07:22:35.066988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:30.611 [2024-11-20 07:22:35.073417] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600)
00:26:30.611 [2024-11-20 07:22:35.073439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:30.611 [2024-11-20 07:22:35.073447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:30.611 [2024-11-20 07:22:35.079065] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600)
00:26:30.611 [2024-11-20 07:22:35.079086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:30.611 [2024-11-20 07:22:35.079094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:30.611 [2024-11-20 07:22:35.084518] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600)
00:26:30.611 [2024-11-20 07:22:35.084539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:30.611 [2024-11-20 07:22:35.084547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:30.611 [2024-11-20 07:22:35.090232] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600)
00:26:30.611 [2024-11-20 07:22:35.090254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:30.611 [2024-11-20 07:22:35.090262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:30.611 [2024-11-20 07:22:35.095705] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600)
00:26:30.611 [2024-11-20 07:22:35.095727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:30.611 [2024-11-20 07:22:35.095735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:30.611 [2024-11-20 07:22:35.101226] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600)
00:26:30.611 [2024-11-20 07:22:35.101247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:30.611 [2024-11-20 07:22:35.101255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:30.611 [2024-11-20 07:22:35.106780] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600)
00:26:30.611 [2024-11-20 07:22:35.106801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:30.611 [2024-11-20 07:22:35.106810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:30.611 [2024-11-20 07:22:35.112361] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600)
00:26:30.611 [2024-11-20 07:22:35.112382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:30.611 [2024-11-20 07:22:35.112390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:30.611 [2024-11-20 07:22:35.117829] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600)
00:26:30.611 [2024-11-20 07:22:35.117850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:30.611 [2024-11-20 07:22:35.117858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:30.611 [2024-11-20 07:22:35.123385] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600)
00:26:30.611 [2024-11-20 07:22:35.123406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:30.611 [2024-11-20 07:22:35.123414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:30.611 [2024-11-20 07:22:35.128978] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600)
00:26:30.611 [2024-11-20 07:22:35.129000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:30.611 [2024-11-20 07:22:35.129008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:30.611 [2024-11-20 07:22:35.134510] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600)
00:26:30.611 [2024-11-20 07:22:35.134532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:30.611 [2024-11-20 07:22:35.134540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:30.611 [2024-11-20 07:22:35.140460] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600)
00:26:30.611 [2024-11-20 07:22:35.140482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:30.611 [2024-11-20 07:22:35.140490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:30.611 [2024-11-20 07:22:35.146102] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600)
00:26:30.611 [2024-11-20 07:22:35.146123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:30.611 [2024-11-20 07:22:35.146131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:30.611 [2024-11-20 07:22:35.151703] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600)
00:26:30.611 [2024-11-20 07:22:35.151723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:30.611 [2024-11-20 07:22:35.151735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:30.611 [2024-11-20 07:22:35.155421] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600)
00:26:30.611 [2024-11-20 07:22:35.155442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:30.611 [2024-11-20 07:22:35.155451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:30.871 [2024-11-20 07:22:35.159768] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600)
00:26:30.871 [2024-11-20 07:22:35.159791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:30.871 [2024-11-20 07:22:35.159799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:30.871 [2024-11-20 07:22:35.165462] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600)
00:26:30.871 [2024-11-20 07:22:35.165483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:30.871 [2024-11-20 07:22:35.165493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:30.871 [2024-11-20 07:22:35.171004] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600)
00:26:30.871 [2024-11-20 07:22:35.171025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:30.871 [2024-11-20 07:22:35.171033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:30.871 [2024-11-20 07:22:35.176068] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600)
00:26:30.871 [2024-11-20 07:22:35.176090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:30.871 [2024-11-20 07:22:35.176098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:30.871 [2024-11-20 07:22:35.181715] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600)
00:26:30.871 [2024-11-20 07:22:35.181736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:30.871 [2024-11-20 07:22:35.181745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:30.871 [2024-11-20 07:22:35.187351] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600)
00:26:30.871 [2024-11-20 07:22:35.187373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:30.871 [2024-11-20 07:22:35.187381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:30.871 [2024-11-20 07:22:35.192687] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600)
00:26:30.871 [2024-11-20 07:22:35.192708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:30.871 [2024-11-20 07:22:35.192716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:30.871 [2024-11-20 07:22:35.198321] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600)
00:26:30.871 [2024-11-20 07:22:35.198346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:30.871 [2024-11-20 07:22:35.198355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:30.871 [2024-11-20 07:22:35.203869] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600)
00:26:30.871 [2024-11-20 07:22:35.203891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:30.871 [2024-11-20 07:22:35.203900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:30.871 [2024-11-20 07:22:35.207469] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600)
00:26:30.871 [2024-11-20 07:22:35.207490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:30.871 [2024-11-20 07:22:35.207498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:30.871 [2024-11-20 07:22:35.211596] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600)
00:26:30.871 [2024-11-20 07:22:35.211617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:30.871 [2024-11-20 07:22:35.211625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:30.871 [2024-11-20 07:22:35.216695] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600)
00:26:30.871 [2024-11-20 07:22:35.216716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:30.871 [2024-11-20 07:22:35.216724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:30.871 [2024-11-20 07:22:35.221943] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600)
00:26:30.871 [2024-11-20 07:22:35.221974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:30.871 [2024-11-20 07:22:35.221983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:30.872 [2024-11-20 07:22:35.227160] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600)
00:26:30.872 [2024-11-20 07:22:35.227181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:30.872 [2024-11-20 07:22:35.227189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:30.872 [2024-11-20 07:22:35.233369] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600)
00:26:30.872 [2024-11-20 07:22:35.233391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:30.872 [2024-11-20 07:22:35.233399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:30.872 [2024-11-20 07:22:35.240457] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600)
00:26:30.872 [2024-11-20 07:22:35.240479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:30.872 [2024-11-20 07:22:35.240487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:30.872 [2024-11-20 07:22:35.248258] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600)
00:26:30.872 [2024-11-20 07:22:35.248280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:30.872 [2024-11-20 07:22:35.248289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:30.872 [2024-11-20 07:22:35.253843] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600)
00:26:30.872 [2024-11-20 07:22:35.253865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:30.872 [2024-11-20 07:22:35.253873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:30.872 [2024-11-20 07:22:35.259982] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600)
00:26:30.872 [2024-11-20 07:22:35.260004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:30.872 [2024-11-20 07:22:35.260012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:30.872 [2024-11-20 07:22:35.266850] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600)
00:26:30.872 [2024-11-20 07:22:35.266871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:30.872 [2024-11-20 07:22:35.266879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:30.872 [2024-11-20 07:22:35.273380] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600)
00:26:30.872 [2024-11-20 07:22:35.273402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:30.872 [2024-11-20 07:22:35.273410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:30.872 [2024-11-20 07:22:35.280046] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600)
00:26:30.872 [2024-11-20 07:22:35.280067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:30.872 [2024-11-20 07:22:35.280075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:30.872 [2024-11-20 07:22:35.286253] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600)
00:26:30.872 [2024-11-20 07:22:35.286274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:30.872 [2024-11-20 07:22:35.286282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:30.872 [2024-11-20 07:22:35.291773] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600)
00:26:30.872 [2024-11-20 07:22:35.291794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:30.872 [2024-11-20 07:22:35.291802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:30.872 [2024-11-20 07:22:35.297223] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600)
00:26:30.872 [2024-11-20 07:22:35.297242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:30.872 [2024-11-20 07:22:35.297254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:30.872 [2024-11-20 07:22:35.302839] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600)
00:26:30.872 [2024-11-20 07:22:35.302860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:30.872 [2024-11-20 07:22:35.302868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:30.872 [2024-11-20 07:22:35.308229] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600)
00:26:30.872 [2024-11-20 07:22:35.308250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:30.872 [2024-11-20 07:22:35.308258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:30.872 [2024-11-20 07:22:35.313645] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600)
00:26:30.872 [2024-11-20 07:22:35.313666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:30.872 [2024-11-20 07:22:35.313674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:30.872 [2024-11-20 07:22:35.319104] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600)
00:26:30.872 [2024-11-20 07:22:35.319125]
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.872 [2024-11-20 07:22:35.319133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:30.872 [2024-11-20 07:22:35.324543] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:30.872 [2024-11-20 07:22:35.324564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.872 [2024-11-20 07:22:35.324572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:30.872 [2024-11-20 07:22:35.329940] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:30.872 [2024-11-20 07:22:35.329967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.872 [2024-11-20 07:22:35.329976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:30.872 [2024-11-20 07:22:35.335359] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:30.872 [2024-11-20 07:22:35.335380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.872 [2024-11-20 07:22:35.335387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:30.872 [2024-11-20 07:22:35.340966] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x109b600) 00:26:30.872 [2024-11-20 07:22:35.340987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.872 [2024-11-20 07:22:35.340995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:30.872 [2024-11-20 07:22:35.346675] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:30.872 [2024-11-20 07:22:35.346700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.872 [2024-11-20 07:22:35.346708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:30.872 [2024-11-20 07:22:35.352227] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:30.872 [2024-11-20 07:22:35.352248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.873 [2024-11-20 07:22:35.352257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:30.873 [2024-11-20 07:22:35.357763] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:30.873 [2024-11-20 07:22:35.357785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.873 [2024-11-20 07:22:35.357793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:30.873 [2024-11-20 07:22:35.363148] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:30.873 [2024-11-20 07:22:35.363169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.873 [2024-11-20 07:22:35.363177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:30.873 [2024-11-20 07:22:35.368700] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:30.873 [2024-11-20 07:22:35.368722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.873 [2024-11-20 07:22:35.368730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:30.873 [2024-11-20 07:22:35.374266] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:30.873 [2024-11-20 07:22:35.374288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.873 [2024-11-20 07:22:35.374297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:30.873 [2024-11-20 07:22:35.379718] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:30.873 [2024-11-20 07:22:35.379738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.873 [2024-11-20 07:22:35.379746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 
p:0 m:0 dnr:0 00:26:30.873 [2024-11-20 07:22:35.385121] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:30.873 [2024-11-20 07:22:35.385143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.873 [2024-11-20 07:22:35.385150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:30.873 [2024-11-20 07:22:35.390458] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:30.873 [2024-11-20 07:22:35.390480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.873 [2024-11-20 07:22:35.390488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:30.873 [2024-11-20 07:22:35.395775] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:30.873 [2024-11-20 07:22:35.395796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.873 [2024-11-20 07:22:35.395804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:30.873 [2024-11-20 07:22:35.401028] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:30.873 [2024-11-20 07:22:35.401048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.873 [2024-11-20 07:22:35.401056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:30.873 [2024-11-20 07:22:35.406333] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:30.873 [2024-11-20 07:22:35.406353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.873 [2024-11-20 07:22:35.406362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:30.873 [2024-11-20 07:22:35.411570] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:30.873 [2024-11-20 07:22:35.411591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.873 [2024-11-20 07:22:35.411599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:30.873 [2024-11-20 07:22:35.417041] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:30.873 [2024-11-20 07:22:35.417062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.873 [2024-11-20 07:22:35.417071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:31.133 [2024-11-20 07:22:35.422551] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:31.133 [2024-11-20 07:22:35.422573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.133 [2024-11-20 07:22:35.422580] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:31.133 [2024-11-20 07:22:35.427991] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:31.133 [2024-11-20 07:22:35.428012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.133 [2024-11-20 07:22:35.428020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:31.133 [2024-11-20 07:22:35.433487] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:31.133 [2024-11-20 07:22:35.433508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.133 [2024-11-20 07:22:35.433515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:31.133 [2024-11-20 07:22:35.439074] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:31.133 [2024-11-20 07:22:35.439096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.133 [2024-11-20 07:22:35.439107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:31.133 [2024-11-20 07:22:35.444754] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:31.133 [2024-11-20 07:22:35.444776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:26:31.133 [2024-11-20 07:22:35.444783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:31.133 [2024-11-20 07:22:35.450163] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:31.133 [2024-11-20 07:22:35.450184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.133 [2024-11-20 07:22:35.450192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:31.133 [2024-11-20 07:22:35.455732] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:31.133 [2024-11-20 07:22:35.455754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.133 [2024-11-20 07:22:35.455762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:31.133 [2024-11-20 07:22:35.461172] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:31.133 [2024-11-20 07:22:35.461193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.133 [2024-11-20 07:22:35.461201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:31.133 [2024-11-20 07:22:35.466750] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:31.133 [2024-11-20 07:22:35.466771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:9 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.134 [2024-11-20 07:22:35.466779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:31.134 [2024-11-20 07:22:35.472180] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:31.134 [2024-11-20 07:22:35.472202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.134 [2024-11-20 07:22:35.472210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:31.134 [2024-11-20 07:22:35.477648] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:31.134 [2024-11-20 07:22:35.477669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.134 [2024-11-20 07:22:35.477677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:31.134 [2024-11-20 07:22:35.483279] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:31.134 [2024-11-20 07:22:35.483300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.134 [2024-11-20 07:22:35.483308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:31.134 [2024-11-20 07:22:35.488794] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:31.134 [2024-11-20 07:22:35.488816] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.134 [2024-11-20 07:22:35.488824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:31.134 [2024-11-20 07:22:35.494157] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:31.134 [2024-11-20 07:22:35.494177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.134 [2024-11-20 07:22:35.494185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:31.134 [2024-11-20 07:22:35.499644] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:31.134 [2024-11-20 07:22:35.499666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.134 [2024-11-20 07:22:35.499674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:31.134 [2024-11-20 07:22:35.505094] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:31.134 [2024-11-20 07:22:35.505115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.134 [2024-11-20 07:22:35.505123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:31.134 [2024-11-20 07:22:35.511281] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x109b600) 00:26:31.134 [2024-11-20 07:22:35.511303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.134 [2024-11-20 07:22:35.511311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:31.134 [2024-11-20 07:22:35.517709] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:31.134 [2024-11-20 07:22:35.517731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.134 [2024-11-20 07:22:35.517739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:31.134 [2024-11-20 07:22:35.524817] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:31.134 [2024-11-20 07:22:35.524838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.134 [2024-11-20 07:22:35.524847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:31.134 [2024-11-20 07:22:35.531003] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:31.134 [2024-11-20 07:22:35.531023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.134 [2024-11-20 07:22:35.531031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:31.134 [2024-11-20 07:22:35.536679] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:31.134 [2024-11-20 07:22:35.536700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.134 [2024-11-20 07:22:35.536712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:31.134 [2024-11-20 07:22:35.542102] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:31.134 [2024-11-20 07:22:35.542124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.134 [2024-11-20 07:22:35.542131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:31.134 [2024-11-20 07:22:35.547481] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:31.134 [2024-11-20 07:22:35.547502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.134 [2024-11-20 07:22:35.547509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:31.134 [2024-11-20 07:22:35.552888] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:31.134 [2024-11-20 07:22:35.552909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.134 [2024-11-20 07:22:35.552917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 
p:0 m:0 dnr:0 00:26:31.134 [2024-11-20 07:22:35.558272] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:31.134 [2024-11-20 07:22:35.558293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.134 [2024-11-20 07:22:35.558302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:31.134 [2024-11-20 07:22:35.563740] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:31.134 [2024-11-20 07:22:35.563761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.134 [2024-11-20 07:22:35.563769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:31.134 [2024-11-20 07:22:35.569247] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:31.134 [2024-11-20 07:22:35.569268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.134 [2024-11-20 07:22:35.569276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:31.134 [2024-11-20 07:22:35.574738] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:31.134 [2024-11-20 07:22:35.574759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.134 [2024-11-20 07:22:35.574767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:31.134 [2024-11-20 07:22:35.580176] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:31.134 [2024-11-20 07:22:35.580197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.134 [2024-11-20 07:22:35.580204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:31.134 [2024-11-20 07:22:35.585607] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:31.134 [2024-11-20 07:22:35.585632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.134 [2024-11-20 07:22:35.585640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:31.134 [2024-11-20 07:22:35.591072] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:31.134 [2024-11-20 07:22:35.591094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.135 [2024-11-20 07:22:35.591102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:31.135 [2024-11-20 07:22:35.597330] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:31.135 [2024-11-20 07:22:35.597352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.135 [2024-11-20 07:22:35.597360] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:31.135 [2024-11-20 07:22:35.604290] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:31.135 [2024-11-20 07:22:35.604313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.135 [2024-11-20 07:22:35.604322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:31.135 [2024-11-20 07:22:35.611294] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:31.135 [2024-11-20 07:22:35.611316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.135 [2024-11-20 07:22:35.611325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:31.135 [2024-11-20 07:22:35.617680] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:31.135 [2024-11-20 07:22:35.617703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.135 [2024-11-20 07:22:35.617712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:31.135 [2024-11-20 07:22:35.624407] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:31.135 [2024-11-20 07:22:35.624430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:26:31.135 [2024-11-20 07:22:35.624438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:31.135 [2024-11-20 07:22:35.630769] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:31.135 [2024-11-20 07:22:35.630794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.135 [2024-11-20 07:22:35.630802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:31.135 [2024-11-20 07:22:35.637173] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:31.135 [2024-11-20 07:22:35.637196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.135 [2024-11-20 07:22:35.637204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:31.135 [2024-11-20 07:22:35.643547] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:31.135 [2024-11-20 07:22:35.643570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.135 [2024-11-20 07:22:35.643578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:31.135 [2024-11-20 07:22:35.650089] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:31.135 [2024-11-20 07:22:35.650112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:14 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.135 [2024-11-20 07:22:35.650120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:31.135 [2024-11-20 07:22:35.656693] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:31.135 [2024-11-20 07:22:35.656716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.135 [2024-11-20 07:22:35.656724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:31.135 [2024-11-20 07:22:35.663148] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:31.135 [2024-11-20 07:22:35.663170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.135 [2024-11-20 07:22:35.663179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:31.135 [2024-11-20 07:22:35.669414] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:31.135 [2024-11-20 07:22:35.669437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.135 [2024-11-20 07:22:35.669445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:31.135 [2024-11-20 07:22:35.675293] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:31.135 [2024-11-20 07:22:35.675321] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.135 [2024-11-20 07:22:35.675329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:31.394 5409.00 IOPS, 676.12 MiB/s [2024-11-20T06:22:35.950Z] [2024-11-20 07:22:35.682977] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x109b600) 00:26:31.394 [2024-11-20 07:22:35.683000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.394 [2024-11-20 07:22:35.683008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:31.394 00:26:31.394 Latency(us) 00:26:31.394 [2024-11-20T06:22:35.950Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:31.394 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:26:31.394 nvme0n1 : 2.00 5410.97 676.37 0.00 0.00 2953.82 669.61 8149.26 00:26:31.394 [2024-11-20T06:22:35.950Z] =================================================================================================================== 00:26:31.394 [2024-11-20T06:22:35.950Z] Total : 5410.97 676.37 0.00 0.00 2953.82 669.61 8149.26 00:26:31.394 { 00:26:31.394 "results": [ 00:26:31.394 { 00:26:31.394 "job": "nvme0n1", 00:26:31.394 "core_mask": "0x2", 00:26:31.394 "workload": "randread", 00:26:31.394 "status": "finished", 00:26:31.394 "queue_depth": 16, 00:26:31.394 "io_size": 131072, 00:26:31.394 "runtime": 2.002229, 00:26:31.394 "iops": 5410.969474520647, 00:26:31.394 "mibps": 676.3711843150809, 00:26:31.394 "io_failed": 0, 00:26:31.394 "io_timeout": 0, 00:26:31.394 "avg_latency_us": 2953.816932523216, 00:26:31.394 "min_latency_us": 669.6069565217391, 00:26:31.394 
"max_latency_us": 8149.2591304347825 00:26:31.394 } 00:26:31.394 ], 00:26:31.394 "core_count": 1 00:26:31.394 } 00:26:31.394 07:22:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:26:31.394 07:22:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:26:31.394 07:22:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:26:31.394 | .driver_specific 00:26:31.394 | .nvme_error 00:26:31.394 | .status_code 00:26:31.394 | .command_transient_transport_error' 00:26:31.394 07:22:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:26:31.394 07:22:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 350 > 0 )) 00:26:31.394 07:22:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1339363 00:26:31.394 07:22:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 1339363 ']' 00:26:31.394 07:22:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 1339363 00:26:31.394 07:22:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname 00:26:31.394 07:22:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:26:31.394 07:22:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1339363 00:26:31.654 07:22:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:26:31.654 07:22:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:26:31.654 07:22:35 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1339363' 00:26:31.654 killing process with pid 1339363 00:26:31.654 07:22:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 1339363 00:26:31.654 Received shutdown signal, test time was about 2.000000 seconds 00:26:31.654 00:26:31.654 Latency(us) 00:26:31.654 [2024-11-20T06:22:36.210Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:31.654 [2024-11-20T06:22:36.210Z] =================================================================================================================== 00:26:31.654 [2024-11-20T06:22:36.210Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:31.654 07:22:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 1339363 00:26:31.654 07:22:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:26:31.654 07:22:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:26:31.654 07:22:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:26:31.654 07:22:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:26:31.654 07:22:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:26:31.654 07:22:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1340051 00:26:31.654 07:22:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1340051 /var/tmp/bperf.sock 00:26:31.654 07:22:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:26:31.654 07:22:36 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 1340051 ']' 00:26:31.654 07:22:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:31.654 07:22:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:31.654 07:22:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:31.654 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:31.654 07:22:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:31.654 07:22:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:31.654 [2024-11-20 07:22:36.157978] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 
00:26:31.654 [2024-11-20 07:22:36.158025] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1340051 ] 00:26:31.913 [2024-11-20 07:22:36.231570] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:31.913 [2024-11-20 07:22:36.274939] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:31.913 07:22:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:31.913 07:22:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0 00:26:31.913 07:22:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:31.913 07:22:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:32.172 07:22:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:26:32.172 07:22:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:32.172 07:22:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:32.172 07:22:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:32.172 07:22:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:32.172 07:22:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:32.431 nvme0n1 00:26:32.431 07:22:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:26:32.431 07:22:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:32.431 07:22:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:32.431 07:22:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:32.431 07:22:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:26:32.431 07:22:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:32.690 Running I/O for 2 seconds... 
00:26:32.690 [2024-11-20 07:22:37.044372] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb180) with pdu=0x200016ee27f0 00:26:32.690 [2024-11-20 07:22:37.045290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:22794 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.690 [2024-11-20 07:22:37.045320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:32.690 [2024-11-20 07:22:37.053432] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb180) with pdu=0x200016ede470 00:26:32.690 [2024-11-20 07:22:37.054199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:92 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.690 [2024-11-20 07:22:37.054221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:32.690 [2024-11-20 07:22:37.063092] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb180) with pdu=0x200016ee49b0 00:26:32.690 [2024-11-20 07:22:37.063879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:18316 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.690 [2024-11-20 07:22:37.063900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:32.690 [2024-11-20 07:22:37.074267] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb180) with pdu=0x200016ef1430 00:26:32.690 [2024-11-20 07:22:37.075538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:11045 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.690 [2024-11-20 07:22:37.075560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:34 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:32.690 [2024-11-20 07:22:37.083319] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb180) with pdu=0x200016ee4578 00:26:32.690 [2024-11-20 07:22:37.084589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:16637 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.690 [2024-11-20 07:22:37.084608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:32.690 [2024-11-20 07:22:37.091953] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb180) with pdu=0x200016efda78 00:26:32.690 [2024-11-20 07:22:37.093208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:3339 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.690 [2024-11-20 07:22:37.093228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:32.690 [2024-11-20 07:22:37.101690] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb180) with pdu=0x200016ee2c28 00:26:32.690 [2024-11-20 07:22:37.102637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:2364 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.690 [2024-11-20 07:22:37.102657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:32.690 [2024-11-20 07:22:37.110575] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb180) with pdu=0x200016eecc78 00:26:32.690 [2024-11-20 07:22:37.111288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:21596 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.690 [2024-11-20 07:22:37.111307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:32.690 [2024-11-20 07:22:37.119665] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb180) with pdu=0x200016ee12d8 00:26:32.690 [2024-11-20 07:22:37.120272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:3791 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.690 [2024-11-20 07:22:37.120291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:32.690 [2024-11-20 07:22:37.129584] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb180) with pdu=0x200016ee0a68 00:26:32.690 [2024-11-20 07:22:37.130301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:17158 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.690 [2024-11-20 07:22:37.130323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:32.690 [2024-11-20 07:22:37.140161] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb180) with pdu=0x200016ef7538 00:26:32.690 [2024-11-20 07:22:37.141236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:25239 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.690 [2024-11-20 07:22:37.141255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:32.690 [2024-11-20 07:22:37.148347] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb180) with pdu=0x200016ee5220 00:26:32.690 [2024-11-20 07:22:37.148917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:23493 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.690 [2024-11-20 07:22:37.148935] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:32.690 [2024-11-20 07:22:37.157663] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb180) with pdu=0x200016ee5220 00:26:32.690 [2024-11-20 07:22:37.158365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:14967 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.690 [2024-11-20 07:22:37.158384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:32.690 [2024-11-20 07:22:37.167055] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb180) with pdu=0x200016ee5220 00:26:32.690 [2024-11-20 07:22:37.167726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:21074 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.690 [2024-11-20 07:22:37.167744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:32.691 [2024-11-20 07:22:37.176346] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb180) with pdu=0x200016ee5220 00:26:32.691 [2024-11-20 07:22:37.177089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:15114 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.691 [2024-11-20 07:22:37.177107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:32.691 [2024-11-20 07:22:37.185054] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb180) with pdu=0x200016efeb58 00:26:32.691 [2024-11-20 07:22:37.185728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:16024 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.691 
[2024-11-20 07:22:37.185746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:32.691 [2024-11-20 07:22:37.195422] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb180) with pdu=0x200016ee49b0 00:26:32.691 [2024-11-20 07:22:37.196189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:19543 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.691 [2024-11-20 07:22:37.196209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:32.691 [2024-11-20 07:22:37.205053] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb180) with pdu=0x200016ee23b8 00:26:32.691 [2024-11-20 07:22:37.205926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:24287 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.691 [2024-11-20 07:22:37.205944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:32.691 [2024-11-20 07:22:37.214895] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb180) with pdu=0x200016ee5a90 00:26:32.691 [2024-11-20 07:22:37.216075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:9589 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.691 [2024-11-20 07:22:37.216094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:32.691 [2024-11-20 07:22:37.223776] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb180) with pdu=0x200016ef2510 00:26:32.691 [2024-11-20 07:22:37.224621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:13567 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:26:32.691 [2024-11-20 07:22:37.224639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:32.691 [2024-11-20 07:22:37.233154] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb180) with pdu=0x200016eea680 00:26:32.691 [2024-11-20 07:22:37.233946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:8008 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.691 [2024-11-20 07:22:37.233968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:32.950 [2024-11-20 07:22:37.243140] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb180) with pdu=0x200016ef0788 00:26:32.950 [2024-11-20 07:22:37.244000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:6798 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.950 [2024-11-20 07:22:37.244020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:32.950 [2024-11-20 07:22:37.252638] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb180) with pdu=0x200016ef81e0 00:26:32.950 [2024-11-20 07:22:37.253620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:548 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.950 [2024-11-20 07:22:37.253640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:32.950 [2024-11-20 07:22:37.261911] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb180) with pdu=0x200016ef81e0 00:26:32.951 [2024-11-20 07:22:37.262838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:3 nsid:1 lba:6162 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.951 [2024-11-20 07:22:37.262856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:32.951 [2024-11-20 07:22:37.270589] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb180) with pdu=0x200016ef0ff8 00:26:32.951 [2024-11-20 07:22:37.271494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:10893 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.951 [2024-11-20 07:22:37.271512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:32.951 [2024-11-20 07:22:37.282171] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb180) with pdu=0x200016ef6cc8 00:26:32.951 [2024-11-20 07:22:37.283549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:9008 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.951 [2024-11-20 07:22:37.283566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:32.951 [2024-11-20 07:22:37.288960] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb180) with pdu=0x200016ee99d8 00:26:32.951 [2024-11-20 07:22:37.289721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:213 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.951 [2024-11-20 07:22:37.289740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:32.951 [2024-11-20 07:22:37.300018] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb180) with pdu=0x200016eeff18 00:26:32.951 [2024-11-20 07:22:37.300899] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:18396 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.951 [2024-11-20 07:22:37.300918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:32.951 [2024-11-20 07:22:37.309464] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb180) with pdu=0x200016ef8a50 00:26:32.951 [2024-11-20 07:22:37.310351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:6108 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.951 [2024-11-20 07:22:37.310370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:32.951 [2024-11-20 07:22:37.318720] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb180) with pdu=0x200016ef0788 00:26:32.951 [2024-11-20 07:22:37.319693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:5256 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.951 [2024-11-20 07:22:37.319712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:32.951 [2024-11-20 07:22:37.328080] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb180) with pdu=0x200016ef0788 00:26:32.951 [2024-11-20 07:22:37.329025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:9909 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.951 [2024-11-20 07:22:37.329045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:32.951 [2024-11-20 07:22:37.336779] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb180) with pdu=0x200016ef81e0 00:26:32.951 
[2024-11-20 07:22:37.337653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:7511 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:32.951 [2024-11-20 07:22:37.337672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0019 p:0 m:0 dnr:0
00:26:32.951 [2024-11-20 07:22:37.346846] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb180) with pdu=0x200016efd208
00:26:32.951 [2024-11-20 07:22:37.347778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:10587 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:32.951 [2024-11-20 07:22:37.347798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:005b p:0 m:0 dnr:0
00:26:32.951 [2024-11-20 07:22:37.356498] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb180) with pdu=0x200016ef31b8
... (the same three-record pattern -- tcp.c:2233:data_crc32_calc_done *ERROR*: Data digest error on tqpair=(0x7bb180), then a WRITE command *NOTICE* and a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion *NOTICE* -- repeats with varying cid/lba/pdu values from 07:22:37.357 through 07:22:38.031) ...
00:26:33.736 26979.00 IOPS, 105.39 MiB/s [2024-11-20T06:22:38.292Z]
... (pattern continues with varying cid/lba/pdu values from 07:22:38.041 through 07:22:38.098) ...
00:26:33.736 [2024-11-20 07:22:38.106671] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb180) with pdu=0x200016ef35f0
[2024-11-20 07:22:38.107998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:21480 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.736 [2024-11-20 07:22:38.108017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:33.736 [2024-11-20 07:22:38.116423] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb180) with pdu=0x200016ee27f0 00:26:33.736 [2024-11-20 07:22:38.117885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:8792 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.736 [2024-11-20 07:22:38.117904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:33.736 [2024-11-20 07:22:38.125127] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb180) with pdu=0x200016ee12d8 00:26:33.736 [2024-11-20 07:22:38.126160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:6704 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.736 [2024-11-20 07:22:38.126178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:33.736 [2024-11-20 07:22:38.134614] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb180) with pdu=0x200016eed4e8 00:26:33.736 [2024-11-20 07:22:38.135863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:10861 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.736 [2024-11-20 07:22:38.135882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:33.736 [2024-11-20 07:22:38.141600] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x7bb180) with pdu=0x200016efb048 00:26:33.736 [2024-11-20 07:22:38.142317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:23697 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.736 [2024-11-20 07:22:38.142335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:33.736 [2024-11-20 07:22:38.151353] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb180) with pdu=0x200016ef4f40 00:26:33.736 [2024-11-20 07:22:38.152239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:21951 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.736 [2024-11-20 07:22:38.152257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:33.736 [2024-11-20 07:22:38.162621] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb180) with pdu=0x200016ef0350 00:26:33.736 [2024-11-20 07:22:38.163857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:2273 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.736 [2024-11-20 07:22:38.163876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:33.736 [2024-11-20 07:22:38.172008] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb180) with pdu=0x200016ee4578 00:26:33.736 [2024-11-20 07:22:38.173353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:9372 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.736 [2024-11-20 07:22:38.173372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:33.736 [2024-11-20 07:22:38.180890] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb180) with pdu=0x200016ef2510 00:26:33.736 [2024-11-20 07:22:38.181912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:14098 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.736 [2024-11-20 07:22:38.181931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:33.736 [2024-11-20 07:22:38.190249] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb180) with pdu=0x200016ef1868 00:26:33.736 [2024-11-20 07:22:38.191268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:14380 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.736 [2024-11-20 07:22:38.191291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:33.736 [2024-11-20 07:22:38.199351] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb180) with pdu=0x200016ee9168 00:26:33.736 [2024-11-20 07:22:38.200266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:7787 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.736 [2024-11-20 07:22:38.200285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:33.736 [2024-11-20 07:22:38.209054] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb180) with pdu=0x200016ef57b0 00:26:33.736 [2024-11-20 07:22:38.210082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4378 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.736 [2024-11-20 07:22:38.210101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 
00:26:33.736 [2024-11-20 07:22:38.218968] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb180) with pdu=0x200016ee99d8 00:26:33.736 [2024-11-20 07:22:38.220100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:4525 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.736 [2024-11-20 07:22:38.220118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:33.736 [2024-11-20 07:22:38.228436] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb180) with pdu=0x200016ee0a68 00:26:33.736 [2024-11-20 07:22:38.229595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:7499 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.736 [2024-11-20 07:22:38.229615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:33.737 [2024-11-20 07:22:38.237573] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb180) with pdu=0x200016efc998 00:26:33.737 [2024-11-20 07:22:38.238679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:22090 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.737 [2024-11-20 07:22:38.238697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:33.737 [2024-11-20 07:22:38.247001] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb180) with pdu=0x200016ef7da8 00:26:33.737 [2024-11-20 07:22:38.247653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:21643 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.737 [2024-11-20 07:22:38.247672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:48 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:33.737 [2024-11-20 07:22:38.256051] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb180) with pdu=0x200016ef5be8 00:26:33.737 [2024-11-20 07:22:38.257004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:658 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.737 [2024-11-20 07:22:38.257023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:33.737 [2024-11-20 07:22:38.265401] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb180) with pdu=0x200016eeb328 00:26:33.737 [2024-11-20 07:22:38.266293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:19516 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.737 [2024-11-20 07:22:38.266311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:33.737 [2024-11-20 07:22:38.275092] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb180) with pdu=0x200016efbcf0 00:26:33.737 [2024-11-20 07:22:38.276076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:22247 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.737 [2024-11-20 07:22:38.276095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:33.996 [2024-11-20 07:22:38.284999] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb180) with pdu=0x200016ef7da8 00:26:33.996 [2024-11-20 07:22:38.286148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:18950 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.996 [2024-11-20 07:22:38.286168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:33.996 [2024-11-20 07:22:38.294012] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb180) with pdu=0x200016ef1430 00:26:33.996 [2024-11-20 07:22:38.295116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:484 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.996 [2024-11-20 07:22:38.295134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:33.996 [2024-11-20 07:22:38.303711] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb180) with pdu=0x200016efeb58 00:26:33.996 [2024-11-20 07:22:38.304984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:4491 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.997 [2024-11-20 07:22:38.305003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:33.997 [2024-11-20 07:22:38.311904] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb180) with pdu=0x200016ee8d30 00:26:33.997 [2024-11-20 07:22:38.312475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:25499 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.997 [2024-11-20 07:22:38.312494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:33.997 [2024-11-20 07:22:38.321632] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb180) with pdu=0x200016ede470 00:26:33.997 [2024-11-20 07:22:38.322550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:9488 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.997 [2024-11-20 07:22:38.322569] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:33.997 [2024-11-20 07:22:38.332501] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb180) with pdu=0x200016edece0 00:26:33.997 [2024-11-20 07:22:38.333913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:13236 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.997 [2024-11-20 07:22:38.333932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:33.997 [2024-11-20 07:22:38.341475] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb180) with pdu=0x200016eea680 00:26:33.997 [2024-11-20 07:22:38.342525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:24885 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.997 [2024-11-20 07:22:38.342544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:33.997 [2024-11-20 07:22:38.351119] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb180) with pdu=0x200016ee2c28 00:26:33.997 [2024-11-20 07:22:38.352279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:22191 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.997 [2024-11-20 07:22:38.352296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:33.997 [2024-11-20 07:22:38.360528] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb180) with pdu=0x200016ef2510 00:26:33.997 [2024-11-20 07:22:38.361242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:9118 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:33.997 [2024-11-20 07:22:38.361260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:33.997 [2024-11-20 07:22:38.369661] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb180) with pdu=0x200016ef0788 00:26:33.997 [2024-11-20 07:22:38.370603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:14851 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.997 [2024-11-20 07:22:38.370622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:33.997 [2024-11-20 07:22:38.379223] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb180) with pdu=0x200016ee38d0 00:26:33.997 [2024-11-20 07:22:38.380264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22697 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.997 [2024-11-20 07:22:38.380281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:33.997 [2024-11-20 07:22:38.388649] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb180) with pdu=0x200016ef1868 00:26:33.997 [2024-11-20 07:22:38.389706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:23389 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.997 [2024-11-20 07:22:38.389725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:33.997 [2024-11-20 07:22:38.398235] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb180) with pdu=0x200016ef7970 00:26:33.997 [2024-11-20 07:22:38.399283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:22 len:1 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.997 [2024-11-20 07:22:38.399301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:33.997 [2024-11-20 07:22:38.406959] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb180) with pdu=0x200016eec840 00:26:33.997 [2024-11-20 07:22:38.407876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:8899 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.997 [2024-11-20 07:22:38.407894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:33.997 [2024-11-20 07:22:38.416567] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb180) with pdu=0x200016eec840 00:26:33.997 [2024-11-20 07:22:38.417432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:195 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.997 [2024-11-20 07:22:38.417451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:33.997 [2024-11-20 07:22:38.427062] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb180) with pdu=0x200016eec840 00:26:33.997 [2024-11-20 07:22:38.428475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:5071 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.997 [2024-11-20 07:22:38.428493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:33.997 [2024-11-20 07:22:38.436831] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb180) with pdu=0x200016efdeb0 00:26:33.997 [2024-11-20 07:22:38.438365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:48 nsid:1 lba:20302 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.997 [2024-11-20 07:22:38.438383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:33.997 [2024-11-20 07:22:38.443387] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb180) with pdu=0x200016ef1ca0 00:26:33.997 [2024-11-20 07:22:38.444050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:13782 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.997 [2024-11-20 07:22:38.444068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:33.997 [2024-11-20 07:22:38.453237] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb180) with pdu=0x200016ee8088 00:26:33.997 [2024-11-20 07:22:38.454157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:24529 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.997 [2024-11-20 07:22:38.454174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:33.997 [2024-11-20 07:22:38.462895] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb180) with pdu=0x200016eeee38 00:26:33.997 [2024-11-20 07:22:38.463930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:22302 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.997 [2024-11-20 07:22:38.463951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:33.997 [2024-11-20 07:22:38.472378] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb180) with pdu=0x200016ee6300 00:26:33.997 [2024-11-20 07:22:38.473452] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:12939 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.997 [2024-11-20 07:22:38.473471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:33.997 [2024-11-20 07:22:38.481446] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb180) with pdu=0x200016ef7100 00:26:33.997 [2024-11-20 07:22:38.482378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:15116 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.997 [2024-11-20 07:22:38.482396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:33.997 [2024-11-20 07:22:38.492545] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb180) with pdu=0x200016ef7100 00:26:33.997 [2024-11-20 07:22:38.494034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:20369 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.997 [2024-11-20 07:22:38.494053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:33.998 [2024-11-20 07:22:38.499296] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb180) with pdu=0x200016eedd58 00:26:33.998 [2024-11-20 07:22:38.499881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:9043 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.998 [2024-11-20 07:22:38.499901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:33.998 [2024-11-20 07:22:38.509024] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb180) with pdu=0x200016ee23b8 00:26:33.998 
[2024-11-20 07:22:38.509807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21890 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.998 [2024-11-20 07:22:38.509826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:33.998 [2024-11-20 07:22:38.518825] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb180) with pdu=0x200016ee0ea0 00:26:33.998 [2024-11-20 07:22:38.519671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:8231 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.998 [2024-11-20 07:22:38.519694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:33.998 [2024-11-20 07:22:38.528603] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb180) with pdu=0x200016ee7818 00:26:33.998 [2024-11-20 07:22:38.529627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:5937 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.998 [2024-11-20 07:22:38.529647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:33.998 [2024-11-20 07:22:38.538570] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb180) with pdu=0x200016efcdd0 00:26:33.998 [2024-11-20 07:22:38.539740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:22377 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.998 [2024-11-20 07:22:38.539758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:34.258 [2024-11-20 07:22:38.546630] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb180) 
with pdu=0x200016ef4f40 00:26:34.258 [2024-11-20 07:22:38.547087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:16473 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.258 [2024-11-20 07:22:38.547106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:34.258 [2024-11-20 07:22:38.557486] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb180) with pdu=0x200016eeaab8 00:26:34.258 [2024-11-20 07:22:38.558546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:24282 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.258 [2024-11-20 07:22:38.558565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:34.258 [2024-11-20 07:22:38.566724] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb180) with pdu=0x200016efbcf0 00:26:34.258 [2024-11-20 07:22:38.567929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:13645 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.258 [2024-11-20 07:22:38.567955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:34.258 [2024-11-20 07:22:38.576740] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb180) with pdu=0x200016ef7100 00:26:34.258 [2024-11-20 07:22:38.578130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:13641 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.258 [2024-11-20 07:22:38.578148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:34.258 [2024-11-20 07:22:38.586662] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x7bb180) with pdu=0x200016ef5be8 00:26:34.258 [2024-11-20 07:22:38.588187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:20902 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.258 [2024-11-20 07:22:38.588206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:34.258 [2024-11-20 07:22:38.593570] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb180) with pdu=0x200016eeb760 00:26:34.259 [2024-11-20 07:22:38.594342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:21189 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.259 [2024-11-20 07:22:38.594361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:34.259 [2024-11-20 07:22:38.604879] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb180) with pdu=0x200016ef96f8 00:26:34.259 [2024-11-20 07:22:38.606068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:13812 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.259 [2024-11-20 07:22:38.606087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:34.259 [2024-11-20 07:22:38.613664] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb180) with pdu=0x200016efeb58 00:26:34.259 [2024-11-20 07:22:38.614850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:15307 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.259 [2024-11-20 07:22:38.614869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:34.259 [2024-11-20 
07:22:38.622422] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb180) with pdu=0x200016ef0788 00:26:34.259 [2024-11-20 07:22:38.623415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:3181 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.259 [2024-11-20 07:22:38.623434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:34.259 [2024-11-20 07:22:38.631436] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb180) with pdu=0x200016ef3a28 00:26:34.259 [2024-11-20 07:22:38.632047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:7034 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.259 [2024-11-20 07:22:38.632066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:34.259 [2024-11-20 07:22:38.642453] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb180) with pdu=0x200016ef6458 00:26:34.259 [2024-11-20 07:22:38.643620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:1186 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.259 [2024-11-20 07:22:38.643638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:34.259 [2024-11-20 07:22:38.651466] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb180) with pdu=0x200016ede8a8 00:26:34.259 [2024-11-20 07:22:38.652555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:10427 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.259 [2024-11-20 07:22:38.652574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0037 
p:0 m:0 dnr:0 00:26:34.259 [2024-11-20 07:22:38.660621] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb180) with pdu=0x200016ee5220 00:26:34.259 [2024-11-20 07:22:38.661449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:2598 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.259 [2024-11-20 07:22:38.661468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:34.259 [2024-11-20 07:22:38.669122] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb180) with pdu=0x200016ee3d08 00:26:34.259 [2024-11-20 07:22:38.669970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:21113 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.259 [2024-11-20 07:22:38.669989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:34.259 [2024-11-20 07:22:38.679093] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb180) with pdu=0x200016ee3d08 00:26:34.259 [2024-11-20 07:22:38.679934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:11845 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.259 [2024-11-20 07:22:38.679958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:34.259 [2024-11-20 07:22:38.688372] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb180) with pdu=0x200016ee3d08 00:26:34.259 [2024-11-20 07:22:38.689213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:3792 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.259 [2024-11-20 07:22:38.689232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:87 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:34.259 [2024-11-20 07:22:38.697718] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb180) with pdu=0x200016ee3d08 00:26:34.259 [2024-11-20 07:22:38.698570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:11443 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.259 [2024-11-20 07:22:38.698588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:34.259 [2024-11-20 07:22:38.706675] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb180) with pdu=0x200016ee01f8 00:26:34.259 [2024-11-20 07:22:38.707550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:10550 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.259 [2024-11-20 07:22:38.707569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:34.259 [2024-11-20 07:22:38.715491] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb180) with pdu=0x200016ee5220 00:26:34.259 [2024-11-20 07:22:38.716143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:14139 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.259 [2024-11-20 07:22:38.716161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:34.259 [2024-11-20 07:22:38.724422] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb180) with pdu=0x200016ef2948 00:26:34.260 [2024-11-20 07:22:38.725087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:13236 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.260 [2024-11-20 07:22:38.725106] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:34.260 [2024-11-20 07:22:38.735774] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb180) with pdu=0x200016ee2c28 00:26:34.260 [2024-11-20 07:22:38.736876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:16493 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.260 [2024-11-20 07:22:38.736895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:34.260 [2024-11-20 07:22:38.745258] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb180) with pdu=0x200016eec408 00:26:34.260 [2024-11-20 07:22:38.746369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:4881 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.260 [2024-11-20 07:22:38.746388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:34.260 [2024-11-20 07:22:38.754468] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb180) with pdu=0x200016ef5378 00:26:34.260 [2024-11-20 07:22:38.755549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:14673 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.260 [2024-11-20 07:22:38.755568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:34.260 [2024-11-20 07:22:38.764222] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb180) with pdu=0x200016ee88f8 00:26:34.260 [2024-11-20 07:22:38.765533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:6659 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.260 [2024-11-20 07:22:38.765555] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:34.260 [2024-11-20 07:22:38.773703] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb180) with pdu=0x200016ef6020 00:26:34.260 [2024-11-20 07:22:38.774935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:1560 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.260 [2024-11-20 07:22:38.774958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:34.260 [2024-11-20 07:22:38.782133] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb180) with pdu=0x200016eeb328 00:26:34.260 [2024-11-20 07:22:38.783116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:18268 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.260 [2024-11-20 07:22:38.783145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:34.260 [2024-11-20 07:22:38.791579] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb180) with pdu=0x200016ede8a8 00:26:34.260 [2024-11-20 07:22:38.792567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:453 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.260 [2024-11-20 07:22:38.792586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:34.260 [2024-11-20 07:22:38.801491] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb180) with pdu=0x200016ede8a8 00:26:34.260 [2024-11-20 07:22:38.802466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:22339 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:34.260 [2024-11-20 07:22:38.802485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:34.520 [2024-11-20 07:22:38.811008] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb180) with pdu=0x200016ede8a8 00:26:34.520 [2024-11-20 07:22:38.811984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:21736 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.520 [2024-11-20 07:22:38.812003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:34.520 [2024-11-20 07:22:38.820620] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb180) with pdu=0x200016ede470 00:26:34.520 [2024-11-20 07:22:38.821815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:18265 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.520 [2024-11-20 07:22:38.821833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:34.520 [2024-11-20 07:22:38.830196] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb180) with pdu=0x200016eeaef0 00:26:34.520 [2024-11-20 07:22:38.831525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:5957 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.520 [2024-11-20 07:22:38.831544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:34.520 [2024-11-20 07:22:38.840113] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb180) with pdu=0x200016eff3c8 00:26:34.520 [2024-11-20 07:22:38.841535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:8687 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.520 [2024-11-20 07:22:38.841553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:34.520 [2024-11-20 07:22:38.849728] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb180) with pdu=0x200016eee5c8 00:26:34.520 [2024-11-20 07:22:38.851080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:17542 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.520 [2024-11-20 07:22:38.851098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:34.520 [2024-11-20 07:22:38.857506] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb180) with pdu=0x200016ef4f40 00:26:34.520 [2024-11-20 07:22:38.858485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:16440 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.520 [2024-11-20 07:22:38.858503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:34.520 [2024-11-20 07:22:38.868576] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb180) with pdu=0x200016ef4f40 00:26:34.520 [2024-11-20 07:22:38.870034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:8174 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.520 [2024-11-20 07:22:38.870053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:34.520 [2024-11-20 07:22:38.875183] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb180) with pdu=0x200016ef1430 00:26:34.520 [2024-11-20 07:22:38.875781] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:24434 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.520 [2024-11-20 07:22:38.875800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:34.520 [2024-11-20 07:22:38.884641] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb180) with pdu=0x200016ee5a90 00:26:34.520 [2024-11-20 07:22:38.885249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:14084 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.520 [2024-11-20 07:22:38.885268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:34.520 [2024-11-20 07:22:38.893958] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb180) with pdu=0x200016ee5a90 00:26:34.520 [2024-11-20 07:22:38.894556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:284 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.520 [2024-11-20 07:22:38.894577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:34.520 [2024-11-20 07:22:38.904863] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb180) with pdu=0x200016ef9f68 00:26:34.520 [2024-11-20 07:22:38.905901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:11084 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.520 [2024-11-20 07:22:38.905920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:34.520 [2024-11-20 07:22:38.913816] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb180) with pdu=0x200016ef46d0 00:26:34.520 [2024-11-20 07:22:38.914983] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:16931 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.520 [2024-11-20 07:22:38.915001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:34.520 [2024-11-20 07:22:38.923561] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb180) with pdu=0x200016efa7d8 00:26:34.520 [2024-11-20 07:22:38.924790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:12009 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.520 [2024-11-20 07:22:38.924808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:34.520 [2024-11-20 07:22:38.933333] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb180) with pdu=0x200016eeee38 00:26:34.520 [2024-11-20 07:22:38.934745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:22805 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.520 [2024-11-20 07:22:38.934763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:34.520 [2024-11-20 07:22:38.940061] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb180) with pdu=0x200016eddc00 00:26:34.520 [2024-11-20 07:22:38.940617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:8285 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.520 [2024-11-20 07:22:38.940635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:34.520 [2024-11-20 07:22:38.949767] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb180) with pdu=0x200016ef46d0 
00:26:34.520 [2024-11-20 07:22:38.950571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:22350 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.520 [2024-11-20 07:22:38.950590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:34.521 [2024-11-20 07:22:38.959558] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb180) with pdu=0x200016ee5ec8 00:26:34.521 [2024-11-20 07:22:38.960440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:11564 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.521 [2024-11-20 07:22:38.960459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:34.521 [2024-11-20 07:22:38.969210] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb180) with pdu=0x200016eef6a8 00:26:34.521 [2024-11-20 07:22:38.970230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2342 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.521 [2024-11-20 07:22:38.970249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:34.521 [2024-11-20 07:22:38.977903] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb180) with pdu=0x200016ef9f68 00:26:34.521 [2024-11-20 07:22:38.978600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:18321 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.521 [2024-11-20 07:22:38.978619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:34.521 [2024-11-20 07:22:38.987368] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x7bb180) with pdu=0x200016ef1ca0 00:26:34.521 [2024-11-20 07:22:38.987806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:23849 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.521 [2024-11-20 07:22:38.987824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:34.521 [2024-11-20 07:22:38.997106] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb180) with pdu=0x200016eeaab8 00:26:34.521 [2024-11-20 07:22:38.997691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:6960 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.521 [2024-11-20 07:22:38.997710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:34.521 [2024-11-20 07:22:39.006845] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb180) with pdu=0x200016eddc00 00:26:34.521 [2024-11-20 07:22:39.007526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:1599 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.521 [2024-11-20 07:22:39.007549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:34.521 [2024-11-20 07:22:39.015644] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb180) with pdu=0x200016ef7100 00:26:34.521 [2024-11-20 07:22:39.016905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:23747 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.521 [2024-11-20 07:22:39.016924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:34.521 [2024-11-20 07:22:39.024269] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb180) with pdu=0x200016eddc00 00:26:34.521 [2024-11-20 07:22:39.024940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:22753 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.521 [2024-11-20 07:22:39.024964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:34.521 [2024-11-20 07:22:39.033899] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb180) with pdu=0x200016ef2510 00:26:34.521 27119.00 IOPS, 105.93 MiB/s [2024-11-20T06:22:39.077Z] [2024-11-20 07:22:39.034698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:14540 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:34.521 [2024-11-20 07:22:39.034716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:34.521 00:26:34.521 Latency(us) 00:26:34.521 [2024-11-20T06:22:39.077Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:34.521 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:26:34.521 nvme0n1 : 2.00 27126.28 105.96 0.00 0.00 4712.41 1802.24 13335.15 00:26:34.521 [2024-11-20T06:22:39.077Z] =================================================================================================================== 00:26:34.521 [2024-11-20T06:22:39.077Z] Total : 27126.28 105.96 0.00 0.00 4712.41 1802.24 13335.15 00:26:34.521 { 00:26:34.521 "results": [ 00:26:34.521 { 00:26:34.521 "job": "nvme0n1", 00:26:34.521 "core_mask": "0x2", 00:26:34.521 "workload": "randwrite", 00:26:34.521 "status": "finished", 00:26:34.521 "queue_depth": 128, 00:26:34.521 "io_size": 4096, 00:26:34.521 "runtime": 2.004182, 00:26:34.521 "iops": 27126.278950714055, 00:26:34.521 "mibps": 105.96202715122678, 00:26:34.521 
"io_failed": 0, 00:26:34.521 "io_timeout": 0, 00:26:34.521 "avg_latency_us": 4712.414623845785, 00:26:34.521 "min_latency_us": 1802.24, 00:26:34.521 "max_latency_us": 13335.151304347826 00:26:34.521 } 00:26:34.521 ], 00:26:34.521 "core_count": 1 00:26:34.521 } 00:26:34.521 07:22:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:26:34.521 07:22:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:26:34.521 07:22:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:26:34.521 | .driver_specific 00:26:34.521 | .nvme_error 00:26:34.521 | .status_code 00:26:34.521 | .command_transient_transport_error' 00:26:34.521 07:22:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:26:34.780 07:22:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 213 > 0 )) 00:26:34.780 07:22:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1340051 00:26:34.780 07:22:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 1340051 ']' 00:26:34.780 07:22:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 1340051 00:26:34.780 07:22:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname 00:26:34.780 07:22:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:26:34.780 07:22:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1340051 00:26:34.780 07:22:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_1 
00:26:34.780 07:22:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:26:34.780 07:22:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1340051' 00:26:34.780 killing process with pid 1340051 00:26:34.780 07:22:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 1340051 00:26:34.780 Received shutdown signal, test time was about 2.000000 seconds 00:26:34.780 00:26:34.780 Latency(us) 00:26:34.780 [2024-11-20T06:22:39.336Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:34.780 [2024-11-20T06:22:39.336Z] =================================================================================================================== 00:26:34.780 [2024-11-20T06:22:39.336Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:35.039 07:22:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 1340051 00:26:35.039 07:22:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:26:35.039 07:22:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:26:35.039 07:22:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:26:35.039 07:22:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:26:35.039 07:22:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:26:35.039 07:22:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1340522 00:26:35.039 07:22:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1340522 /var/tmp/bperf.sock 00:26:35.039 07:22:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:26:35.039 07:22:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 1340522 ']' 00:26:35.039 07:22:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:35.040 07:22:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:35.040 07:22:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:35.040 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:35.040 07:22:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:35.040 07:22:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:35.040 [2024-11-20 07:22:39.531928] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 00:26:35.040 [2024-11-20 07:22:39.531988] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1340522 ] 00:26:35.040 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:35.040 Zero copy mechanism will not be used. 
00:26:35.298 [2024-11-20 07:22:39.607956] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:35.298 [2024-11-20 07:22:39.646959] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:35.298 07:22:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:35.299 07:22:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0 00:26:35.299 07:22:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:35.299 07:22:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:35.557 07:22:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:26:35.557 07:22:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:35.557 07:22:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:35.557 07:22:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:35.557 07:22:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:35.557 07:22:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:35.815 nvme0n1 00:26:35.815 07:22:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:26:35.815 07:22:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:35.815 07:22:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:35.815 07:22:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:35.815 07:22:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:26:35.815 07:22:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:36.075 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:36.075 Zero copy mechanism will not be used. 00:26:36.075 Running I/O for 2 seconds... 00:26:36.075 [2024-11-20 07:22:40.445314] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:36.075 [2024-11-20 07:22:40.445386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.075 [2024-11-20 07:22:40.445416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:36.075 [2024-11-20 07:22:40.451257] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:36.075 [2024-11-20 07:22:40.451328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.075 [2024-11-20 07:22:40.451351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:36.075 [2024-11-20 
07:22:40.455758] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:36.075 [2024-11-20 07:22:40.455827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.075 [2024-11-20 07:22:40.455849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:36.075 [2024-11-20 07:22:40.460293] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:36.075 [2024-11-20 07:22:40.460363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.075 [2024-11-20 07:22:40.460384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:36.075 [2024-11-20 07:22:40.464807] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:36.075 [2024-11-20 07:22:40.464873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.075 [2024-11-20 07:22:40.464893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:36.075 [2024-11-20 07:22:40.469360] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:36.075 [2024-11-20 07:22:40.469432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.075 [2024-11-20 07:22:40.469450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0022 p:0 m:0 dnr:0 00:26:36.075 [2024-11-20 07:22:40.473819] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:36.076 [2024-11-20 07:22:40.473894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.076 [2024-11-20 07:22:40.473913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:36.076 [2024-11-20 07:22:40.478221] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:36.076 [2024-11-20 07:22:40.478285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.076 [2024-11-20 07:22:40.478303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:36.076 [2024-11-20 07:22:40.482638] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:36.076 [2024-11-20 07:22:40.482707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.076 [2024-11-20 07:22:40.482726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:36.076 [2024-11-20 07:22:40.487036] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:36.076 [2024-11-20 07:22:40.487098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.076 [2024-11-20 07:22:40.487116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:36.076 [2024-11-20 07:22:40.491465] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:36.076 [2024-11-20 07:22:40.491537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.076 [2024-11-20 07:22:40.491556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:36.076 [2024-11-20 07:22:40.495938] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:36.076 [2024-11-20 07:22:40.496075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.076 [2024-11-20 07:22:40.496094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:36.076 [2024-11-20 07:22:40.500584] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:36.076 [2024-11-20 07:22:40.500670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.076 [2024-11-20 07:22:40.500689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:36.076 [2024-11-20 07:22:40.504933] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:36.076 [2024-11-20 07:22:40.504998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.076 [2024-11-20 07:22:40.505018] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:36.076 [2024-11-20 07:22:40.509334] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:36.076 [2024-11-20 07:22:40.509389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.076 [2024-11-20 07:22:40.509408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:36.076 [2024-11-20 07:22:40.513704] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:36.076 [2024-11-20 07:22:40.513768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.076 [2024-11-20 07:22:40.513787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:36.076 [2024-11-20 07:22:40.518100] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:36.076 [2024-11-20 07:22:40.518160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.076 [2024-11-20 07:22:40.518179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:36.076 [2024-11-20 07:22:40.522436] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:36.076 [2024-11-20 07:22:40.522522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.076 
[2024-11-20 07:22:40.522541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:36.076 [2024-11-20 07:22:40.526765] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:36.076 [2024-11-20 07:22:40.526848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.076 [2024-11-20 07:22:40.526866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:36.076 [2024-11-20 07:22:40.531157] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:36.076 [2024-11-20 07:22:40.531239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.076 [2024-11-20 07:22:40.531259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:36.076 [2024-11-20 07:22:40.535499] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:36.076 [2024-11-20 07:22:40.535566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.076 [2024-11-20 07:22:40.535585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:36.076 [2024-11-20 07:22:40.539873] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:36.076 [2024-11-20 07:22:40.539931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16704 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.076 [2024-11-20 07:22:40.539961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:36.076 [2024-11-20 07:22:40.544171] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:36.076 [2024-11-20 07:22:40.544234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.076 [2024-11-20 07:22:40.544253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:36.076 [2024-11-20 07:22:40.548556] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:36.076 [2024-11-20 07:22:40.548616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.076 [2024-11-20 07:22:40.548635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:36.076 [2024-11-20 07:22:40.553304] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:36.076 [2024-11-20 07:22:40.553402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.076 [2024-11-20 07:22:40.553421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:36.076 [2024-11-20 07:22:40.559228] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:36.076 [2024-11-20 07:22:40.559396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.076 [2024-11-20 07:22:40.559415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:36.076 [2024-11-20 07:22:40.565186] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:36.076 [2024-11-20 07:22:40.565348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.076 [2024-11-20 07:22:40.565366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:36.076 [2024-11-20 07:22:40.570402] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:36.076 [2024-11-20 07:22:40.570497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.076 [2024-11-20 07:22:40.570515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:36.076 [2024-11-20 07:22:40.575940] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:36.076 [2024-11-20 07:22:40.576109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.077 [2024-11-20 07:22:40.576128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:36.077 [2024-11-20 07:22:40.581386] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:36.077 [2024-11-20 07:22:40.581511] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.077 [2024-11-20 07:22:40.581530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:36.077 [2024-11-20 07:22:40.586815] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:36.077 [2024-11-20 07:22:40.586898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.077 [2024-11-20 07:22:40.586917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:36.077 [2024-11-20 07:22:40.592693] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:36.077 [2024-11-20 07:22:40.592749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.077 [2024-11-20 07:22:40.592768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:36.077 [2024-11-20 07:22:40.599095] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:36.077 [2024-11-20 07:22:40.599179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.077 [2024-11-20 07:22:40.599198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:36.077 [2024-11-20 07:22:40.604425] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:36.077 
[2024-11-20 07:22:40.604516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.077 [2024-11-20 07:22:40.604534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:36.077 [2024-11-20 07:22:40.610083] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:36.077 [2024-11-20 07:22:40.610170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.077 [2024-11-20 07:22:40.610188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:36.077 [2024-11-20 07:22:40.615550] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:36.077 [2024-11-20 07:22:40.615642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.077 [2024-11-20 07:22:40.615660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:36.077 [2024-11-20 07:22:40.620409] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:36.077 [2024-11-20 07:22:40.620481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.077 [2024-11-20 07:22:40.620506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:36.337 [2024-11-20 07:22:40.625157] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:36.337 [2024-11-20 07:22:40.625219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.337 [2024-11-20 07:22:40.625240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:36.337 [2024-11-20 07:22:40.630115] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:36.337 [2024-11-20 07:22:40.630215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.337 [2024-11-20 07:22:40.630236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:36.337 [2024-11-20 07:22:40.635323] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:36.337 [2024-11-20 07:22:40.635389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.338 [2024-11-20 07:22:40.635409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:36.338 [2024-11-20 07:22:40.641230] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:36.338 [2024-11-20 07:22:40.641299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.338 [2024-11-20 07:22:40.641319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:36.338 [2024-11-20 07:22:40.646659] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:36.338 [2024-11-20 07:22:40.646730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.338 [2024-11-20 07:22:40.646749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:36.338 [2024-11-20 07:22:40.651738] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:36.338 [2024-11-20 07:22:40.651827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.338 [2024-11-20 07:22:40.651845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:36.338 [2024-11-20 07:22:40.656487] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:36.338 [2024-11-20 07:22:40.656542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.338 [2024-11-20 07:22:40.656561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:36.338 [2024-11-20 07:22:40.661189] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:36.338 [2024-11-20 07:22:40.661304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.338 [2024-11-20 07:22:40.661322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 
dnr:0 00:26:36.338 [2024-11-20 07:22:40.665588] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:36.338 [2024-11-20 07:22:40.665684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.338 [2024-11-20 07:22:40.665702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:36.338 [2024-11-20 07:22:40.670103] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:36.338 [2024-11-20 07:22:40.670165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.338 [2024-11-20 07:22:40.670184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:36.338 [2024-11-20 07:22:40.674791] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:36.338 [2024-11-20 07:22:40.674848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.338 [2024-11-20 07:22:40.674871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:36.338 [2024-11-20 07:22:40.679609] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:36.338 [2024-11-20 07:22:40.679723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.338 [2024-11-20 07:22:40.679741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:36.338 [2024-11-20 07:22:40.684208] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:36.338 [2024-11-20 07:22:40.684262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.338 [2024-11-20 07:22:40.684280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:36.338 [2024-11-20 07:22:40.688786] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:36.338 [2024-11-20 07:22:40.688845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.338 [2024-11-20 07:22:40.688864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:36.338 [2024-11-20 07:22:40.693350] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:36.338 [2024-11-20 07:22:40.693462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.338 [2024-11-20 07:22:40.693481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:36.338 [2024-11-20 07:22:40.697976] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:36.338 [2024-11-20 07:22:40.698057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.338 [2024-11-20 07:22:40.698075] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:36.338 [2024-11-20 07:22:40.702378] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:36.338 [2024-11-20 07:22:40.702476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.338 [2024-11-20 07:22:40.702494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:36.338 [2024-11-20 07:22:40.706939] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:36.338 [2024-11-20 07:22:40.707004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.338 [2024-11-20 07:22:40.707023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:36.338 [2024-11-20 07:22:40.711708] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:36.338 [2024-11-20 07:22:40.711801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.338 [2024-11-20 07:22:40.711819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:36.338 [2024-11-20 07:22:40.716636] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:36.338 [2024-11-20 07:22:40.716713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.338 [2024-11-20 07:22:40.716731] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:36.338 [2024-11-20 07:22:40.721761] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:36.338 [2024-11-20 07:22:40.721813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.338 [2024-11-20 07:22:40.721831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:36.338 [2024-11-20 07:22:40.727060] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:36.338 [2024-11-20 07:22:40.727158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.339 [2024-11-20 07:22:40.727176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:36.339 [2024-11-20 07:22:40.733173] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:36.339 [2024-11-20 07:22:40.733232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.339 [2024-11-20 07:22:40.733251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:36.339 [2024-11-20 07:22:40.738114] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:36.339 [2024-11-20 07:22:40.738211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:36.339 [2024-11-20 07:22:40.738229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:36.339 [2024-11-20 07:22:40.742829] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:36.339 [2024-11-20 07:22:40.742911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.339 [2024-11-20 07:22:40.742929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:36.339 [2024-11-20 07:22:40.747375] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:36.339 [2024-11-20 07:22:40.747434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.339 [2024-11-20 07:22:40.747452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:36.339 [2024-11-20 07:22:40.752130] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:36.339 [2024-11-20 07:22:40.752204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.339 [2024-11-20 07:22:40.752223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:36.339 [2024-11-20 07:22:40.756806] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:36.339 [2024-11-20 07:22:40.756897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21888 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.339 [2024-11-20 07:22:40.756916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:36.339 [2024-11-20 07:22:40.761473] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:36.339 [2024-11-20 07:22:40.761547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.339 [2024-11-20 07:22:40.761565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:36.339 [2024-11-20 07:22:40.766162] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:36.339 [2024-11-20 07:22:40.766262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.339 [2024-11-20 07:22:40.766280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:36.339 [2024-11-20 07:22:40.770762] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:36.339 [2024-11-20 07:22:40.770815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.339 [2024-11-20 07:22:40.770833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:36.339 [2024-11-20 07:22:40.775242] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:36.339 [2024-11-20 07:22:40.775349] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.339 [2024-11-20 07:22:40.775369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:36.339 [2024-11-20 07:22:40.779909] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:36.339 [2024-11-20 07:22:40.779968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.339 [2024-11-20 07:22:40.779987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:36.339 [2024-11-20 07:22:40.784448] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:36.339 [2024-11-20 07:22:40.784556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.339 [2024-11-20 07:22:40.784574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:36.339 [2024-11-20 07:22:40.788936] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:36.339 [2024-11-20 07:22:40.789025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.339 [2024-11-20 07:22:40.789043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:36.339 [2024-11-20 07:22:40.793391] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:36.339 [2024-11-20 07:22:40.793468] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.339 [2024-11-20 07:22:40.793487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:36.339 [2024-11-20 07:22:40.797873] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:36.339 [2024-11-20 07:22:40.797945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.339 [2024-11-20 07:22:40.797974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:36.339 [2024-11-20 07:22:40.802325] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:36.339 [2024-11-20 07:22:40.802380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.339 [2024-11-20 07:22:40.802399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:36.339 [2024-11-20 07:22:40.806662] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:36.339 [2024-11-20 07:22:40.806795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.339 [2024-11-20 07:22:40.806813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:36.339 [2024-11-20 07:22:40.811599] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with 
pdu=0x200016eff3c8 00:26:36.339 [2024-11-20 07:22:40.811694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.340 [2024-11-20 07:22:40.811713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:36.340 [2024-11-20 07:22:40.816593] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:36.340 [2024-11-20 07:22:40.816662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.340 [2024-11-20 07:22:40.816680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:36.340 [2024-11-20 07:22:40.821684] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:36.340 [2024-11-20 07:22:40.821781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.340 [2024-11-20 07:22:40.821800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:36.340 [2024-11-20 07:22:40.826735] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:36.340 [2024-11-20 07:22:40.826833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.340 [2024-11-20 07:22:40.826852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:36.340 [2024-11-20 07:22:40.831877] tcp.c:2233:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:36.340 [2024-11-20 07:22:40.831968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.340 [2024-11-20 07:22:40.831987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:36.340 [2024-11-20 07:22:40.837106] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:36.340 [2024-11-20 07:22:40.837240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.340 [2024-11-20 07:22:40.837259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:36.340 [2024-11-20 07:22:40.842169] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:36.340 [2024-11-20 07:22:40.842318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.340 [2024-11-20 07:22:40.842336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:36.340 [2024-11-20 07:22:40.847220] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:36.340 [2024-11-20 07:22:40.847319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.340 [2024-11-20 07:22:40.847337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:36.340 [2024-11-20 07:22:40.852269] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:36.340 [2024-11-20 07:22:40.852430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.340 [2024-11-20 07:22:40.852448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:36.340 [2024-11-20 07:22:40.857447] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:36.340 [2024-11-20 07:22:40.857504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.340 [2024-11-20 07:22:40.857523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:36.340 [2024-11-20 07:22:40.862154] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:36.340 [2024-11-20 07:22:40.862238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.340 [2024-11-20 07:22:40.862256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:36.340 [2024-11-20 07:22:40.867293] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:36.340 [2024-11-20 07:22:40.867587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.340 [2024-11-20 07:22:40.867606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 
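The records above repeat in a fixed three-line shape: a `tcp.c` data-digest error, the `WRITE` command it hit, and a `TRANSIENT TRANSPORT ERROR (00/22)` completion. A minimal sketch for tallying them from a captured log (the regex names and the three-record excerpt are illustrative, not part of SPDK):

```python
import re
from collections import Counter

# Hypothetical excerpt in the same shape as the records in this log.
LOG = """\
[2024-11-20 07:22:40.761473] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8
[2024-11-20 07:22:40.761547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[2024-11-20 07:22:40.761565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
"""

# One pattern per record type; fields mirror what spdk prints.
DIGEST_RE = re.compile(r"data_crc32_calc_done: \*ERROR\*: Data digest error")
WRITE_RE = re.compile(r"WRITE sqid:(\d+) cid:(\d+) nsid:(\d+) lba:(\d+) len:(\d+)")

def summarize(text):
    """Count digest errors and the LBAs of the WRITEs they were reported on."""
    errors = len(DIGEST_RE.findall(text))
    lbas = Counter(int(m.group(4)) for m in WRITE_RE.finditer(text))
    return errors, lbas

errors, lbas = summarize(LOG)
```

Running `summarize` over a full capture of this section would show one digest error per WRITE, each completing with status 00/22, which is the expected pattern for a digest-error-injection test rather than a spread of unrelated failures.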
00:26:36.340 [2024-11-20 07:22:40.873373] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:36.340 [2024-11-20 07:22:40.873621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.340 [2024-11-20 07:22:40.873641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:36.340 [2024-11-20 07:22:40.878137] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:36.340 [2024-11-20 07:22:40.878380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.340 [2024-11-20 07:22:40.878400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:36.340 [2024-11-20 07:22:40.883199] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:36.340 [2024-11-20 07:22:40.883465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.340 [2024-11-20 07:22:40.883490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:36.601 [2024-11-20 07:22:40.887522] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:36.601 [2024-11-20 07:22:40.887789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.601 [2024-11-20 07:22:40.887811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:36.601 [2024-11-20 07:22:40.891925] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:36.601 [2024-11-20 07:22:40.892196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.601 [2024-11-20 07:22:40.892217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:36.601 [2024-11-20 07:22:40.896468] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:36.601 [2024-11-20 07:22:40.896737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.601 [2024-11-20 07:22:40.896757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:36.601 [2024-11-20 07:22:40.900647] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:36.601 [2024-11-20 07:22:40.900920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.601 [2024-11-20 07:22:40.900940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:36.601 [2024-11-20 07:22:40.904918] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:36.601 [2024-11-20 07:22:40.905183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.601 [2024-11-20 07:22:40.905214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:36.601 [2024-11-20 07:22:40.909320] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:36.601 [2024-11-20 07:22:40.909563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.601 [2024-11-20 07:22:40.909583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:36.601 [2024-11-20 07:22:40.914213] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:36.601 [2024-11-20 07:22:40.914467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.601 [2024-11-20 07:22:40.914486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:36.601 [2024-11-20 07:22:40.919099] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:36.601 [2024-11-20 07:22:40.919361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.601 [2024-11-20 07:22:40.919380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:36.601 [2024-11-20 07:22:40.923636] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:36.601 [2024-11-20 07:22:40.923885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.601 [2024-11-20 07:22:40.923904] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:36.601 [2024-11-20 07:22:40.928058] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:36.601 [2024-11-20 07:22:40.928329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.601 [2024-11-20 07:22:40.928349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:36.601 [2024-11-20 07:22:40.932363] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:36.601 [2024-11-20 07:22:40.932640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.601 [2024-11-20 07:22:40.932660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:36.601 [2024-11-20 07:22:40.936877] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:36.601 [2024-11-20 07:22:40.937155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.602 [2024-11-20 07:22:40.937176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:36.602 [2024-11-20 07:22:40.941025] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:36.602 [2024-11-20 07:22:40.941304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:36.602 [2024-11-20 07:22:40.941323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:36.602 [2024-11-20 07:22:40.945173] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:36.602 [2024-11-20 07:22:40.945442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.602 [2024-11-20 07:22:40.945462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:36.602 [2024-11-20 07:22:40.949295] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:36.602 [2024-11-20 07:22:40.949557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.602 [2024-11-20 07:22:40.949577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:36.602 [2024-11-20 07:22:40.953443] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:36.602 [2024-11-20 07:22:40.953700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.602 [2024-11-20 07:22:40.953720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:36.602 [2024-11-20 07:22:40.957586] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:36.602 [2024-11-20 07:22:40.957845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11936 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.602 [2024-11-20 07:22:40.957864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:36.602 [2024-11-20 07:22:40.961741] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:36.602 [2024-11-20 07:22:40.962003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.602 [2024-11-20 07:22:40.962027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:36.602 [2024-11-20 07:22:40.965915] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:36.602 [2024-11-20 07:22:40.966190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.602 [2024-11-20 07:22:40.966210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:36.602 [2024-11-20 07:22:40.970025] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:36.602 [2024-11-20 07:22:40.970295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.602 [2024-11-20 07:22:40.970316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:36.602 [2024-11-20 07:22:40.974133] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:36.602 [2024-11-20 07:22:40.974407] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.602 [2024-11-20 07:22:40.974427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:36.602 [2024-11-20 07:22:40.978246] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:36.602 [2024-11-20 07:22:40.978528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.602 [2024-11-20 07:22:40.978548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:36.602 [2024-11-20 07:22:40.982398] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:36.602 [2024-11-20 07:22:40.982670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.602 [2024-11-20 07:22:40.982689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:36.602 [2024-11-20 07:22:40.986491] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:36.602 [2024-11-20 07:22:40.986751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.602 [2024-11-20 07:22:40.986770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:36.602 [2024-11-20 07:22:40.990793] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:36.602 [2024-11-20 07:22:40.991080] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.602 [2024-11-20 07:22:40.991100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:36.602 [2024-11-20 07:22:40.995287] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:36.602 [2024-11-20 07:22:40.995563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.602 [2024-11-20 07:22:40.995583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:36.602 [2024-11-20 07:22:41.000640] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:36.602 [2024-11-20 07:22:41.000889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.602 [2024-11-20 07:22:41.000908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:36.602 [2024-11-20 07:22:41.005364] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:36.602 [2024-11-20 07:22:41.005621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.602 [2024-11-20 07:22:41.005640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:36.602 [2024-11-20 07:22:41.010037] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 
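The "Data digest error" that `tcp.c:data_crc32_calc_done` keeps reporting means the CRC32C the receiver computed over the PDU's data did not match the DDGST field carried in the PDU (NVMe/TCP uses CRC32C, the Castagnoli polynomial, for its optional header and data digests). A pure-Python sketch of that checksum; real code would use a hardware-accelerated CRC32C library rather than this table-driven loop:

```python
# CRC32C (Castagnoli, reflected polynomial 0x82F63B78), the checksum
# NVMe/TCP uses for its optional HDGST/DDGST digests. A mismatch between
# this value computed over received PDU data and the PDU's DDGST field
# is what the log reports as "Data digest error".

def _make_table():
    """Build the 256-entry lookup table for the reflected CRC32C polynomial."""
    table = []
    for n in range(256):
        c = n
        for _ in range(8):
            c = (c >> 1) ^ 0x82F63B78 if c & 1 else c >> 1
        table.append(c)
    return table

_TABLE = _make_table()

def crc32c(data: bytes, crc: int = 0) -> int:
    """Compute CRC32C of `data`; pass a previous result as `crc` to continue."""
    crc ^= 0xFFFFFFFF
    for b in data:
        crc = _TABLE[(crc ^ b) & 0xFF] ^ (crc >> 8)
    return crc ^ 0xFFFFFFFF

# Standard check value for the ASCII string "123456789".
assert crc32c(b"123456789") == 0xE3069283
```

Because every WRITE here fails its digest check and completes with TRANSIENT TRANSPORT ERROR (00/22), this section is consistent with deliberate digest corruption by the test rather than link-level data loss.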
00:26:36.602 [2024-11-20 07:22:41.010280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.602 [2024-11-20 07:22:41.010300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:36.602 [2024-11-20 07:22:41.014591] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:36.602 [2024-11-20 07:22:41.014848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.602 [2024-11-20 07:22:41.014867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:36.602 [2024-11-20 07:22:41.019070] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:36.602 [2024-11-20 07:22:41.019349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.602 [2024-11-20 07:22:41.019368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:36.602 [2024-11-20 07:22:41.023518] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:36.602 [2024-11-20 07:22:41.023794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.602 [2024-11-20 07:22:41.023814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:36.602 [2024-11-20 07:22:41.027978] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:36.602 [2024-11-20 07:22:41.028245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.602 [2024-11-20 07:22:41.028264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:36.602 [2024-11-20 07:22:41.032396] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:36.602 [2024-11-20 07:22:41.032648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.603 [2024-11-20 07:22:41.032667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:36.603 [2024-11-20 07:22:41.036785] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:36.603 [2024-11-20 07:22:41.037042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.603 [2024-11-20 07:22:41.037061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:36.603 [2024-11-20 07:22:41.040942] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:36.603 [2024-11-20 07:22:41.041206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.603 [2024-11-20 07:22:41.041225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:36.603 [2024-11-20 07:22:41.045149] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:36.603 [2024-11-20 07:22:41.045398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.603 [2024-11-20 07:22:41.045417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:36.603 [2024-11-20 07:22:41.049337] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:36.603 [2024-11-20 07:22:41.049588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.603 [2024-11-20 07:22:41.049607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:36.603 [2024-11-20 07:22:41.053508] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:36.603 [2024-11-20 07:22:41.053758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.603 [2024-11-20 07:22:41.053777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:36.603 [2024-11-20 07:22:41.057688] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:36.603 [2024-11-20 07:22:41.057932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.603 [2024-11-20 07:22:41.057957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 
dnr:0 00:26:36.603 [2024-11-20 07:22:41.061918] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:36.603 [2024-11-20 07:22:41.062189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.603 [2024-11-20 07:22:41.062208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:36.603 [2024-11-20 07:22:41.066778] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:36.603 [2024-11-20 07:22:41.067058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.603 [2024-11-20 07:22:41.067077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:36.603 [2024-11-20 07:22:41.072238] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:36.603 [2024-11-20 07:22:41.072552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.603 [2024-11-20 07:22:41.072571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:36.603 [2024-11-20 07:22:41.078416] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:36.603 [2024-11-20 07:22:41.078761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.603 [2024-11-20 07:22:41.078784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:36.603 [2024-11-20 07:22:41.085322] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:36.603 [2024-11-20 07:22:41.085614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.603 [2024-11-20 07:22:41.085633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:36.603 [2024-11-20 07:22:41.091936] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:36.603 [2024-11-20 07:22:41.092554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.603 [2024-11-20 07:22:41.092573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:36.603 [2024-11-20 07:22:41.098971] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:36.603 [2024-11-20 07:22:41.099350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.603 [2024-11-20 07:22:41.099369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:36.603 [2024-11-20 07:22:41.105622] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:36.603 [2024-11-20 07:22:41.105953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.603 [2024-11-20 07:22:41.105972] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:36.603 [2024-11-20 07:22:41.112766] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:36.603 [2024-11-20 07:22:41.113059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.603 [2024-11-20 07:22:41.113079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:36.603 [2024-11-20 07:22:41.119210] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:36.603 [2024-11-20 07:22:41.119523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.603 [2024-11-20 07:22:41.119542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:36.603 [2024-11-20 07:22:41.126434] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:36.603 [2024-11-20 07:22:41.126757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.603 [2024-11-20 07:22:41.126776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:36.603 [2024-11-20 07:22:41.133177] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:36.603 [2024-11-20 07:22:41.133470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.603 [2024-11-20 07:22:41.133490] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:36.603 [2024-11-20 07:22:41.140188] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:36.603 [2024-11-20 07:22:41.140495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.603 [2024-11-20 07:22:41.140515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:36.603 [2024-11-20 07:22:41.146897] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:36.603 [2024-11-20 07:22:41.147226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.603 [2024-11-20 07:22:41.147248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:36.864 [2024-11-20 07:22:41.153212] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:36.864 [2024-11-20 07:22:41.153492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.864 [2024-11-20 07:22:41.153513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:36.864 [2024-11-20 07:22:41.159526] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:36.864 [2024-11-20 07:22:41.159858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:36.864 [2024-11-20 07:22:41.159879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:36.864 [2024-11-20 07:22:41.164981] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:36.864 [2024-11-20 07:22:41.165264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.864 [2024-11-20 07:22:41.165285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:36.864 [2024-11-20 07:22:41.169501] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:36.864 [2024-11-20 07:22:41.169750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.864 [2024-11-20 07:22:41.169770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:36.864 [2024-11-20 07:22:41.173852] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:36.864 [2024-11-20 07:22:41.174116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.864 [2024-11-20 07:22:41.174136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:36.864 [2024-11-20 07:22:41.178177] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:36.864 [2024-11-20 07:22:41.178428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21792 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.864 [2024-11-20 07:22:41.178447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:36.864 [2024-11-20 07:22:41.182459] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:36.864 [2024-11-20 07:22:41.182720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.864 [2024-11-20 07:22:41.182740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:36.864 [2024-11-20 07:22:41.186713] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:36.864 [2024-11-20 07:22:41.186977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.864 [2024-11-20 07:22:41.186997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:36.864 [2024-11-20 07:22:41.191007] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:36.864 [2024-11-20 07:22:41.191261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.864 [2024-11-20 07:22:41.191282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:36.864 [2024-11-20 07:22:41.195207] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:36.864 [2024-11-20 07:22:41.195466] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.864 [2024-11-20 07:22:41.195485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:36.864 [2024-11-20 07:22:41.199478] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:36.864 [2024-11-20 07:22:41.199758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.864 [2024-11-20 07:22:41.199777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:36.864 [2024-11-20 07:22:41.203786] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:36.864 [2024-11-20 07:22:41.204041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.864 [2024-11-20 07:22:41.204060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:36.864 [2024-11-20 07:22:41.208077] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:36.864 [2024-11-20 07:22:41.208342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.864 [2024-11-20 07:22:41.208362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:36.864 [2024-11-20 07:22:41.212312] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:36.864 [2024-11-20 07:22:41.212565] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.864 [2024-11-20 07:22:41.212585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:36.864 [2024-11-20 07:22:41.216717] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:36.864 [2024-11-20 07:22:41.216988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.864 [2024-11-20 07:22:41.217008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:36.864 [2024-11-20 07:22:41.221243] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:36.864 [2024-11-20 07:22:41.221516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.864 [2024-11-20 07:22:41.221540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:36.864 [2024-11-20 07:22:41.225506] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:36.864 [2024-11-20 07:22:41.225778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.864 [2024-11-20 07:22:41.225797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:36.865 [2024-11-20 07:22:41.229723] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 
00:26:36.865 [2024-11-20 07:22:41.230002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.865 [2024-11-20 07:22:41.230021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:36.865 [2024-11-20 07:22:41.233923] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:36.865 [2024-11-20 07:22:41.234181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.865 [2024-11-20 07:22:41.234201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:36.865 [2024-11-20 07:22:41.238100] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:36.865 [2024-11-20 07:22:41.238373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.865 [2024-11-20 07:22:41.238393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:36.865 [2024-11-20 07:22:41.242483] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:36.865 [2024-11-20 07:22:41.242747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.865 [2024-11-20 07:22:41.242767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:36.865 [2024-11-20 07:22:41.246704] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:36.865 [2024-11-20 07:22:41.246982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.865 [2024-11-20 07:22:41.247001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:36.865 [2024-11-20 07:22:41.250945] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:36.865 [2024-11-20 07:22:41.251228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.865 [2024-11-20 07:22:41.251247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:36.865 [2024-11-20 07:22:41.255146] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:36.865 [2024-11-20 07:22:41.255409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.865 [2024-11-20 07:22:41.255428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:36.865 [2024-11-20 07:22:41.259439] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:36.865 [2024-11-20 07:22:41.259685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.865 [2024-11-20 07:22:41.259704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:36.865 [2024-11-20 07:22:41.264042] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:36.865 [2024-11-20 07:22:41.264285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.865 [2024-11-20 07:22:41.264304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:36.865 [2024-11-20 07:22:41.269419] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:36.865 [2024-11-20 07:22:41.269653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.865 [2024-11-20 07:22:41.269672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:36.865 [2024-11-20 07:22:41.274620] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:36.865 [2024-11-20 07:22:41.274890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.865 [2024-11-20 07:22:41.274909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:36.865 [2024-11-20 07:22:41.279134] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:36.865 [2024-11-20 07:22:41.279414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.865 [2024-11-20 07:22:41.279433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 
dnr:0 00:26:36.865 [2024-11-20 07:22:41.283659] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:36.865 [2024-11-20 07:22:41.283915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.865 [2024-11-20 07:22:41.283934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:36.865 [2024-11-20 07:22:41.288165] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:36.865 [2024-11-20 07:22:41.288412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.865 [2024-11-20 07:22:41.288431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:36.865 [2024-11-20 07:22:41.292663] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:36.865 [2024-11-20 07:22:41.292912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.865 [2024-11-20 07:22:41.292932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:36.865 [2024-11-20 07:22:41.297107] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:36.865 [2024-11-20 07:22:41.297360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.865 [2024-11-20 07:22:41.297380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:36.865 [2024-11-20 07:22:41.301454] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:36.865 [2024-11-20 07:22:41.301724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.865 [2024-11-20 07:22:41.301744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:36.865 [2024-11-20 07:22:41.305886] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:36.865 [2024-11-20 07:22:41.306177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.865 [2024-11-20 07:22:41.306196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:36.865 [2024-11-20 07:22:41.310434] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:36.865 [2024-11-20 07:22:41.310676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.865 [2024-11-20 07:22:41.310695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:36.865 [2024-11-20 07:22:41.315515] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:36.865 [2024-11-20 07:22:41.315738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.865 [2024-11-20 07:22:41.315757] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:36.865 [2024-11-20 07:22:41.320448] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:36.866 [2024-11-20 07:22:41.320692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.866 [2024-11-20 07:22:41.320711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:36.866 [2024-11-20 07:22:41.325640] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:36.866 [2024-11-20 07:22:41.325908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.866 [2024-11-20 07:22:41.325927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:36.866 [2024-11-20 07:22:41.330692] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:36.866 [2024-11-20 07:22:41.330943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.866 [2024-11-20 07:22:41.330969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:36.866 [2024-11-20 07:22:41.335670] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:36.866 [2024-11-20 07:22:41.335921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.866 [2024-11-20 07:22:41.335940] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:36.866 [2024-11-20 07:22:41.340269] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:36.866 [2024-11-20 07:22:41.340519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.866 [2024-11-20 07:22:41.340543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:36.866 [2024-11-20 07:22:41.344885] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:36.866 [2024-11-20 07:22:41.345163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.866 [2024-11-20 07:22:41.345183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:36.866 [2024-11-20 07:22:41.349747] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:36.866 [2024-11-20 07:22:41.350038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.866 [2024-11-20 07:22:41.350058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:36.866 [2024-11-20 07:22:41.354484] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:36.866 [2024-11-20 07:22:41.354721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:36.866 [2024-11-20 07:22:41.354740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:36.866 [2024-11-20 07:22:41.359138] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:36.866 [2024-11-20 07:22:41.359388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.866 [2024-11-20 07:22:41.359407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:36.866 [2024-11-20 07:22:41.363519] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:36.866 [2024-11-20 07:22:41.363770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.866 [2024-11-20 07:22:41.363790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:36.866 [2024-11-20 07:22:41.368534] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:36.866 [2024-11-20 07:22:41.368786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.866 [2024-11-20 07:22:41.368806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:36.866 [2024-11-20 07:22:41.373514] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:36.866 [2024-11-20 07:22:41.373773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19104 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.866 [2024-11-20 07:22:41.373793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:36.866 [2024-11-20 07:22:41.378772] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:36.866 [2024-11-20 07:22:41.379056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.866 [2024-11-20 07:22:41.379076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:36.866 [2024-11-20 07:22:41.385006] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:36.866 [2024-11-20 07:22:41.385333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.866 [2024-11-20 07:22:41.385354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:36.866 [2024-11-20 07:22:41.391851] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:36.866 [2024-11-20 07:22:41.392094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.866 [2024-11-20 07:22:41.392115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:36.866 [2024-11-20 07:22:41.397412] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:36.866 [2024-11-20 07:22:41.397679] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.866 [2024-11-20 07:22:41.397699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:36.866 [2024-11-20 07:22:41.402440] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:36.866 [2024-11-20 07:22:41.402694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.866 [2024-11-20 07:22:41.402715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:36.866 [2024-11-20 07:22:41.406702] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:36.866 [2024-11-20 07:22:41.406966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.866 [2024-11-20 07:22:41.406986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:36.866 [2024-11-20 07:22:41.410984] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:36.866 [2024-11-20 07:22:41.411236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.866 [2024-11-20 07:22:41.411258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:37.127 [2024-11-20 07:22:41.415265] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:37.127 [2024-11-20 07:22:41.415520] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.127 [2024-11-20 07:22:41.415542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:37.127 [2024-11-20 07:22:41.419522] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:37.127 [2024-11-20 07:22:41.419783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.127 [2024-11-20 07:22:41.419803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:37.127 [2024-11-20 07:22:41.423722] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:37.127 [2024-11-20 07:22:41.423999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.127 [2024-11-20 07:22:41.424019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:37.127 [2024-11-20 07:22:41.427881] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:37.127 [2024-11-20 07:22:41.428146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.127 [2024-11-20 07:22:41.428166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:37.127 [2024-11-20 07:22:41.432032] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 
00:26:37.127 [2024-11-20 07:22:41.432285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.127 [2024-11-20 07:22:41.432305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:37.127 [2024-11-20 07:22:41.436214] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:37.127 [2024-11-20 07:22:41.436476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.127 [2024-11-20 07:22:41.436496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:37.127 [2024-11-20 07:22:41.440374] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:37.127 [2024-11-20 07:22:41.440637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.127 [2024-11-20 07:22:41.440658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:37.127 6435.00 IOPS, 804.38 MiB/s [2024-11-20T06:22:41.683Z] [2024-11-20 07:22:41.445562] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:37.127 [2024-11-20 07:22:41.445846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.127 [2024-11-20 07:22:41.445865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:37.127 [2024-11-20 07:22:41.450026] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:37.127 [2024-11-20 07:22:41.450283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.127 [2024-11-20 07:22:41.450303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:37.127 [2024-11-20 07:22:41.454194] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:37.127 [2024-11-20 07:22:41.454482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.127 [2024-11-20 07:22:41.454502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:37.127 [2024-11-20 07:22:41.458211] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:37.127 [2024-11-20 07:22:41.458434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.127 [2024-11-20 07:22:41.458454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:37.127 [2024-11-20 07:22:41.462172] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:37.127 [2024-11-20 07:22:41.462390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.127 [2024-11-20 07:22:41.462414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 
00:26:37.127 [2024-11-20 07:22:41.466147] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:37.127 [2024-11-20 07:22:41.466373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.127 [2024-11-20 07:22:41.466392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:37.127 [2024-11-20 07:22:41.469981] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:37.127 [2024-11-20 07:22:41.470186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.127 [2024-11-20 07:22:41.470205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:37.127 [2024-11-20 07:22:41.473775] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:37.127 [2024-11-20 07:22:41.473983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.127 [2024-11-20 07:22:41.474001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:37.127 [2024-11-20 07:22:41.477541] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:37.127 [2024-11-20 07:22:41.477750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.127 [2024-11-20 07:22:41.477770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:37.127 [2024-11-20 07:22:41.481326] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:37.127 [2024-11-20 07:22:41.481534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.127 [2024-11-20 07:22:41.481553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:37.127 [2024-11-20 07:22:41.485348] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:37.127 [2024-11-20 07:22:41.485558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.127 [2024-11-20 07:22:41.485578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:37.127 [2024-11-20 07:22:41.489211] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:37.127 [2024-11-20 07:22:41.489415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.127 [2024-11-20 07:22:41.489435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:37.128 [2024-11-20 07:22:41.492983] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:37.128 [2024-11-20 07:22:41.493200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.128 [2024-11-20 07:22:41.493221] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:37.128 [2024-11-20 07:22:41.496990] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:37.128 [2024-11-20 07:22:41.497192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.128 [2024-11-20 07:22:41.497211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:37.128 [2024-11-20 07:22:41.500899] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:37.128 [2024-11-20 07:22:41.501116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.128 [2024-11-20 07:22:41.501136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:37.128 [2024-11-20 07:22:41.504719] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:37.128 [2024-11-20 07:22:41.504929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.128 [2024-11-20 07:22:41.504955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:37.128 [2024-11-20 07:22:41.508647] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:37.128 [2024-11-20 07:22:41.508850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.128 [2024-11-20 07:22:41.508871] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:37.128 [2024-11-20 07:22:41.512727] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:37.128 [2024-11-20 07:22:41.512955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.128 [2024-11-20 07:22:41.512975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:37.128 [2024-11-20 07:22:41.517321] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:37.128 [2024-11-20 07:22:41.517480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.128 [2024-11-20 07:22:41.517499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:37.128 [2024-11-20 07:22:41.521789] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:37.128 [2024-11-20 07:22:41.522002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.128 [2024-11-20 07:22:41.522020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:37.128 [2024-11-20 07:22:41.526721] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:37.128 [2024-11-20 07:22:41.527119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:26:37.128 [2024-11-20 07:22:41.527139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:37.128 [2024-11-20 07:22:41.531468] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:37.128 [2024-11-20 07:22:41.531679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.128 [2024-11-20 07:22:41.531732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:37.128 [2024-11-20 07:22:41.535905] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:37.128 [2024-11-20 07:22:41.536150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.128 [2024-11-20 07:22:41.536170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:37.128 [2024-11-20 07:22:41.539929] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:37.128 [2024-11-20 07:22:41.540160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.128 [2024-11-20 07:22:41.540179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:37.128 [2024-11-20 07:22:41.543806] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:37.128 [2024-11-20 07:22:41.544019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11648 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.128 [2024-11-20 07:22:41.544038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:37.128 [2024-11-20 07:22:41.547784] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:37.128 [2024-11-20 07:22:41.548001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.128 [2024-11-20 07:22:41.548020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:37.128 [2024-11-20 07:22:41.551811] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:37.128 [2024-11-20 07:22:41.552029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.128 [2024-11-20 07:22:41.552048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:37.128 [2024-11-20 07:22:41.555793] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:37.128 [2024-11-20 07:22:41.556005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.128 [2024-11-20 07:22:41.556023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:37.128 [2024-11-20 07:22:41.559812] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:37.128 [2024-11-20 07:22:41.560023] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.128 [2024-11-20 07:22:41.560041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:37.128 [2024-11-20 07:22:41.563809] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:37.128 [2024-11-20 07:22:41.564043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.128 [2024-11-20 07:22:41.564062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:37.128 [2024-11-20 07:22:41.567749] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:37.128 [2024-11-20 07:22:41.567986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.128 [2024-11-20 07:22:41.568008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:37.128 [2024-11-20 07:22:41.571734] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:37.128 [2024-11-20 07:22:41.571943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.128 [2024-11-20 07:22:41.571971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:37.128 [2024-11-20 07:22:41.575649] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:37.128 [2024-11-20 07:22:41.575854] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.128 [2024-11-20 07:22:41.575873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:37.128 [2024-11-20 07:22:41.579678] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:37.128 [2024-11-20 07:22:41.579885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.128 [2024-11-20 07:22:41.579904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:37.128 [2024-11-20 07:22:41.583683] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:37.129 [2024-11-20 07:22:41.583844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.129 [2024-11-20 07:22:41.583862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:37.129 [2024-11-20 07:22:41.587579] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:37.129 [2024-11-20 07:22:41.587773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.129 [2024-11-20 07:22:41.587791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:37.129 [2024-11-20 07:22:41.591532] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with 
pdu=0x200016eff3c8 00:26:37.129 [2024-11-20 07:22:41.591702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.129 [2024-11-20 07:22:41.591721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:37.129 [2024-11-20 07:22:41.595413] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:37.129 [2024-11-20 07:22:41.595610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.129 [2024-11-20 07:22:41.595635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:37.129 [2024-11-20 07:22:41.599304] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:37.129 [2024-11-20 07:22:41.599503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.129 [2024-11-20 07:22:41.599523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:37.129 [2024-11-20 07:22:41.603202] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:37.129 [2024-11-20 07:22:41.603391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.129 [2024-11-20 07:22:41.603409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:37.129 [2024-11-20 07:22:41.607138] tcp.c:2233:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:37.129 [2024-11-20 07:22:41.607323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.129 [2024-11-20 07:22:41.607342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:37.129 [2024-11-20 07:22:41.610890] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:37.129 [2024-11-20 07:22:41.611084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.129 [2024-11-20 07:22:41.611102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:37.129 [2024-11-20 07:22:41.614799] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:37.129 [2024-11-20 07:22:41.615012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.129 [2024-11-20 07:22:41.615031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:37.129 [2024-11-20 07:22:41.619247] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:37.129 [2024-11-20 07:22:41.619404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.129 [2024-11-20 07:22:41.619422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:37.129 [2024-11-20 07:22:41.623702] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:37.129 [2024-11-20 07:22:41.623873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.129 [2024-11-20 07:22:41.623891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:37.129 [2024-11-20 07:22:41.627649] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:37.129 [2024-11-20 07:22:41.627824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.129 [2024-11-20 07:22:41.627842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:37.129 [2024-11-20 07:22:41.631547] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:37.129 [2024-11-20 07:22:41.631745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.129 [2024-11-20 07:22:41.631764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:37.129 [2024-11-20 07:22:41.635347] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:37.129 [2024-11-20 07:22:41.635554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.129 [2024-11-20 07:22:41.635574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 
00:26:37.129 [2024-11-20 07:22:41.639303] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:37.129 [2024-11-20 07:22:41.639492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.129 [2024-11-20 07:22:41.639510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:37.129 [2024-11-20 07:22:41.644039] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:37.129 [2024-11-20 07:22:41.644199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.129 [2024-11-20 07:22:41.644217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:37.129 [2024-11-20 07:22:41.648504] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:37.129 [2024-11-20 07:22:41.648687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.129 [2024-11-20 07:22:41.648705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:37.129 [2024-11-20 07:22:41.652517] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:37.129 [2024-11-20 07:22:41.652699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.129 [2024-11-20 07:22:41.652717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:37.129 [2024-11-20 07:22:41.656480] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:37.129 [2024-11-20 07:22:41.656659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.129 [2024-11-20 07:22:41.656677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:37.129 [2024-11-20 07:22:41.660467] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:37.129 [2024-11-20 07:22:41.660660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.129 [2024-11-20 07:22:41.660678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:37.129 [2024-11-20 07:22:41.664472] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:37.130 [2024-11-20 07:22:41.664657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.130 [2024-11-20 07:22:41.664675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:37.130 [2024-11-20 07:22:41.668248] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:37.130 [2024-11-20 07:22:41.668433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.130 [2024-11-20 07:22:41.668450] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:37.130 [2024-11-20 07:22:41.672298] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:37.130 [2024-11-20 07:22:41.672494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.130 [2024-11-20 07:22:41.672524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:37.390 [2024-11-20 07:22:41.676765] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:37.390 [2024-11-20 07:22:41.676942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.390 [2024-11-20 07:22:41.676970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:37.390 [2024-11-20 07:22:41.681399] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:37.390 [2024-11-20 07:22:41.681586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.390 [2024-11-20 07:22:41.681608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:37.390 [2024-11-20 07:22:41.685532] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:37.390 [2024-11-20 07:22:41.685735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.390 [2024-11-20 07:22:41.685756] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:37.390 [2024-11-20 07:22:41.689472] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:37.390 [2024-11-20 07:22:41.689682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.390 [2024-11-20 07:22:41.689701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:37.390 [2024-11-20 07:22:41.693545] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:37.390 [2024-11-20 07:22:41.693750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.390 [2024-11-20 07:22:41.693770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:37.390 [2024-11-20 07:22:41.697496] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:37.390 [2024-11-20 07:22:41.697696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.390 [2024-11-20 07:22:41.697721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:37.390 [2024-11-20 07:22:41.701727] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:37.390 [2024-11-20 07:22:41.701941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:37.390 [2024-11-20 07:22:41.701965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:37.390 [2024-11-20 07:22:41.705808] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:37.390 [2024-11-20 07:22:41.706005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.390 [2024-11-20 07:22:41.706024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:37.390 [2024-11-20 07:22:41.709717] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:37.390 [2024-11-20 07:22:41.709902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.390 [2024-11-20 07:22:41.709921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:37.390 [2024-11-20 07:22:41.713805] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:37.390 [2024-11-20 07:22:41.714013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.390 [2024-11-20 07:22:41.714032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:37.390 [2024-11-20 07:22:41.717747] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:37.390 [2024-11-20 07:22:41.717929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16576 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.390 [2024-11-20 07:22:41.717952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:37.390 [2024-11-20 07:22:41.721756] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:37.390 [2024-11-20 07:22:41.721944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.390 [2024-11-20 07:22:41.721971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:37.390 [2024-11-20 07:22:41.725822] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:37.390 [2024-11-20 07:22:41.726017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.391 [2024-11-20 07:22:41.726035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:37.391 [2024-11-20 07:22:41.729860] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:37.391 [2024-11-20 07:22:41.730055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.391 [2024-11-20 07:22:41.730074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:37.391 [2024-11-20 07:22:41.733830] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:37.391 [2024-11-20 07:22:41.734022] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.391 [2024-11-20 07:22:41.734041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:37.391 [2024-11-20 07:22:41.738071] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:37.391 [2024-11-20 07:22:41.738268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.391 [2024-11-20 07:22:41.738293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:37.391 [2024-11-20 07:22:41.742006] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:37.391 [2024-11-20 07:22:41.742227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.391 [2024-11-20 07:22:41.742246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:37.391 [2024-11-20 07:22:41.746000] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:37.391 [2024-11-20 07:22:41.746179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.391 [2024-11-20 07:22:41.746198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:37.391 [2024-11-20 07:22:41.749970] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:37.391 [2024-11-20 07:22:41.750168] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.391 [2024-11-20 07:22:41.750186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:37.391 [2024-11-20 07:22:41.754050] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:37.391 [2024-11-20 07:22:41.754273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.391 [2024-11-20 07:22:41.754291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:37.391 [2024-11-20 07:22:41.758001] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:37.391 [2024-11-20 07:22:41.758207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.391 [2024-11-20 07:22:41.758227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:37.391 [2024-11-20 07:22:41.762002] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:37.391 [2024-11-20 07:22:41.762191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.391 [2024-11-20 07:22:41.762208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:37.391 [2024-11-20 07:22:41.765935] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with 
pdu=0x200016eff3c8 00:26:37.391 [2024-11-20 07:22:41.766147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.391 [2024-11-20 07:22:41.766165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:37.391 [2024-11-20 07:22:41.769888] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:37.391 [2024-11-20 07:22:41.770120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.391 [2024-11-20 07:22:41.770140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:37.391 [2024-11-20 07:22:41.773832] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:37.391 [2024-11-20 07:22:41.774037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.391 [2024-11-20 07:22:41.774055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:37.391 [2024-11-20 07:22:41.777772] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:37.391 [2024-11-20 07:22:41.777942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.391 [2024-11-20 07:22:41.777969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:37.391 [2024-11-20 07:22:41.781725] tcp.c:2233:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:37.391 [2024-11-20 07:22:41.781898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.391 [2024-11-20 07:22:41.781916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:37.391 [2024-11-20 07:22:41.785697] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:37.391 [2024-11-20 07:22:41.785877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.391 [2024-11-20 07:22:41.785895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:37.391 [2024-11-20 07:22:41.789684] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:37.391 [2024-11-20 07:22:41.789869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.391 [2024-11-20 07:22:41.789886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:37.391 [2024-11-20 07:22:41.793669] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:37.391 [2024-11-20 07:22:41.793851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.391 [2024-11-20 07:22:41.793869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:37.391 [2024-11-20 07:22:41.797763] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:37.391 [2024-11-20 07:22:41.797945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.391 [2024-11-20 07:22:41.797968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:37.391 [2024-11-20 07:22:41.803091] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:37.391 [2024-11-20 07:22:41.803383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.391 [2024-11-20 07:22:41.803402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:37.391 [2024-11-20 07:22:41.808728] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:37.391 [2024-11-20 07:22:41.808920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.391 [2024-11-20 07:22:41.808938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:37.391 [2024-11-20 07:22:41.814497] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:37.391 [2024-11-20 07:22:41.814686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.391 [2024-11-20 07:22:41.814705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 
00:26:37.391 [2024-11-20 07:22:41.821103] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:37.391 [2024-11-20 07:22:41.821309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.392 [2024-11-20 07:22:41.821338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:37.392 [2024-11-20 07:22:41.826116] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:37.392 [2024-11-20 07:22:41.826334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.392 [2024-11-20 07:22:41.826354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:37.392 [2024-11-20 07:22:41.830110] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:37.392 [2024-11-20 07:22:41.830308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.392 [2024-11-20 07:22:41.830325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:37.392 [2024-11-20 07:22:41.834082] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:37.392 [2024-11-20 07:22:41.834279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.392 [2024-11-20 07:22:41.834305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:37.392 [2024-11-20 07:22:41.837879] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:37.392 [2024-11-20 07:22:41.838088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.392 [2024-11-20 07:22:41.838107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:37.392 [2024-11-20 07:22:41.841858] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:37.392 [2024-11-20 07:22:41.842066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.392 [2024-11-20 07:22:41.842085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:37.392 [2024-11-20 07:22:41.846254] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:37.392 [2024-11-20 07:22:41.846463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.392 [2024-11-20 07:22:41.846481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:37.392 [2024-11-20 07:22:41.850752] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:37.392 [2024-11-20 07:22:41.850946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.392 [2024-11-20 07:22:41.850969] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:37.392 [2024-11-20 07:22:41.855001] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:37.392 [2024-11-20 07:22:41.855187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.392 [2024-11-20 07:22:41.855216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:37.392 [2024-11-20 07:22:41.859018] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:37.392 [2024-11-20 07:22:41.859224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.392 [2024-11-20 07:22:41.859244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:37.392 [2024-11-20 07:22:41.863253] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:37.392 [2024-11-20 07:22:41.863442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.392 [2024-11-20 07:22:41.863460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:37.392 [2024-11-20 07:22:41.867307] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:37.392 [2024-11-20 07:22:41.867478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.392 [2024-11-20 07:22:41.867495] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:37.392 [2024-11-20 07:22:41.871231] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:37.392 [2024-11-20 07:22:41.871432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.392 [2024-11-20 07:22:41.871451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:37.392 [2024-11-20 07:22:41.875056] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:37.392 [2024-11-20 07:22:41.875254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.392 [2024-11-20 07:22:41.875278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:37.392 [2024-11-20 07:22:41.878960] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:37.392 [2024-11-20 07:22:41.879162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.392 [2024-11-20 07:22:41.879186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:37.392 [2024-11-20 07:22:41.882739] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:37.392 [2024-11-20 07:22:41.882943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:37.392 [2024-11-20 07:22:41.882970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:37.392 [2024-11-20 07:22:41.886632] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:37.392 [2024-11-20 07:22:41.886841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.392 [2024-11-20 07:22:41.886860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:37.392 [2024-11-20 07:22:41.890734] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:37.392 [2024-11-20 07:22:41.890934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.392 [2024-11-20 07:22:41.890959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:37.392 [2024-11-20 07:22:41.895059] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:37.392 [2024-11-20 07:22:41.895245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.392 [2024-11-20 07:22:41.895263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:37.392 [2024-11-20 07:22:41.899747] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:37.392 [2024-11-20 07:22:41.899926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8800 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.392 [2024-11-20 07:22:41.899944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:37.392 [2024-11-20 07:22:41.903815] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:37.392 [2024-11-20 07:22:41.904003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.392 [2024-11-20 07:22:41.904021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:37.392 [2024-11-20 07:22:41.908156] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:37.392 [2024-11-20 07:22:41.908326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.392 [2024-11-20 07:22:41.908343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:37.392 [2024-11-20 07:22:41.912415] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:37.392 [2024-11-20 07:22:41.912606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.392 [2024-11-20 07:22:41.912624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:37.393 [2024-11-20 07:22:41.916699] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:37.393 [2024-11-20 07:22:41.916913] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.393 [2024-11-20 07:22:41.916933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:37.393 [2024-11-20 07:22:41.921141] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:37.393 [2024-11-20 07:22:41.921332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.393 [2024-11-20 07:22:41.921350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:37.393 [2024-11-20 07:22:41.925103] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:37.393 [2024-11-20 07:22:41.925291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.393 [2024-11-20 07:22:41.925309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:37.393 [2024-11-20 07:22:41.929093] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:37.393 [2024-11-20 07:22:41.929269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.393 [2024-11-20 07:22:41.929290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:37.393 [2024-11-20 07:22:41.933118] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:37.393 [2024-11-20 07:22:41.933317] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.393 [2024-11-20 07:22:41.933335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:37.393 [2024-11-20 07:22:41.937195] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:37.393 [2024-11-20 07:22:41.937384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.393 [2024-11-20 07:22:41.937404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:37.653 [2024-11-20 07:22:41.941306] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:37.653 [2024-11-20 07:22:41.941495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.653 [2024-11-20 07:22:41.941515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:37.653 [2024-11-20 07:22:41.945291] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:37.653 [2024-11-20 07:22:41.945484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.653 [2024-11-20 07:22:41.945503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:37.653 [2024-11-20 07:22:41.949141] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 
00:26:37.653 [2024-11-20 07:22:41.949324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.653 [2024-11-20 07:22:41.949343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:37.653 [2024-11-20 07:22:41.952912] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8
00:26:37.653 [2024-11-20 07:22:41.953116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.653 [2024-11-20 07:22:41.953134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:37.653 [2024-11-20 07:22:41.956758] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8
00:26:37.653 [2024-11-20 07:22:41.956956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.653 [2024-11-20 07:22:41.956974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:37.653 [2024-11-20 07:22:41.960586] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8
00:26:37.653 [2024-11-20 07:22:41.960774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.653 [2024-11-20 07:22:41.960792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:37.653 [2024-11-20 07:22:41.964652] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8
00:26:37.653 [2024-11-20 07:22:41.964838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.653 [2024-11-20 07:22:41.964856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:37.653 [2024-11-20 07:22:41.968674] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8
00:26:37.653 [2024-11-20 07:22:41.968885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.653 [2024-11-20 07:22:41.968911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:37.653 [2024-11-20 07:22:41.973120] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8
00:26:37.654 [2024-11-20 07:22:41.973308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.654 [2024-11-20 07:22:41.973326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:37.654 [2024-11-20 07:22:41.977708] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8
00:26:37.654 [2024-11-20 07:22:41.977849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.654 [2024-11-20 07:22:41.977868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:37.654 [2024-11-20 07:22:41.982599] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8
00:26:37.654 [2024-11-20 07:22:41.982749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.654 [2024-11-20 07:22:41.982767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:37.654 [2024-11-20 07:22:41.987246] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8
00:26:37.654 [2024-11-20 07:22:41.987380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.654 [2024-11-20 07:22:41.987397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:37.654 [2024-11-20 07:22:41.991373] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8
00:26:37.654 [2024-11-20 07:22:41.991497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.654 [2024-11-20 07:22:41.991515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:37.654 [2024-11-20 07:22:41.995226] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8
00:26:37.654 [2024-11-20 07:22:41.995390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.654 [2024-11-20 07:22:41.995407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:37.654 [2024-11-20 07:22:41.999117] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8
00:26:37.654 [2024-11-20 07:22:41.999254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.654 [2024-11-20 07:22:41.999273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:37.654 [2024-11-20 07:22:42.003159] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8
00:26:37.654 [2024-11-20 07:22:42.003330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.654 [2024-11-20 07:22:42.003348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:37.654 [2024-11-20 07:22:42.007127] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8
00:26:37.654 [2024-11-20 07:22:42.007287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.654 [2024-11-20 07:22:42.007305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:37.654 [2024-11-20 07:22:42.011067] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8
00:26:37.654 [2024-11-20 07:22:42.011235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.654 [2024-11-20 07:22:42.011253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:37.654 [2024-11-20 07:22:42.014931] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8
00:26:37.654 [2024-11-20 07:22:42.015119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.654 [2024-11-20 07:22:42.015138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:37.654 [2024-11-20 07:22:42.018823] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8
00:26:37.654 [2024-11-20 07:22:42.019003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.654 [2024-11-20 07:22:42.019021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:37.654 [2024-11-20 07:22:42.022742] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8
00:26:37.654 [2024-11-20 07:22:42.022869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.654 [2024-11-20 07:22:42.022886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:37.654 [2024-11-20 07:22:42.026836] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8
00:26:37.654 [2024-11-20 07:22:42.027010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.654 [2024-11-20 07:22:42.027028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:37.654 [2024-11-20 07:22:42.031459] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8
00:26:37.654 [2024-11-20 07:22:42.031611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.654 [2024-11-20 07:22:42.031630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:37.654 [2024-11-20 07:22:42.035836] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8
00:26:37.654 [2024-11-20 07:22:42.035998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.654 [2024-11-20 07:22:42.036021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:37.654 [2024-11-20 07:22:42.039740] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8
00:26:37.654 [2024-11-20 07:22:42.039913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.654 [2024-11-20 07:22:42.039931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:37.654 [2024-11-20 07:22:42.044138] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8
00:26:37.654 [2024-11-20 07:22:42.044299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.654 [2024-11-20 07:22:42.044317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:37.654 [2024-11-20 07:22:42.049093] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8
00:26:37.654 [2024-11-20 07:22:42.049250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.654 [2024-11-20 07:22:42.049268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:37.654 [2024-11-20 07:22:42.053392] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8
00:26:37.654 [2024-11-20 07:22:42.053504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.654 [2024-11-20 07:22:42.053521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:37.654 [2024-11-20 07:22:42.057268] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8
00:26:37.654 [2024-11-20 07:22:42.057362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.654 [2024-11-20 07:22:42.057379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:37.654 [2024-11-20 07:22:42.061083] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8
00:26:37.654 [2024-11-20 07:22:42.061208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.654 [2024-11-20 07:22:42.061225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:37.654 [2024-11-20 07:22:42.065074] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8
00:26:37.655 [2024-11-20 07:22:42.065188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.655 [2024-11-20 07:22:42.065205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:37.655 [2024-11-20 07:22:42.068997] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8
00:26:37.655 [2024-11-20 07:22:42.069109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.655 [2024-11-20 07:22:42.069127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:37.655 [2024-11-20 07:22:42.072927] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8
00:26:37.655 [2024-11-20 07:22:42.073058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.655 [2024-11-20 07:22:42.073076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:37.655 [2024-11-20 07:22:42.076878] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8
00:26:37.655 [2024-11-20 07:22:42.076994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.655 [2024-11-20 07:22:42.077012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:37.655 [2024-11-20 07:22:42.081184] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8
00:26:37.655 [2024-11-20 07:22:42.081337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.655 [2024-11-20 07:22:42.081354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:37.655 [2024-11-20 07:22:42.085650] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8
00:26:37.655 [2024-11-20 07:22:42.085739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.655 [2024-11-20 07:22:42.085757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:37.655 [2024-11-20 07:22:42.089760] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8
00:26:37.655 [2024-11-20 07:22:42.089869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.655 [2024-11-20 07:22:42.089886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:37.655 [2024-11-20 07:22:42.093658] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8
00:26:37.655 [2024-11-20 07:22:42.093761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.655 [2024-11-20 07:22:42.093780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:37.655 [2024-11-20 07:22:42.097586] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8
00:26:37.655 [2024-11-20 07:22:42.097711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.655 [2024-11-20 07:22:42.097729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:37.655 [2024-11-20 07:22:42.101581] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8
00:26:37.655 [2024-11-20 07:22:42.101710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.655 [2024-11-20 07:22:42.101728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:37.655 [2024-11-20 07:22:42.105759] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8
00:26:37.655 [2024-11-20 07:22:42.105914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.655 [2024-11-20 07:22:42.105932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:37.655 [2024-11-20 07:22:42.110297] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8
00:26:37.655 [2024-11-20 07:22:42.110428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.655 [2024-11-20 07:22:42.110446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:37.655 [2024-11-20 07:22:42.114339] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8
00:26:37.655 [2024-11-20 07:22:42.114475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.655 [2024-11-20 07:22:42.114493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:37.655 [2024-11-20 07:22:42.118078] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8
00:26:37.655 [2024-11-20 07:22:42.118244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.655 [2024-11-20 07:22:42.118262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:37.655 [2024-11-20 07:22:42.122432] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8
00:26:37.655 [2024-11-20 07:22:42.122615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.655 [2024-11-20 07:22:42.122634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:37.655 [2024-11-20 07:22:42.127537] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8
00:26:37.655 [2024-11-20 07:22:42.127670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.655 [2024-11-20 07:22:42.127688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:37.655 [2024-11-20 07:22:42.132148] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8
00:26:37.655 [2024-11-20 07:22:42.132275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.655 [2024-11-20 07:22:42.132293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:37.655 [2024-11-20 07:22:42.136160] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8
00:26:37.655 [2024-11-20 07:22:42.136302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.655 [2024-11-20 07:22:42.136320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:37.655 [2024-11-20 07:22:42.140055] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8
00:26:37.655 [2024-11-20 07:22:42.140210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.655 [2024-11-20 07:22:42.140228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:37.655 [2024-11-20 07:22:42.143961] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8
00:26:37.655 [2024-11-20 07:22:42.144111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.655 [2024-11-20 07:22:42.144132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:37.655 [2024-11-20 07:22:42.148038] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8
00:26:37.655 [2024-11-20 07:22:42.148181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.655 [2024-11-20 07:22:42.148198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:37.655 [2024-11-20 07:22:42.151960] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8
00:26:37.655 [2024-11-20 07:22:42.152123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.655 [2024-11-20 07:22:42.152142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:37.655 [2024-11-20 07:22:42.155938] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8
00:26:37.656 [2024-11-20 07:22:42.156111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.656 [2024-11-20 07:22:42.156130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:37.656 [2024-11-20 07:22:42.159908] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8
00:26:37.656 [2024-11-20 07:22:42.160085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.656 [2024-11-20 07:22:42.160103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:37.656 [2024-11-20 07:22:42.163889] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8
00:26:37.656 [2024-11-20 07:22:42.164065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.656 [2024-11-20 07:22:42.164084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:37.656 [2024-11-20 07:22:42.167837] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8
00:26:37.656 [2024-11-20 07:22:42.167993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.656 [2024-11-20 07:22:42.168012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:37.656 [2024-11-20 07:22:42.171713] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8
00:26:37.656 [2024-11-20 07:22:42.171829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.656 [2024-11-20 07:22:42.171846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:37.656 [2024-11-20 07:22:42.175703] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8
00:26:37.656 [2024-11-20 07:22:42.175858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.656 [2024-11-20 07:22:42.175876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:37.656 [2024-11-20 07:22:42.180737] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8
00:26:37.656 [2024-11-20 07:22:42.180960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.656 [2024-11-20 07:22:42.180978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:37.656 [2024-11-20 07:22:42.185606] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8
00:26:37.656 [2024-11-20 07:22:42.185753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.656 [2024-11-20 07:22:42.185770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:37.656 [2024-11-20 07:22:42.189826] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8
00:26:37.656 [2024-11-20 07:22:42.189969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.656 [2024-11-20 07:22:42.189987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:37.656 [2024-11-20 07:22:42.193986] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8
00:26:37.656 [2024-11-20 07:22:42.194117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.656 [2024-11-20 07:22:42.194136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:37.656 [2024-11-20 07:22:42.198324] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8
00:26:37.656 [2024-11-20 07:22:42.198437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.656 [2024-11-20 07:22:42.198457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:37.918 [2024-11-20 07:22:42.202484] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8
00:26:37.918 [2024-11-20 07:22:42.202620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.918 [2024-11-20 07:22:42.202641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:37.918 [2024-11-20 07:22:42.207275] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8
00:26:37.918 [2024-11-20 07:22:42.207449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.918 [2024-11-20 07:22:42.207469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:37.918 [2024-11-20 07:22:42.212310] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8
00:26:37.918 [2024-11-20 07:22:42.212458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.918 [2024-11-20 07:22:42.212477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:37.918 [2024-11-20 07:22:42.217031] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8
00:26:37.918 [2024-11-20 07:22:42.217187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.918 [2024-11-20 07:22:42.217206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:37.918 [2024-11-20 07:22:42.222006] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8
00:26:37.918 [2024-11-20 07:22:42.222224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.918 [2024-11-20 07:22:42.222244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:37.918 [2024-11-20 07:22:42.227079] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8
00:26:37.918 [2024-11-20 07:22:42.227304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.918 [2024-11-20 07:22:42.227324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:37.918 [2024-11-20 07:22:42.232161] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8
00:26:37.918 [2024-11-20 07:22:42.232389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.918 [2024-11-20 07:22:42.232409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:37.918 [2024-11-20 07:22:42.237387] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8
00:26:37.918 [2024-11-20 07:22:42.237598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.918 [2024-11-20 07:22:42.237617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:37.918 [2024-11-20 07:22:42.242434] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8
00:26:37.918 [2024-11-20 07:22:42.242585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.918 [2024-11-20 07:22:42.242603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:37.918 [2024-11-20 07:22:42.247892] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8
00:26:37.918 [2024-11-20 07:22:42.248097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.918 [2024-11-20 07:22:42.248115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:37.918 [2024-11-20 07:22:42.253054] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8
00:26:37.918 [2024-11-20 07:22:42.253247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.918 [2024-11-20 07:22:42.253265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:37.918 [2024-11-20 07:22:42.258165] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8
00:26:37.918 [2024-11-20 07:22:42.258316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.918 [2024-11-20 07:22:42.258334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:37.918 [2024-11-20 07:22:42.263329] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8
00:26:37.918 [2024-11-20 07:22:42.263540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.918 [2024-11-20 07:22:42.263563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:37.918 [2024-11-20 07:22:42.268480] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8
00:26:37.918 [2024-11-20 07:22:42.268725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.918 [2024-11-20 07:22:42.268745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:37.918 [2024-11-20 07:22:42.274132] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8
00:26:37.918 [2024-11-20 07:22:42.274376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.918 [2024-11-20 07:22:42.274395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:37.918 [2024-11-20 07:22:42.279655] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8
00:26:37.918 [2024-11-20 07:22:42.279801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.918 [2024-11-20 07:22:42.279820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:37.919 [2024-11-20 07:22:42.284219] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8
00:26:37.919 [2024-11-20 07:22:42.284377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.919 [2024-11-20 07:22:42.284394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:37.919 [2024-11-20 07:22:42.288148] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8
00:26:37.919 [2024-11-20 07:22:42.288351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.919 [2024-11-20 07:22:42.288369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:37.919 [2024-11-20 07:22:42.292349] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8
00:26:37.919 [2024-11-20 07:22:42.292547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.919 [2024-11-20 07:22:42.292565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:37.919 [2024-11-20 07:22:42.296543] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8
00:26:37.919 [2024-11-20 07:22:42.296713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.919 [2024-11-20 07:22:42.296731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:37.919 [2024-11-20 07:22:42.301055] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8
00:26:37.919 [2024-11-20 07:22:42.301233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:37.919 [2024-11-20 07:22:42.301251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:37.919 [2024-11-20 07:22:42.305144] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8
00:26:37.919 [2024-11-20 07:22:42.305292]
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.919 [2024-11-20 07:22:42.305310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:37.919 [2024-11-20 07:22:42.309181] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:37.919 [2024-11-20 07:22:42.309341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.919 [2024-11-20 07:22:42.309359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:37.919 [2024-11-20 07:22:42.313257] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:37.919 [2024-11-20 07:22:42.313392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.919 [2024-11-20 07:22:42.313410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:37.919 [2024-11-20 07:22:42.317211] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:37.919 [2024-11-20 07:22:42.317399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.919 [2024-11-20 07:22:42.317417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:37.919 [2024-11-20 07:22:42.321083] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 
00:26:37.919 [2024-11-20 07:22:42.321237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.919 [2024-11-20 07:22:42.321255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:37.919 [2024-11-20 07:22:42.325184] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:37.919 [2024-11-20 07:22:42.325357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.919 [2024-11-20 07:22:42.325375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:37.919 [2024-11-20 07:22:42.330307] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:37.919 [2024-11-20 07:22:42.330470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.919 [2024-11-20 07:22:42.330489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:37.919 [2024-11-20 07:22:42.335395] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:37.919 [2024-11-20 07:22:42.335566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.919 [2024-11-20 07:22:42.335585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:37.919 [2024-11-20 07:22:42.339972] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:37.919 [2024-11-20 07:22:42.340099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.919 [2024-11-20 07:22:42.340118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:37.919 [2024-11-20 07:22:42.343966] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:37.919 [2024-11-20 07:22:42.344109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.919 [2024-11-20 07:22:42.344127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:37.919 [2024-11-20 07:22:42.348073] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:37.919 [2024-11-20 07:22:42.348262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.919 [2024-11-20 07:22:42.348280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:37.919 [2024-11-20 07:22:42.352804] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:37.919 [2024-11-20 07:22:42.352928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.919 [2024-11-20 07:22:42.352953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:37.919 [2024-11-20 07:22:42.357922] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:37.919 [2024-11-20 07:22:42.358108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.919 [2024-11-20 07:22:42.358126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:37.919 [2024-11-20 07:22:42.362317] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:37.919 [2024-11-20 07:22:42.362480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.919 [2024-11-20 07:22:42.362497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:37.919 [2024-11-20 07:22:42.366386] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:37.919 [2024-11-20 07:22:42.366555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.919 [2024-11-20 07:22:42.366573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:37.919 [2024-11-20 07:22:42.370784] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:37.919 [2024-11-20 07:22:42.370958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.919 [2024-11-20 07:22:42.370976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 
dnr:0 00:26:37.919 [2024-11-20 07:22:42.375708] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:37.919 [2024-11-20 07:22:42.375862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.920 [2024-11-20 07:22:42.375879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:37.920 [2024-11-20 07:22:42.380761] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:37.920 [2024-11-20 07:22:42.380871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.920 [2024-11-20 07:22:42.380893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:37.920 [2024-11-20 07:22:42.385366] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:37.920 [2024-11-20 07:22:42.385455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.920 [2024-11-20 07:22:42.385473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:37.920 [2024-11-20 07:22:42.389574] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:37.920 [2024-11-20 07:22:42.389690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.920 [2024-11-20 07:22:42.389708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:37.920 [2024-11-20 07:22:42.393739] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:37.920 [2024-11-20 07:22:42.393878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.920 [2024-11-20 07:22:42.393896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:37.920 [2024-11-20 07:22:42.397944] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:37.920 [2024-11-20 07:22:42.398060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.920 [2024-11-20 07:22:42.398078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:37.920 [2024-11-20 07:22:42.402512] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:37.920 [2024-11-20 07:22:42.402638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.920 [2024-11-20 07:22:42.402656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:37.920 [2024-11-20 07:22:42.406926] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:37.920 [2024-11-20 07:22:42.407070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.920 [2024-11-20 07:22:42.407089] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:37.920 [2024-11-20 07:22:42.411546] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:37.920 [2024-11-20 07:22:42.411693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.920 [2024-11-20 07:22:42.411711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:37.920 [2024-11-20 07:22:42.415772] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:37.920 [2024-11-20 07:22:42.415916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.920 [2024-11-20 07:22:42.415933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:37.920 [2024-11-20 07:22:42.419768] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:37.920 [2024-11-20 07:22:42.419905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.920 [2024-11-20 07:22:42.419930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:37.920 [2024-11-20 07:22:42.423633] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:37.920 [2024-11-20 07:22:42.423770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.920 [2024-11-20 07:22:42.423787] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:37.920 [2024-11-20 07:22:42.427617] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:37.920 [2024-11-20 07:22:42.427755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.920 [2024-11-20 07:22:42.427773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:37.920 [2024-11-20 07:22:42.431589] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:37.920 [2024-11-20 07:22:42.431716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.920 [2024-11-20 07:22:42.431734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:37.920 [2024-11-20 07:22:42.435531] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:37.920 [2024-11-20 07:22:42.435666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.920 [2024-11-20 07:22:42.435684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:37.920 [2024-11-20 07:22:42.439560] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:37.920 [2024-11-20 07:22:42.439684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:26:37.920 [2024-11-20 07:22:42.439701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:37.920 [2024-11-20 07:22:42.443463] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:37.920 [2024-11-20 07:22:42.443589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.920 [2024-11-20 07:22:42.443607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:37.920 6889.50 IOPS, 861.19 MiB/s [2024-11-20T06:22:42.476Z] [2024-11-20 07:22:42.448393] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7bb4c0) with pdu=0x200016eff3c8 00:26:37.920 [2024-11-20 07:22:42.448481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.920 [2024-11-20 07:22:42.448500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:37.920 00:26:37.920 Latency(us) 00:26:37.920 [2024-11-20T06:22:42.476Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:37.920 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:26:37.920 nvme0n1 : 2.00 6886.79 860.85 0.00 0.00 2318.92 1731.01 8149.26 00:26:37.920 [2024-11-20T06:22:42.476Z] =================================================================================================================== 00:26:37.920 [2024-11-20T06:22:42.476Z] Total : 6886.79 860.85 0.00 0.00 2318.92 1731.01 8149.26 00:26:37.920 { 00:26:37.920 "results": [ 00:26:37.920 { 00:26:37.920 "job": "nvme0n1", 00:26:37.920 "core_mask": "0x2", 00:26:37.920 "workload": "randwrite", 
00:26:37.920 "status": "finished", 00:26:37.920 "queue_depth": 16, 00:26:37.920 "io_size": 131072, 00:26:37.920 "runtime": 2.003692, 00:26:37.920 "iops": 6886.786991214219, 00:26:37.920 "mibps": 860.8483739017773, 00:26:37.920 "io_failed": 0, 00:26:37.920 "io_timeout": 0, 00:26:37.920 "avg_latency_us": 2318.923703734045, 00:26:37.920 "min_latency_us": 1731.0052173913043, 00:26:37.920 "max_latency_us": 8149.2591304347825 00:26:37.920 } 00:26:37.920 ], 00:26:37.920 "core_count": 1 00:26:37.920 } 00:26:38.179 07:22:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:26:38.179 07:22:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:26:38.179 07:22:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:26:38.179 | .driver_specific 00:26:38.179 | .nvme_error 00:26:38.179 | .status_code 00:26:38.179 | .command_transient_transport_error' 00:26:38.179 07:22:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:26:38.179 07:22:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 446 > 0 )) 00:26:38.179 07:22:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1340522 00:26:38.179 07:22:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 1340522 ']' 00:26:38.179 07:22:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 1340522 00:26:38.179 07:22:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname 00:26:38.179 07:22:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:26:38.179 07:22:42 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1340522 00:26:38.438 07:22:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:26:38.438 07:22:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:26:38.438 07:22:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1340522' 00:26:38.438 killing process with pid 1340522 00:26:38.438 07:22:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 1340522 00:26:38.438 Received shutdown signal, test time was about 2.000000 seconds 00:26:38.438 00:26:38.438 Latency(us) 00:26:38.438 [2024-11-20T06:22:42.994Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:38.438 [2024-11-20T06:22:42.994Z] =================================================================================================================== 00:26:38.438 [2024-11-20T06:22:42.994Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:38.438 07:22:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 1340522 00:26:38.438 07:22:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 1338861 00:26:38.438 07:22:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 1338861 ']' 00:26:38.438 07:22:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 1338861 00:26:38.439 07:22:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname 00:26:38.439 07:22:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:26:38.439 07:22:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1338861 00:26:38.439 07:22:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:26:38.439 07:22:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:26:38.439 07:22:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1338861' 00:26:38.439 killing process with pid 1338861 00:26:38.439 07:22:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 1338861 00:26:38.439 07:22:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 1338861 00:26:38.698 00:26:38.698 real 0m14.058s 00:26:38.698 user 0m26.818s 00:26:38.698 sys 0m4.696s 00:26:38.698 07:22:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1128 -- # xtrace_disable 00:26:38.698 07:22:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:38.698 ************************************ 00:26:38.698 END TEST nvmf_digest_error 00:26:38.698 ************************************ 00:26:38.698 07:22:43 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:26:38.698 07:22:43 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:26:38.698 07:22:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:38.698 07:22:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:26:38.698 07:22:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:38.698 07:22:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:26:38.698 07:22:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:38.698 07:22:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v 
-r nvme-tcp 00:26:38.698 rmmod nvme_tcp 00:26:38.698 rmmod nvme_fabrics 00:26:38.698 rmmod nvme_keyring 00:26:38.698 07:22:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:38.698 07:22:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:26:38.698 07:22:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:26:38.698 07:22:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 1338861 ']' 00:26:38.698 07:22:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 1338861 00:26:38.698 07:22:43 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@952 -- # '[' -z 1338861 ']' 00:26:38.698 07:22:43 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@956 -- # kill -0 1338861 00:26:38.698 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (1338861) - No such process 00:26:38.698 07:22:43 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@979 -- # echo 'Process with pid 1338861 is not found' 00:26:38.698 Process with pid 1338861 is not found 00:26:38.698 07:22:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:38.698 07:22:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:38.699 07:22:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:38.699 07:22:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:26:38.699 07:22:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:26:38.699 07:22:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:38.699 07:22:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:26:38.699 07:22:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:38.699 07:22:43 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@302 -- # remove_spdk_ns 00:26:38.699 07:22:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:38.699 07:22:43 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:38.699 07:22:43 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:41.234 07:22:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:41.234 00:26:41.234 real 0m36.883s 00:26:41.234 user 0m55.766s 00:26:41.234 sys 0m13.892s 00:26:41.234 07:22:45 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1128 -- # xtrace_disable 00:26:41.234 07:22:45 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:26:41.234 ************************************ 00:26:41.234 END TEST nvmf_digest 00:26:41.234 ************************************ 00:26:41.234 07:22:45 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:26:41.234 07:22:45 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:26:41.234 07:22:45 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:26:41.234 07:22:45 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:26:41.234 07:22:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:26:41.234 07:22:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:26:41.234 07:22:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:41.234 ************************************ 00:26:41.234 START TEST nvmf_bdevperf 00:26:41.234 ************************************ 00:26:41.234 07:22:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:26:41.234 * Looking for 
test storage... 00:26:41.234 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:41.234 07:22:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:26:41.234 07:22:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1691 -- # lcov --version 00:26:41.234 07:22:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:26:41.234 07:22:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:26:41.234 07:22:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:41.234 07:22:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:41.234 07:22:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:41.234 07:22:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:26:41.234 07:22:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:26:41.234 07:22:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:26:41.234 07:22:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:26:41.234 07:22:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:26:41.234 07:22:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:26:41.234 07:22:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:26:41.234 07:22:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:41.234 07:22:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:26:41.234 07:22:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:26:41.234 07:22:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:41.234 07:22:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:41.234 07:22:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:26:41.234 07:22:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:26:41.234 07:22:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:41.234 07:22:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:26:41.234 07:22:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:26:41.234 07:22:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:26:41.234 07:22:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:26:41.234 07:22:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:41.234 07:22:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:26:41.234 07:22:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:26:41.234 07:22:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:41.234 07:22:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:41.234 07:22:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:26:41.234 07:22:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:41.234 07:22:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:26:41.234 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:41.234 --rc genhtml_branch_coverage=1 00:26:41.234 --rc genhtml_function_coverage=1 00:26:41.234 --rc genhtml_legend=1 00:26:41.234 --rc geninfo_all_blocks=1 00:26:41.234 --rc geninfo_unexecuted_blocks=1 00:26:41.234 00:26:41.234 ' 00:26:41.234 07:22:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:26:41.234 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:41.234 --rc genhtml_branch_coverage=1 00:26:41.234 --rc genhtml_function_coverage=1 00:26:41.234 --rc genhtml_legend=1 00:26:41.234 --rc geninfo_all_blocks=1 00:26:41.234 --rc geninfo_unexecuted_blocks=1 00:26:41.234 00:26:41.234 ' 00:26:41.235 07:22:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:26:41.235 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:41.235 --rc genhtml_branch_coverage=1 00:26:41.235 --rc genhtml_function_coverage=1 00:26:41.235 --rc genhtml_legend=1 00:26:41.235 --rc geninfo_all_blocks=1 00:26:41.235 --rc geninfo_unexecuted_blocks=1 00:26:41.235 00:26:41.235 ' 00:26:41.235 07:22:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:26:41.235 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:41.235 --rc genhtml_branch_coverage=1 00:26:41.235 --rc genhtml_function_coverage=1 00:26:41.235 --rc genhtml_legend=1 00:26:41.235 --rc geninfo_all_blocks=1 00:26:41.235 --rc geninfo_unexecuted_blocks=1 00:26:41.235 00:26:41.235 ' 00:26:41.235 07:22:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:41.235 07:22:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:26:41.235 07:22:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:41.235 07:22:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:41.235 07:22:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:41.235 07:22:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:41.235 07:22:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:41.235 07:22:45 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:41.235 07:22:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:41.235 07:22:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:41.235 07:22:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:41.235 07:22:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:41.235 07:22:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:26:41.235 07:22:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:26:41.235 07:22:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:41.235 07:22:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:41.235 07:22:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:41.235 07:22:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:41.235 07:22:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:41.235 07:22:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:26:41.235 07:22:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:41.235 07:22:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:41.235 07:22:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:41.235 07:22:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:41.235 07:22:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:41.235 07:22:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:41.235 07:22:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 
-- # export PATH 00:26:41.235 07:22:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:41.235 07:22:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:26:41.235 07:22:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:41.235 07:22:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:41.235 07:22:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:41.235 07:22:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:41.235 07:22:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:41.235 07:22:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:41.235 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:41.235 07:22:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:41.235 07:22:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:41.235 07:22:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:41.235 07:22:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:41.235 07:22:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:41.235 07:22:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:26:41.235 07:22:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:41.235 07:22:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:41.235 07:22:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:41.235 07:22:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:41.235 07:22:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:41.235 07:22:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:41.235 07:22:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:41.235 07:22:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:41.235 07:22:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:41.235 07:22:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:41.235 07:22:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable 00:26:41.235 07:22:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:47.802 07:22:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:47.802 07:22:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:26:47.802 07:22:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:47.802 07:22:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:47.802 07:22:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:47.802 07:22:51 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:47.802 07:22:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:47.802 07:22:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:26:47.802 07:22:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:47.802 07:22:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:26:47.802 07:22:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:26:47.802 07:22:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:26:47.802 07:22:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:26:47.802 07:22:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:26:47.802 07:22:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:26:47.802 07:22:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:47.802 07:22:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:47.802 07:22:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:47.802 07:22:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:47.802 07:22:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:47.802 07:22:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:47.802 07:22:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:47.802 07:22:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:47.802 07:22:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:47.802 07:22:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:47.802 07:22:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:47.802 07:22:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:47.802 07:22:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:47.802 07:22:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:47.802 07:22:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:47.802 07:22:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:47.802 07:22:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:47.802 07:22:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:47.802 07:22:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:47.802 07:22:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:26:47.802 Found 0000:86:00.0 (0x8086 - 0x159b) 00:26:47.802 07:22:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:47.802 07:22:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:47.802 07:22:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:47.802 07:22:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:47.802 07:22:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:47.802 07:22:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:47.802 
07:22:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:26:47.802 Found 0000:86:00.1 (0x8086 - 0x159b) 00:26:47.802 07:22:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:47.802 07:22:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:47.802 07:22:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:47.802 07:22:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:47.802 07:22:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:47.802 07:22:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:47.802 07:22:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:47.802 07:22:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:47.802 07:22:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:47.802 07:22:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:47.802 07:22:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:47.802 07:22:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:47.802 07:22:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:47.802 07:22:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:47.802 07:22:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:47.802 07:22:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:26:47.802 Found net devices under 0000:86:00.0: cvl_0_0 00:26:47.802 07:22:51 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:47.802 07:22:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:47.802 07:22:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:47.802 07:22:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:47.802 07:22:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:47.802 07:22:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:47.802 07:22:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:47.802 07:22:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:47.802 07:22:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:26:47.802 Found net devices under 0000:86:00.1: cvl_0_1 00:26:47.802 07:22:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:47.803 07:22:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:47.803 07:22:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # is_hw=yes 00:26:47.803 07:22:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:47.803 07:22:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:47.803 07:22:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:47.803 07:22:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:47.803 07:22:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:47.803 07:22:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 
00:26:47.803 07:22:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:47.803 07:22:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:47.803 07:22:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:47.803 07:22:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:47.803 07:22:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:47.803 07:22:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:47.803 07:22:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:47.803 07:22:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:47.803 07:22:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:47.803 07:22:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:47.803 07:22:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:47.803 07:22:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:47.803 07:22:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:47.803 07:22:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:47.803 07:22:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:47.803 07:22:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:47.803 07:22:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo 
up 00:26:47.803 07:22:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:47.803 07:22:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:47.803 07:22:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:47.803 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:47.803 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.427 ms 00:26:47.803 00:26:47.803 --- 10.0.0.2 ping statistics --- 00:26:47.803 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:47.803 rtt min/avg/max/mdev = 0.427/0.427/0.427/0.000 ms 00:26:47.803 07:22:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:47.803 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:47.803 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.213 ms 00:26:47.803 00:26:47.803 --- 10.0.0.1 ping statistics --- 00:26:47.803 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:47.803 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:26:47.803 07:22:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:47.803 07:22:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # return 0 00:26:47.803 07:22:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:47.803 07:22:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:47.803 07:22:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:47.803 07:22:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:47.803 07:22:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@493 -- # 
NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:47.803 07:22:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:47.803 07:22:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:47.803 07:22:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:26:47.803 07:22:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:26:47.803 07:22:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:47.803 07:22:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:47.803 07:22:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:47.803 07:22:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=1344537 00:26:47.803 07:22:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 1344537 00:26:47.803 07:22:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:26:47.803 07:22:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@833 -- # '[' -z 1344537 ']' 00:26:47.803 07:22:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:47.803 07:22:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:47.803 07:22:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:47.803 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:26:47.803 07:22:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:47.803 07:22:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:47.803 [2024-11-20 07:22:51.515546] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 00:26:47.803 [2024-11-20 07:22:51.515592] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:47.803 [2024-11-20 07:22:51.594777] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:47.803 [2024-11-20 07:22:51.637528] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:47.803 [2024-11-20 07:22:51.637563] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:47.803 [2024-11-20 07:22:51.637571] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:47.803 [2024-11-20 07:22:51.637577] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:47.803 [2024-11-20 07:22:51.637582] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:26:47.803 [2024-11-20 07:22:51.639025] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:47.803 [2024-11-20 07:22:51.639130] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:47.803 [2024-11-20 07:22:51.639131] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:47.803 07:22:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:47.803 07:22:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@866 -- # return 0 00:26:47.803 07:22:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:47.803 07:22:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:47.803 07:22:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:47.803 07:22:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:47.803 07:22:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:47.803 07:22:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:47.803 07:22:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:47.803 [2024-11-20 07:22:51.775957] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:47.803 07:22:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:47.803 07:22:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:47.803 07:22:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:47.803 07:22:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:47.803 Malloc0 00:26:47.803 07:22:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:26:47.803 07:22:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:47.803 07:22:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:47.803 07:22:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:47.803 07:22:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:47.803 07:22:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:47.803 07:22:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:47.803 07:22:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:47.803 07:22:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:47.803 07:22:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:47.803 07:22:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:47.803 07:22:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:47.803 [2024-11-20 07:22:51.843512] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:47.803 07:22:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:47.803 07:22:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:26:47.803 07:22:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:26:47.803 07:22:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:26:47.803 
07:22:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:26:47.803 07:22:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:47.803 07:22:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:47.803 { 00:26:47.803 "params": { 00:26:47.803 "name": "Nvme$subsystem", 00:26:47.803 "trtype": "$TEST_TRANSPORT", 00:26:47.803 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:47.803 "adrfam": "ipv4", 00:26:47.803 "trsvcid": "$NVMF_PORT", 00:26:47.803 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:47.803 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:47.803 "hdgst": ${hdgst:-false}, 00:26:47.803 "ddgst": ${ddgst:-false} 00:26:47.803 }, 00:26:47.803 "method": "bdev_nvme_attach_controller" 00:26:47.803 } 00:26:47.803 EOF 00:26:47.803 )") 00:26:47.803 07:22:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:26:47.803 07:22:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:26:47.803 07:22:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:26:47.803 07:22:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:26:47.803 "params": { 00:26:47.803 "name": "Nvme1", 00:26:47.803 "trtype": "tcp", 00:26:47.803 "traddr": "10.0.0.2", 00:26:47.803 "adrfam": "ipv4", 00:26:47.803 "trsvcid": "4420", 00:26:47.803 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:47.803 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:47.803 "hdgst": false, 00:26:47.803 "ddgst": false 00:26:47.803 }, 00:26:47.803 "method": "bdev_nvme_attach_controller" 00:26:47.803 }' 00:26:47.803 [2024-11-20 07:22:51.896723] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 
00:26:47.803 [2024-11-20 07:22:51.896778] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1344724 ] 00:26:47.803 [2024-11-20 07:22:51.973845] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:47.803 [2024-11-20 07:22:52.015393] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:47.803 Running I/O for 1 seconds... 00:26:48.780 10814.00 IOPS, 42.24 MiB/s 00:26:48.780 Latency(us) 00:26:48.780 [2024-11-20T06:22:53.336Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:48.780 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:26:48.780 Verification LBA range: start 0x0 length 0x4000 00:26:48.780 Nvme1n1 : 1.01 10843.72 42.36 0.00 0.00 11755.56 1346.34 16070.57 00:26:48.780 [2024-11-20T06:22:53.336Z] =================================================================================================================== 00:26:48.780 [2024-11-20T06:22:53.336Z] Total : 10843.72 42.36 0.00 0.00 11755.56 1346.34 16070.57 00:26:49.049 07:22:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=1345000 00:26:49.049 07:22:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:26:49.049 07:22:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:26:49.049 07:22:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:26:49.049 07:22:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:26:49.049 07:22:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:26:49.049 07:22:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for 
subsystem in "${@:-1}" 00:26:49.049 07:22:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:49.049 { 00:26:49.049 "params": { 00:26:49.049 "name": "Nvme$subsystem", 00:26:49.049 "trtype": "$TEST_TRANSPORT", 00:26:49.049 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:49.049 "adrfam": "ipv4", 00:26:49.049 "trsvcid": "$NVMF_PORT", 00:26:49.049 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:49.049 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:49.049 "hdgst": ${hdgst:-false}, 00:26:49.049 "ddgst": ${ddgst:-false} 00:26:49.049 }, 00:26:49.049 "method": "bdev_nvme_attach_controller" 00:26:49.049 } 00:26:49.049 EOF 00:26:49.049 )") 00:26:49.049 07:22:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:26:49.049 07:22:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:26:49.049 07:22:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:26:49.049 07:22:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:26:49.049 "params": { 00:26:49.049 "name": "Nvme1", 00:26:49.049 "trtype": "tcp", 00:26:49.050 "traddr": "10.0.0.2", 00:26:49.050 "adrfam": "ipv4", 00:26:49.050 "trsvcid": "4420", 00:26:49.050 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:49.050 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:49.050 "hdgst": false, 00:26:49.050 "ddgst": false 00:26:49.050 }, 00:26:49.050 "method": "bdev_nvme_attach_controller" 00:26:49.050 }' 00:26:49.050 [2024-11-20 07:22:53.392330] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 
00:26:49.050 [2024-11-20 07:22:53.392378] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1345000 ] 00:26:49.050 [2024-11-20 07:22:53.465644] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:49.050 [2024-11-20 07:22:53.506870] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:49.307 Running I/O for 15 seconds... 00:26:51.183 11074.00 IOPS, 43.26 MiB/s [2024-11-20T06:22:56.679Z] 11101.00 IOPS, 43.36 MiB/s [2024-11-20T06:22:56.679Z] 07:22:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 1344537 00:26:52.123 07:22:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:26:52.123 [2024-11-20 07:22:56.362451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:107568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.123 [2024-11-20 07:22:56.362486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.123 [2024-11-20 07:22:56.362503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:107576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.123 [2024-11-20 07:22:56.362513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.123 [2024-11-20 07:22:56.362523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:107584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.123 [2024-11-20 07:22:56.362531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.123 [2024-11-20 07:22:56.362540] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:16 nsid:1 lba:107592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.123 [2024-11-20 07:22:56.362548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.123 [2024-11-20 07:22:56.362561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:107600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.123 [2024-11-20 07:22:56.362568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.123 [2024-11-20 07:22:56.362576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:107608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.123 [2024-11-20 07:22:56.362584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.123 [2024-11-20 07:22:56.362594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:107616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.123 [2024-11-20 07:22:56.362605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.123 [2024-11-20 07:22:56.362614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:107624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.123 [2024-11-20 07:22:56.362621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.123 [2024-11-20 07:22:56.362630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:108480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:52.123 [2024-11-20 07:22:56.362637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:26:52.123 [2024-11-20 07:22:56.362646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:108488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:52.123 [2024-11-20 07:22:56.362653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.123 [2024-11-20 07:22:56.362661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:108496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:52.123 [2024-11-20 07:22:56.362668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.123 [2024-11-20 07:22:56.362676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:108504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:52.123 [2024-11-20 07:22:56.362685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.123 [2024-11-20 07:22:56.362694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:108512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:52.123 [2024-11-20 07:22:56.362700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.124 [2024-11-20 07:22:56.362709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:108520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:52.124 [2024-11-20 07:22:56.362717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.124 [2024-11-20 07:22:56.362726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:107632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.124 [2024-11-20 
07:22:56.362734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.124 [2024-11-20 07:22:56.362742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:107640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.124 [2024-11-20 07:22:56.362750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.124 [2024-11-20 07:22:56.362759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:107648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.124 [2024-11-20 07:22:56.362769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.124 [2024-11-20 07:22:56.362779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:107656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.124 [2024-11-20 07:22:56.362788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.124 [2024-11-20 07:22:56.362798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:107664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.124 [2024-11-20 07:22:56.362808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.124 [2024-11-20 07:22:56.362817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:107672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.124 [2024-11-20 07:22:56.362826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.124 [2024-11-20 07:22:56.362836] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:42 nsid:1 lba:107680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.124 [2024-11-20 07:22:56.362842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.124 [2024-11-20 07:22:56.362851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:107688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.124 [2024-11-20 07:22:56.362857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.124 [2024-11-20 07:22:56.362865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:107696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.124 [2024-11-20 07:22:56.362872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.124 [2024-11-20 07:22:56.362880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:107704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.124 [2024-11-20 07:22:56.362887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.124 [2024-11-20 07:22:56.362895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:107712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.124 [2024-11-20 07:22:56.362902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.124 [2024-11-20 07:22:56.362910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:107720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.124 [2024-11-20 07:22:56.362917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:26:52.124 [2024-11-20 07:22:56.362926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:107728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.124 [2024-11-20 07:22:56.362932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.124 [2024-11-20 07:22:56.362941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:107736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.124 [2024-11-20 07:22:56.363051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.124 [2024-11-20 07:22:56.363061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:107744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.124 [2024-11-20 07:22:56.363068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.124 [2024-11-20 07:22:56.363077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:107752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.124 [2024-11-20 07:22:56.363087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.124 [2024-11-20 07:22:56.363097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:107760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.124 [2024-11-20 07:22:56.363103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.124 [2024-11-20 07:22:56.363112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:107768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.124 [2024-11-20 
07:22:56.363118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.124 [2024-11-20 07:22:56.363126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:107776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.124 [2024-11-20 07:22:56.363133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.124 [2024-11-20 07:22:56.363141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:108528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:52.124 [2024-11-20 07:22:56.363149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.124 [2024-11-20 07:22:56.363157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:108536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:52.124 [2024-11-20 07:22:56.363164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.124 [2024-11-20 07:22:56.363172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:107784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.124 [2024-11-20 07:22:56.363179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.124 [2024-11-20 07:22:56.363187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:107792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.124 [2024-11-20 07:22:56.363194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.124 [2024-11-20 07:22:56.363202] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:109 nsid:1 lba:107800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.124 [2024-11-20 07:22:56.363208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.124 [2024-11-20 07:22:56.363216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:107808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.124 [2024-11-20 07:22:56.363223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.124 [2024-11-20 07:22:56.363231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:107816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.124 [2024-11-20 07:22:56.363238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.124 [2024-11-20 07:22:56.363246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:107824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.124 [2024-11-20 07:22:56.363252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.124 [2024-11-20 07:22:56.363261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:107832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.124 [2024-11-20 07:22:56.363267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.124 [2024-11-20 07:22:56.363277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:107840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.124 [2024-11-20 07:22:56.363284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:26:52.124 [2024-11-20 07:22:56.363292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:107848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.124 [2024-11-20 07:22:56.363299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.124 [2024-11-20 07:22:56.363307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:107856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.124 [2024-11-20 07:22:56.363313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.125 [2024-11-20 07:22:56.363321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:107864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.125 [2024-11-20 07:22:56.363328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.125 [2024-11-20 07:22:56.363336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:107872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.125 [2024-11-20 07:22:56.363344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.125 [2024-11-20 07:22:56.363352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:107880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.125 [2024-11-20 07:22:56.363358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.125 [2024-11-20 07:22:56.363367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:107888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.125 [2024-11-20 
07:22:56.363373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.125 [2024-11-20 07:22:56.363381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:107896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.125 [2024-11-20 07:22:56.363388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.125 [2024-11-20 07:22:56.363397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:107904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.125 [2024-11-20 07:22:56.363403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.125 [2024-11-20 07:22:56.363411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:107912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.125 [2024-11-20 07:22:56.363418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.125 [2024-11-20 07:22:56.363426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:107920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.125 [2024-11-20 07:22:56.363432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.125 [2024-11-20 07:22:56.363440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:107928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.125 [2024-11-20 07:22:56.363447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.125 [2024-11-20 07:22:56.363456] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:45 nsid:1 lba:107936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.125 [2024-11-20 07:22:56.363463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.125 [2024-11-20 07:22:56.363481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:107944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.125 [2024-11-20 07:22:56.363487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.125 [2024-11-20 07:22:56.363495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:107952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.125 [2024-11-20 07:22:56.363502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.125 [2024-11-20 07:22:56.363510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:107960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.125 [2024-11-20 07:22:56.363517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.125 [2024-11-20 07:22:56.363525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:108544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:52.125 [2024-11-20 07:22:56.363531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.125 [2024-11-20 07:22:56.363539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:108552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:52.125 [2024-11-20 07:22:56.363546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:26:52.125 [2024-11-20 07:22:56.363554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:108560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:52.125 [2024-11-20 07:22:56.363561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.125 [2024-11-20 07:22:56.363569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:108568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:52.125 [2024-11-20 07:22:56.363575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.125 [2024-11-20 07:22:56.363583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:108576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:52.125 [2024-11-20 07:22:56.363589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.125 [2024-11-20 07:22:56.363597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:108584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:52.125 [2024-11-20 07:22:56.363604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.125 [2024-11-20 07:22:56.363612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:107968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.125 [2024-11-20 07:22:56.363619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.125 [2024-11-20 07:22:56.363628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:107976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.125 [2024-11-20 
07:22:56.363635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.125 [2024-11-20 07:22:56.363643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:107984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.125 [2024-11-20 07:22:56.363649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.125 [2024-11-20 07:22:56.363659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:107992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.125 [2024-11-20 07:22:56.363665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.125 [2024-11-20 07:22:56.363674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:108000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.125 [2024-11-20 07:22:56.363680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.125 [2024-11-20 07:22:56.363688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:108008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.125 [2024-11-20 07:22:56.363695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.125 [2024-11-20 07:22:56.363703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:108016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:52.125 [2024-11-20 07:22:56.363709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.125 [2024-11-20 07:22:56.363718] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:112 nsid:1 lba:108024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:52.125 [2024-11-20 07:22:56.363725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:52.125 [2024-11-20 07:22:56.363733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:108032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:52.125 [2024-11-20 07:22:56.363739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:52.125 [2024-11-20 07:22:56.363747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:108040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:52.125 [2024-11-20 07:22:56.363754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:52.126 [2024-11-20 07:22:56.363763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:108048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:52.126 [2024-11-20 07:22:56.363769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:52.126 [2024-11-20 07:22:56.363777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:108056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:52.126 [2024-11-20 07:22:56.363783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:52.126 [2024-11-20 07:22:56.363791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:108064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:52.126 [2024-11-20 07:22:56.363798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:52.126 [2024-11-20 07:22:56.363806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:108072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:52.126 [2024-11-20 07:22:56.363812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:52.126 [2024-11-20 07:22:56.363820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:108080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:52.126 [2024-11-20 07:22:56.363827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:52.126 [2024-11-20 07:22:56.363835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:108088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:52.126 [2024-11-20 07:22:56.363843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:52.126 [2024-11-20 07:22:56.363851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:108096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:52.126 [2024-11-20 07:22:56.363858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:52.126 [2024-11-20 07:22:56.363866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:108104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:52.126 [2024-11-20 07:22:56.363873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:52.126 [2024-11-20 07:22:56.363881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:108112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:52.126 [2024-11-20 07:22:56.363888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:52.126 [2024-11-20 07:22:56.363896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:108120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:52.126 [2024-11-20 07:22:56.363902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:52.126 [2024-11-20 07:22:56.363910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:108128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:52.126 [2024-11-20 07:22:56.363917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:52.126 [2024-11-20 07:22:56.363925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:108136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:52.126 [2024-11-20 07:22:56.363932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:52.126 [2024-11-20 07:22:56.363940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:108144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:52.126 [2024-11-20 07:22:56.363950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:52.126 [2024-11-20 07:22:56.363960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:108152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:52.126 [2024-11-20 07:22:56.363966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:52.126 [2024-11-20 07:22:56.363974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:108160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:52.126 [2024-11-20 07:22:56.363981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:52.126 [2024-11-20 07:22:56.363989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:108168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:52.126 [2024-11-20 07:22:56.363995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:52.126 [2024-11-20 07:22:56.364003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:108176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:52.126 [2024-11-20 07:22:56.364010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:52.126 [2024-11-20 07:22:56.364018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:108184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:52.126 [2024-11-20 07:22:56.364025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:52.126 [2024-11-20 07:22:56.364035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:108192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:52.126 [2024-11-20 07:22:56.364042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:52.126 [2024-11-20 07:22:56.364050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:108200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:52.126 [2024-11-20 07:22:56.364056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:52.126 [2024-11-20 07:22:56.364064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:108208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:52.126 [2024-11-20 07:22:56.364071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:52.126 [2024-11-20 07:22:56.364079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:108216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:52.126 [2024-11-20 07:22:56.364086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:52.126 [2024-11-20 07:22:56.364094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:108224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:52.126 [2024-11-20 07:22:56.364100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:52.126 [2024-11-20 07:22:56.364109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:108232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:52.126 [2024-11-20 07:22:56.364116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:52.126 [2024-11-20 07:22:56.364124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:108240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:52.126 [2024-11-20 07:22:56.364130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:52.126 [2024-11-20 07:22:56.364139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:108248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:52.126 [2024-11-20 07:22:56.364145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:52.126 [2024-11-20 07:22:56.364153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:108256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:52.126 [2024-11-20 07:22:56.364160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:52.126 [2024-11-20 07:22:56.364168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:108264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:52.126 [2024-11-20 07:22:56.364174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:52.126 [2024-11-20 07:22:56.364182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:108272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:52.126 [2024-11-20 07:22:56.364189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:52.126 [2024-11-20 07:22:56.364198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:108280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:52.126 [2024-11-20 07:22:56.364205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:52.126 [2024-11-20 07:22:56.364213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:108288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:52.126 [2024-11-20 07:22:56.364223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:52.126 [2024-11-20 07:22:56.364232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:108296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:52.126 [2024-11-20 07:22:56.364239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:52.127 [2024-11-20 07:22:56.364247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:108304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:52.127 [2024-11-20 07:22:56.364253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:52.127 [2024-11-20 07:22:56.364262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:108312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:52.127 [2024-11-20 07:22:56.364268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:52.127 [2024-11-20 07:22:56.364276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:108320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:52.127 [2024-11-20 07:22:56.364283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:52.127 [2024-11-20 07:22:56.364291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:108328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:52.127 [2024-11-20 07:22:56.364298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:52.127 [2024-11-20 07:22:56.364306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:108336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:52.127 [2024-11-20 07:22:56.364312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:52.127 [2024-11-20 07:22:56.364320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:108344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:52.127 [2024-11-20 07:22:56.364327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:52.127 [2024-11-20 07:22:56.364335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:108352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:52.127 [2024-11-20 07:22:56.364341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:52.127 [2024-11-20 07:22:56.364350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:108360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:52.127 [2024-11-20 07:22:56.364357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:52.127 [2024-11-20 07:22:56.364365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:108368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:52.127 [2024-11-20 07:22:56.364371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:52.127 [2024-11-20 07:22:56.364380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:108376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:52.127 [2024-11-20 07:22:56.364386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:52.127 [2024-11-20 07:22:56.364394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:108384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:52.127 [2024-11-20 07:22:56.364401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:52.127 [2024-11-20 07:22:56.364411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:108392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:52.127 [2024-11-20 07:22:56.364418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:52.127 [2024-11-20 07:22:56.364426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:108400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:52.127 [2024-11-20 07:22:56.364432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:52.127 [2024-11-20 07:22:56.364441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:108408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:52.127 [2024-11-20 07:22:56.364448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:52.127 [2024-11-20 07:22:56.364456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:108416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:52.127 [2024-11-20 07:22:56.364463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:52.127 [2024-11-20 07:22:56.364471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:108424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:52.127 [2024-11-20 07:22:56.364478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:52.127 [2024-11-20 07:22:56.364486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:108432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:52.127 [2024-11-20 07:22:56.364492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:52.127 [2024-11-20 07:22:56.364501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:108440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:52.127 [2024-11-20 07:22:56.364507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:52.127 [2024-11-20 07:22:56.364515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:108448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:52.127 [2024-11-20 07:22:56.364522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:52.127 [2024-11-20 07:22:56.364530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:108456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:52.127 [2024-11-20 07:22:56.364536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:52.127 [2024-11-20 07:22:56.364545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:108464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:52.127 [2024-11-20 07:22:56.364551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:52.127 [2024-11-20 07:22:56.364559] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e2ba0 is same with the state(6) to be set
00:26:52.127 [2024-11-20 07:22:56.364567] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:52.127 [2024-11-20 07:22:56.364572] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:52.127 [2024-11-20 07:22:56.364578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:108472 len:8 PRP1 0x0 PRP2 0x0
00:26:52.127 [2024-11-20 07:22:56.364586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:52.127 [2024-11-20 07:22:56.367497] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:52.127 [2024-11-20 07:22:56.367553] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor
00:26:52.127 [2024-11-20 07:22:56.368159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:52.127 [2024-11-20 07:22:56.368177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420
00:26:52.127 [2024-11-20 07:22:56.368186] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set
00:26:52.127 [2024-11-20 07:22:56.368364] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor
00:26:52.127 [2024-11-20 07:22:56.368543] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:52.127 [2024-11-20 07:22:56.368551] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:52.127 [2024-11-20 07:22:56.368558] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:52.127 [2024-11-20 07:22:56.368566] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:52.127 [2024-11-20 07:22:56.380758] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:52.127 [2024-11-20 07:22:56.381192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:52.127 [2024-11-20 07:22:56.381241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420
00:26:52.127 [2024-11-20 07:22:56.381266] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set
00:26:52.128 [2024-11-20 07:22:56.381847] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor
00:26:52.128 [2024-11-20 07:22:56.382378] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:52.128 [2024-11-20 07:22:56.382387] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:52.128 [2024-11-20 07:22:56.382394] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:52.128 [2024-11-20 07:22:56.382401] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:52.128 [2024-11-20 07:22:56.393668] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:52.128 [2024-11-20 07:22:56.394122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:52.128 [2024-11-20 07:22:56.394139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420
00:26:52.128 [2024-11-20 07:22:56.394147] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set
00:26:52.128 [2024-11-20 07:22:56.394319] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor
00:26:52.128 [2024-11-20 07:22:56.394491] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:52.128 [2024-11-20 07:22:56.394500] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:52.128 [2024-11-20 07:22:56.394506] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:52.128 [2024-11-20 07:22:56.394512] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:52.128 [2024-11-20 07:22:56.406668] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:52.128 [2024-11-20 07:22:56.407091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:52.128 [2024-11-20 07:22:56.407108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420
00:26:52.128 [2024-11-20 07:22:56.407119] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set
00:26:52.128 [2024-11-20 07:22:56.407291] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor
00:26:52.128 [2024-11-20 07:22:56.407464] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:52.128 [2024-11-20 07:22:56.407473] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:52.128 [2024-11-20 07:22:56.407479] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:52.128 [2024-11-20 07:22:56.407485] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:52.128 [2024-11-20 07:22:56.419507] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:52.128 [2024-11-20 07:22:56.419905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:52.128 [2024-11-20 07:22:56.419922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420
00:26:52.128 [2024-11-20 07:22:56.419929] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set
00:26:52.128 [2024-11-20 07:22:56.420121] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor
00:26:52.128 [2024-11-20 07:22:56.420293] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:52.128 [2024-11-20 07:22:56.420302] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:52.128 [2024-11-20 07:22:56.420308] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:52.128 [2024-11-20 07:22:56.420314] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:52.128 [2024-11-20 07:22:56.432305] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:52.128 [2024-11-20 07:22:56.432760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:52.128 [2024-11-20 07:22:56.432805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420
00:26:52.128 [2024-11-20 07:22:56.432828] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set
00:26:52.128 [2024-11-20 07:22:56.433329] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor
00:26:52.128 [2024-11-20 07:22:56.433502] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:52.128 [2024-11-20 07:22:56.433510] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:52.128 [2024-11-20 07:22:56.433516] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:52.128 [2024-11-20 07:22:56.433522] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:52.128 [2024-11-20 07:22:56.445205] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:52.128 [2024-11-20 07:22:56.445615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:52.128 [2024-11-20 07:22:56.445661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420
00:26:52.128 [2024-11-20 07:22:56.445684] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set
00:26:52.128 [2024-11-20 07:22:56.446279] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor
00:26:52.128 [2024-11-20 07:22:56.446479] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:52.128 [2024-11-20 07:22:56.446488] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:52.128 [2024-11-20 07:22:56.446494] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:52.128 [2024-11-20 07:22:56.446500] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:52.128 [2024-11-20 07:22:56.458125] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:52.128 [2024-11-20 07:22:56.458496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:52.128 [2024-11-20 07:22:56.458513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420
00:26:52.128 [2024-11-20 07:22:56.458519] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set
00:26:52.128 [2024-11-20 07:22:56.458682] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor
00:26:52.128 [2024-11-20 07:22:56.458845] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:52.128 [2024-11-20 07:22:56.458853] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:52.128 [2024-11-20 07:22:56.458859] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:52.128 [2024-11-20 07:22:56.458865] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:52.128 [2024-11-20 07:22:56.471039] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:52.129 [2024-11-20 07:22:56.471464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:52.129 [2024-11-20 07:22:56.471481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420
00:26:52.129 [2024-11-20 07:22:56.471488] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set
00:26:52.129 [2024-11-20 07:22:56.471661] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor
00:26:52.129 [2024-11-20 07:22:56.471837] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:52.129 [2024-11-20 07:22:56.471845] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:52.129 [2024-11-20 07:22:56.471852] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:52.129 [2024-11-20 07:22:56.471859] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:52.129 [2024-11-20 07:22:56.483853] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:52.129 [2024-11-20 07:22:56.484291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:52.129 [2024-11-20 07:22:56.484336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420
00:26:52.129 [2024-11-20 07:22:56.484359] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set
00:26:52.129 [2024-11-20 07:22:56.484835] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor
00:26:52.129 [2024-11-20 07:22:56.485014] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:52.129 [2024-11-20 07:22:56.485023] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:52.129 [2024-11-20 07:22:56.485032] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:52.129 [2024-11-20 07:22:56.485039] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:52.129 [2024-11-20 07:22:56.496908] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:52.129 [2024-11-20 07:22:56.497355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:52.129 [2024-11-20 07:22:56.497373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420
00:26:52.129 [2024-11-20 07:22:56.497380] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set
00:26:52.129 [2024-11-20 07:22:56.497543] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor
00:26:52.129 [2024-11-20 07:22:56.497706] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:52.129 [2024-11-20 07:22:56.497714] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:52.129 [2024-11-20 07:22:56.497720] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:52.129 [2024-11-20 07:22:56.497726] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:52.129 [2024-11-20 07:22:56.509746] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:52.129 [2024-11-20 07:22:56.510165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:52.129 [2024-11-20 07:22:56.510184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420
00:26:52.129 [2024-11-20 07:22:56.510192] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set
00:26:52.129 [2024-11-20 07:22:56.510365] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor
00:26:52.129 [2024-11-20 07:22:56.510538] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:52.129 [2024-11-20 07:22:56.510547] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:52.129 [2024-11-20 07:22:56.510553] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:52.129 [2024-11-20 07:22:56.510560] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:52.129 [2024-11-20 07:22:56.522592] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:52.129 [2024-11-20 07:22:56.523017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:52.129 [2024-11-20 07:22:56.523034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420
00:26:52.129 [2024-11-20 07:22:56.523042] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set
00:26:52.129 [2024-11-20 07:22:56.523215] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor
00:26:52.129 [2024-11-20 07:22:56.523389] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:52.129 [2024-11-20 07:22:56.523397] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:52.129 [2024-11-20 07:22:56.523404] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:52.129 [2024-11-20 07:22:56.523410] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:52.129 [2024-11-20 07:22:56.535544] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:52.129 [2024-11-20 07:22:56.535982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:52.129 [2024-11-20 07:22:56.535999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420
00:26:52.129 [2024-11-20 07:22:56.536006] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set
00:26:52.129 [2024-11-20 07:22:56.536187] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor
00:26:52.129 [2024-11-20 07:22:56.536350] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:52.129 [2024-11-20 07:22:56.536358] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:52.129 [2024-11-20 07:22:56.536364] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:52.129 [2024-11-20 07:22:56.536370] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:52.129 [2024-11-20 07:22:56.548420] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:52.129 [2024-11-20 07:22:56.548851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.129 [2024-11-20 07:22:56.548883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420 00:26:52.129 [2024-11-20 07:22:56.548905] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set 00:26:52.129 [2024-11-20 07:22:56.549468] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor 00:26:52.129 [2024-11-20 07:22:56.549642] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:52.129 [2024-11-20 07:22:56.549651] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:52.129 [2024-11-20 07:22:56.549658] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:52.130 [2024-11-20 07:22:56.549665] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:52.130 [2024-11-20 07:22:56.561388] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:52.130 [2024-11-20 07:22:56.561837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.130 [2024-11-20 07:22:56.561882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420 00:26:52.130 [2024-11-20 07:22:56.561905] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set 00:26:52.130 [2024-11-20 07:22:56.562503] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor 00:26:52.130 [2024-11-20 07:22:56.562771] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:52.130 [2024-11-20 07:22:56.562779] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:52.130 [2024-11-20 07:22:56.562786] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:52.130 [2024-11-20 07:22:56.562793] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:52.130 [2024-11-20 07:22:56.574469] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:52.130 [2024-11-20 07:22:56.574813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.130 [2024-11-20 07:22:56.574866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420 00:26:52.130 [2024-11-20 07:22:56.574897] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set 00:26:52.130 [2024-11-20 07:22:56.575493] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor 00:26:52.130 [2024-11-20 07:22:56.576019] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:52.130 [2024-11-20 07:22:56.576030] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:52.130 [2024-11-20 07:22:56.576038] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:52.130 [2024-11-20 07:22:56.576046] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:52.130 [2024-11-20 07:22:56.587448] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:52.130 [2024-11-20 07:22:56.587897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.130 [2024-11-20 07:22:56.587915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420 00:26:52.130 [2024-11-20 07:22:56.587923] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set 00:26:52.130 [2024-11-20 07:22:56.588102] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor 00:26:52.130 [2024-11-20 07:22:56.588276] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:52.130 [2024-11-20 07:22:56.588284] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:52.130 [2024-11-20 07:22:56.588290] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:52.130 [2024-11-20 07:22:56.588297] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:52.130 [2024-11-20 07:22:56.600290] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:52.130 [2024-11-20 07:22:56.600649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.130 [2024-11-20 07:22:56.600666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420 00:26:52.130 [2024-11-20 07:22:56.600674] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set 00:26:52.130 [2024-11-20 07:22:56.600847] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor 00:26:52.130 [2024-11-20 07:22:56.601028] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:52.130 [2024-11-20 07:22:56.601037] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:52.130 [2024-11-20 07:22:56.601044] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:52.130 [2024-11-20 07:22:56.601050] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:52.130 [2024-11-20 07:22:56.613154] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:52.130 [2024-11-20 07:22:56.613601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.130 [2024-11-20 07:22:56.613617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420 00:26:52.130 [2024-11-20 07:22:56.613624] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set 00:26:52.130 [2024-11-20 07:22:56.613796] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor 00:26:52.130 [2024-11-20 07:22:56.613977] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:52.130 [2024-11-20 07:22:56.614002] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:52.130 [2024-11-20 07:22:56.614009] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:52.130 [2024-11-20 07:22:56.614016] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:52.130 [2024-11-20 07:22:56.626224] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:52.130 [2024-11-20 07:22:56.626586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.130 [2024-11-20 07:22:56.626603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420 00:26:52.130 [2024-11-20 07:22:56.626611] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set 00:26:52.130 [2024-11-20 07:22:56.626789] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor 00:26:52.130 [2024-11-20 07:22:56.626973] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:52.130 [2024-11-20 07:22:56.626982] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:52.130 [2024-11-20 07:22:56.626989] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:52.130 [2024-11-20 07:22:56.626995] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:52.130 [2024-11-20 07:22:56.639388] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:52.130 [2024-11-20 07:22:56.639817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.130 [2024-11-20 07:22:56.639835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420 00:26:52.130 [2024-11-20 07:22:56.639842] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set 00:26:52.130 [2024-11-20 07:22:56.640024] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor 00:26:52.130 [2024-11-20 07:22:56.640208] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:52.130 [2024-11-20 07:22:56.640216] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:52.130 [2024-11-20 07:22:56.640223] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:52.130 [2024-11-20 07:22:56.640229] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:52.130 [2024-11-20 07:22:56.652491] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:52.130 [2024-11-20 07:22:56.652915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.130 [2024-11-20 07:22:56.652972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420 00:26:52.130 [2024-11-20 07:22:56.652997] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set 00:26:52.130 [2024-11-20 07:22:56.653577] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor 00:26:52.130 [2024-11-20 07:22:56.654039] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:52.131 [2024-11-20 07:22:56.654047] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:52.131 [2024-11-20 07:22:56.654057] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:52.131 [2024-11-20 07:22:56.654063] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:52.131 [2024-11-20 07:22:56.665418] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:52.131 [2024-11-20 07:22:56.665874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.131 [2024-11-20 07:22:56.665919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420 00:26:52.131 [2024-11-20 07:22:56.665942] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set 00:26:52.131 [2024-11-20 07:22:56.666563] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor 00:26:52.131 [2024-11-20 07:22:56.666893] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:52.131 [2024-11-20 07:22:56.666901] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:52.131 [2024-11-20 07:22:56.666909] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:52.131 [2024-11-20 07:22:56.666915] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:52.391 9943.33 IOPS, 38.84 MiB/s [2024-11-20T06:22:56.947Z] [2024-11-20 07:22:56.679414] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:52.391 [2024-11-20 07:22:56.679757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.391 [2024-11-20 07:22:56.679775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420 00:26:52.391 [2024-11-20 07:22:56.679782] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set 00:26:52.391 [2024-11-20 07:22:56.679961] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor 00:26:52.391 [2024-11-20 07:22:56.680134] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:52.391 [2024-11-20 07:22:56.680142] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:52.391 [2024-11-20 07:22:56.680148] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:52.391 [2024-11-20 07:22:56.680154] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:52.391 [2024-11-20 07:22:56.692242] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:52.391 [2024-11-20 07:22:56.692677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.391 [2024-11-20 07:22:56.692723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420 00:26:52.391 [2024-11-20 07:22:56.692747] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set 00:26:52.391 [2024-11-20 07:22:56.693210] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor 00:26:52.391 [2024-11-20 07:22:56.693384] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:52.391 [2024-11-20 07:22:56.693392] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:52.391 [2024-11-20 07:22:56.693400] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:52.391 [2024-11-20 07:22:56.693406] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:52.391 [2024-11-20 07:22:56.705096] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:52.391 [2024-11-20 07:22:56.705494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.391 [2024-11-20 07:22:56.705510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420 00:26:52.391 [2024-11-20 07:22:56.705517] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set 00:26:52.391 [2024-11-20 07:22:56.705680] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor 00:26:52.391 [2024-11-20 07:22:56.705842] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:52.391 [2024-11-20 07:22:56.705850] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:52.391 [2024-11-20 07:22:56.705855] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:52.391 [2024-11-20 07:22:56.705861] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:52.391 [2024-11-20 07:22:56.718021] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:52.391 [2024-11-20 07:22:56.718404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.391 [2024-11-20 07:22:56.718447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420 00:26:52.392 [2024-11-20 07:22:56.718470] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set 00:26:52.392 [2024-11-20 07:22:56.718968] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor 00:26:52.392 [2024-11-20 07:22:56.719133] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:52.392 [2024-11-20 07:22:56.719141] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:52.392 [2024-11-20 07:22:56.719147] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:52.392 [2024-11-20 07:22:56.719152] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:52.392 [2024-11-20 07:22:56.730957] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:52.392 [2024-11-20 07:22:56.731383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.392 [2024-11-20 07:22:56.731427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420 00:26:52.392 [2024-11-20 07:22:56.731449] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set 00:26:52.392 [2024-11-20 07:22:56.731930] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor 00:26:52.392 [2024-11-20 07:22:56.732122] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:52.392 [2024-11-20 07:22:56.732131] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:52.392 [2024-11-20 07:22:56.732137] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:52.392 [2024-11-20 07:22:56.732143] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:52.392 [2024-11-20 07:22:56.743764] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:52.392 [2024-11-20 07:22:56.744192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.392 [2024-11-20 07:22:56.744238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420 00:26:52.392 [2024-11-20 07:22:56.744269] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set 00:26:52.392 [2024-11-20 07:22:56.744848] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor 00:26:52.392 [2024-11-20 07:22:56.745324] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:52.392 [2024-11-20 07:22:56.745333] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:52.392 [2024-11-20 07:22:56.745339] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:52.392 [2024-11-20 07:22:56.745346] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:52.392 [2024-11-20 07:22:56.756696] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:52.392 [2024-11-20 07:22:56.757095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.392 [2024-11-20 07:22:56.757112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420 00:26:52.392 [2024-11-20 07:22:56.757119] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set 00:26:52.392 [2024-11-20 07:22:56.757293] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor 00:26:52.392 [2024-11-20 07:22:56.757466] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:52.392 [2024-11-20 07:22:56.757474] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:52.392 [2024-11-20 07:22:56.757481] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:52.392 [2024-11-20 07:22:56.757500] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:52.392 [2024-11-20 07:22:56.769606] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:52.392 [2024-11-20 07:22:56.769983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.392 [2024-11-20 07:22:56.770000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420 00:26:52.392 [2024-11-20 07:22:56.770007] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set 00:26:52.392 [2024-11-20 07:22:56.770170] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor 00:26:52.392 [2024-11-20 07:22:56.770333] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:52.392 [2024-11-20 07:22:56.770340] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:52.392 [2024-11-20 07:22:56.770346] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:52.392 [2024-11-20 07:22:56.770352] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:52.392 [2024-11-20 07:22:56.782501] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:52.392 [2024-11-20 07:22:56.782893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.392 [2024-11-20 07:22:56.782929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420 00:26:52.392 [2024-11-20 07:22:56.782968] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set 00:26:52.392 [2024-11-20 07:22:56.783491] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor 00:26:52.392 [2024-11-20 07:22:56.783667] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:52.392 [2024-11-20 07:22:56.783675] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:52.392 [2024-11-20 07:22:56.783682] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:52.392 [2024-11-20 07:22:56.783688] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:52.392 [2024-11-20 07:22:56.795374] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:52.392 [2024-11-20 07:22:56.795809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.392 [2024-11-20 07:22:56.795853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420 00:26:52.392 [2024-11-20 07:22:56.795876] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set 00:26:52.392 [2024-11-20 07:22:56.796443] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor 00:26:52.392 [2024-11-20 07:22:56.796835] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:52.392 [2024-11-20 07:22:56.796852] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:52.392 [2024-11-20 07:22:56.796866] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:52.392 [2024-11-20 07:22:56.796880] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:52.392 [2024-11-20 07:22:56.810473] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:52.392 [2024-11-20 07:22:56.810898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.392 [2024-11-20 07:22:56.810919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420 00:26:52.392 [2024-11-20 07:22:56.810929] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set 00:26:52.392 [2024-11-20 07:22:56.811190] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor 00:26:52.392 [2024-11-20 07:22:56.811446] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:52.392 [2024-11-20 07:22:56.811456] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:52.392 [2024-11-20 07:22:56.811466] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:52.392 [2024-11-20 07:22:56.811475] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:52.392 [2024-11-20 07:22:56.823465] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:52.392 [2024-11-20 07:22:56.823869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.392 [2024-11-20 07:22:56.823886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420 00:26:52.393 [2024-11-20 07:22:56.823893] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set 00:26:52.393 [2024-11-20 07:22:56.824072] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor 00:26:52.393 [2024-11-20 07:22:56.824245] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:52.393 [2024-11-20 07:22:56.824253] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:52.393 [2024-11-20 07:22:56.824265] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:52.393 [2024-11-20 07:22:56.824272] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:52.393 [2024-11-20 07:22:56.836355] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:52.393 [2024-11-20 07:22:56.836782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:52.393 [2024-11-20 07:22:56.836827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420
00:26:52.393 [2024-11-20 07:22:56.836849] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set
00:26:52.393 [2024-11-20 07:22:56.837402] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor
00:26:52.393 [2024-11-20 07:22:56.837792] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:52.393 [2024-11-20 07:22:56.837809] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:52.393 [2024-11-20 07:22:56.837823] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:52.393 [2024-11-20 07:22:56.837837] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:52.393 [2024-11-20 07:22:56.851260] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:52.393 [2024-11-20 07:22:56.851758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:52.393 [2024-11-20 07:22:56.851779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420
00:26:52.393 [2024-11-20 07:22:56.851789] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set
00:26:52.393 [2024-11-20 07:22:56.852049] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor
00:26:52.393 [2024-11-20 07:22:56.852305] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:52.393 [2024-11-20 07:22:56.852316] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:52.393 [2024-11-20 07:22:56.852325] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:52.393 [2024-11-20 07:22:56.852334] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:52.393 [2024-11-20 07:22:56.864353] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:52.393 [2024-11-20 07:22:56.864789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:52.393 [2024-11-20 07:22:56.864806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420
00:26:52.393 [2024-11-20 07:22:56.864813] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set
00:26:52.393 [2024-11-20 07:22:56.864996] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor
00:26:52.393 [2024-11-20 07:22:56.865174] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:52.393 [2024-11-20 07:22:56.865182] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:52.393 [2024-11-20 07:22:56.865189] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:52.393 [2024-11-20 07:22:56.865196] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:52.393 [2024-11-20 07:22:56.877428] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:52.393 [2024-11-20 07:22:56.877842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:52.393 [2024-11-20 07:22:56.877859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420
00:26:52.393 [2024-11-20 07:22:56.877867] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set
00:26:52.393 [2024-11-20 07:22:56.878049] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor
00:26:52.393 [2024-11-20 07:22:56.878228] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:52.393 [2024-11-20 07:22:56.878236] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:52.393 [2024-11-20 07:22:56.878243] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:52.393 [2024-11-20 07:22:56.878250] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:52.393 [2024-11-20 07:22:56.890464] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:52.393 [2024-11-20 07:22:56.890900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:52.393 [2024-11-20 07:22:56.890917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420
00:26:52.393 [2024-11-20 07:22:56.890925] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set
00:26:52.393 [2024-11-20 07:22:56.891108] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor
00:26:52.393 [2024-11-20 07:22:56.891285] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:52.393 [2024-11-20 07:22:56.891294] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:52.393 [2024-11-20 07:22:56.891300] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:52.393 [2024-11-20 07:22:56.891307] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:52.393 [2024-11-20 07:22:56.903313] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:52.393 [2024-11-20 07:22:56.903735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:52.393 [2024-11-20 07:22:56.903752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420
00:26:52.393 [2024-11-20 07:22:56.903759] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set
00:26:52.393 [2024-11-20 07:22:56.903931] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor
00:26:52.393 [2024-11-20 07:22:56.904109] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:52.393 [2024-11-20 07:22:56.904118] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:52.393 [2024-11-20 07:22:56.904125] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:52.393 [2024-11-20 07:22:56.904131] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:52.393 [2024-11-20 07:22:56.916118] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:52.393 [2024-11-20 07:22:56.916552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:52.393 [2024-11-20 07:22:56.916595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420
00:26:52.393 [2024-11-20 07:22:56.916626] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set
00:26:52.393 [2024-11-20 07:22:56.917222] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor
00:26:52.393 [2024-11-20 07:22:56.917712] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:52.393 [2024-11-20 07:22:56.917720] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:52.393 [2024-11-20 07:22:56.917727] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:52.393 [2024-11-20 07:22:56.917733] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:52.393 [2024-11-20 07:22:56.928963] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:52.393 [2024-11-20 07:22:56.929389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:52.393 [2024-11-20 07:22:56.929406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420
00:26:52.393 [2024-11-20 07:22:56.929413] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set
00:26:52.394 [2024-11-20 07:22:56.929584] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor
00:26:52.394 [2024-11-20 07:22:56.929758] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:52.394 [2024-11-20 07:22:56.929766] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:52.394 [2024-11-20 07:22:56.929772] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:52.394 [2024-11-20 07:22:56.929779] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:52.654 [2024-11-20 07:22:56.941970] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:52.654 [2024-11-20 07:22:56.942320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:52.654 [2024-11-20 07:22:56.942336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420
00:26:52.654 [2024-11-20 07:22:56.942343] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set
00:26:52.654 [2024-11-20 07:22:56.942515] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor
00:26:52.654 [2024-11-20 07:22:56.942688] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:52.654 [2024-11-20 07:22:56.942696] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:52.654 [2024-11-20 07:22:56.942702] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:52.654 [2024-11-20 07:22:56.942709] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:52.654 [2024-11-20 07:22:56.954871] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:52.654 [2024-11-20 07:22:56.955230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:52.654 [2024-11-20 07:22:56.955247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420
00:26:52.654 [2024-11-20 07:22:56.955254] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set
00:26:52.654 [2024-11-20 07:22:56.955427] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor
00:26:52.654 [2024-11-20 07:22:56.955603] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:52.654 [2024-11-20 07:22:56.955611] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:52.654 [2024-11-20 07:22:56.955617] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:52.654 [2024-11-20 07:22:56.955623] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:52.654 [2024-11-20 07:22:56.967803] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:52.654 [2024-11-20 07:22:56.968219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:52.654 [2024-11-20 07:22:56.968237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420
00:26:52.654 [2024-11-20 07:22:56.968244] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set
00:26:52.654 [2024-11-20 07:22:56.968416] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor
00:26:52.654 [2024-11-20 07:22:56.968589] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:52.654 [2024-11-20 07:22:56.968597] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:52.654 [2024-11-20 07:22:56.968603] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:52.654 [2024-11-20 07:22:56.968609] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:52.654 [2024-11-20 07:22:56.980612] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:52.654 [2024-11-20 07:22:56.981031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:52.654 [2024-11-20 07:22:56.981048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420
00:26:52.654 [2024-11-20 07:22:56.981055] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set
00:26:52.654 [2024-11-20 07:22:56.981228] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor
00:26:52.654 [2024-11-20 07:22:56.981400] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:52.654 [2024-11-20 07:22:56.981408] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:52.654 [2024-11-20 07:22:56.981414] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:52.654 [2024-11-20 07:22:56.981421] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:52.654 [2024-11-20 07:22:56.993475] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:52.654 [2024-11-20 07:22:56.993879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:52.654 [2024-11-20 07:22:56.993923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420
00:26:52.654 [2024-11-20 07:22:56.993961] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set
00:26:52.655 [2024-11-20 07:22:56.994541] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor
00:26:52.655 [2024-11-20 07:22:56.995014] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:52.655 [2024-11-20 07:22:56.995022] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:52.655 [2024-11-20 07:22:56.995032] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:52.655 [2024-11-20 07:22:56.995039] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:52.655 [2024-11-20 07:22:57.006544] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:52.655 [2024-11-20 07:22:57.006995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:52.655 [2024-11-20 07:22:57.007041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420
00:26:52.655 [2024-11-20 07:22:57.007064] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set
00:26:52.655 [2024-11-20 07:22:57.007645] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor
00:26:52.655 [2024-11-20 07:22:57.007847] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:52.655 [2024-11-20 07:22:57.007855] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:52.655 [2024-11-20 07:22:57.007862] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:52.655 [2024-11-20 07:22:57.007868] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:52.655 [2024-11-20 07:22:57.019405] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:52.655 [2024-11-20 07:22:57.019787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:52.655 [2024-11-20 07:22:57.019832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420
00:26:52.655 [2024-11-20 07:22:57.019855] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set
00:26:52.655 [2024-11-20 07:22:57.020450] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor
00:26:52.655 [2024-11-20 07:22:57.021010] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:52.655 [2024-11-20 07:22:57.021018] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:52.655 [2024-11-20 07:22:57.021024] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:52.655 [2024-11-20 07:22:57.021031] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:52.655 [2024-11-20 07:22:57.032250] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:52.655 [2024-11-20 07:22:57.032683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:52.655 [2024-11-20 07:22:57.032726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420
00:26:52.655 [2024-11-20 07:22:57.032748] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set
00:26:52.655 [2024-11-20 07:22:57.033170] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor
00:26:52.655 [2024-11-20 07:22:57.033344] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:52.655 [2024-11-20 07:22:57.033352] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:52.655 [2024-11-20 07:22:57.033358] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:52.655 [2024-11-20 07:22:57.033365] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:52.655 [2024-11-20 07:22:57.045046] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:52.655 [2024-11-20 07:22:57.045437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:52.655 [2024-11-20 07:22:57.045453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420
00:26:52.655 [2024-11-20 07:22:57.045460] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set
00:26:52.655 [2024-11-20 07:22:57.045622] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor
00:26:52.655 [2024-11-20 07:22:57.045784] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:52.655 [2024-11-20 07:22:57.045792] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:52.655 [2024-11-20 07:22:57.045797] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:52.655 [2024-11-20 07:22:57.045804] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:52.655 [2024-11-20 07:22:57.058054] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:52.655 [2024-11-20 07:22:57.058495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:52.655 [2024-11-20 07:22:57.058511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420
00:26:52.655 [2024-11-20 07:22:57.058518] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set
00:26:52.655 [2024-11-20 07:22:57.058690] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor
00:26:52.655 [2024-11-20 07:22:57.058862] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:52.655 [2024-11-20 07:22:57.058870] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:52.655 [2024-11-20 07:22:57.058876] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:52.655 [2024-11-20 07:22:57.058882] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:52.655 [2024-11-20 07:22:57.070938] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:52.655 [2024-11-20 07:22:57.071357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:52.655 [2024-11-20 07:22:57.071373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420
00:26:52.655 [2024-11-20 07:22:57.071380] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set
00:26:52.655 [2024-11-20 07:22:57.071543] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor
00:26:52.655 [2024-11-20 07:22:57.071707] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:52.655 [2024-11-20 07:22:57.071714] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:52.655 [2024-11-20 07:22:57.071720] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:52.655 [2024-11-20 07:22:57.071726] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:52.655 [2024-11-20 07:22:57.083884] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:52.655 [2024-11-20 07:22:57.084284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:52.655 [2024-11-20 07:22:57.084301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420
00:26:52.655 [2024-11-20 07:22:57.084311] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set
00:26:52.655 [2024-11-20 07:22:57.084484] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor
00:26:52.655 [2024-11-20 07:22:57.084658] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:52.655 [2024-11-20 07:22:57.084666] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:52.655 [2024-11-20 07:22:57.084672] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:52.655 [2024-11-20 07:22:57.084679] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:52.655 [2024-11-20 07:22:57.096730] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:52.655 [2024-11-20 07:22:57.097141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:52.655 [2024-11-20 07:22:57.097187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420
00:26:52.655 [2024-11-20 07:22:57.097210] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set
00:26:52.656 [2024-11-20 07:22:57.097790] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor
00:26:52.656 [2024-11-20 07:22:57.098237] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:52.656 [2024-11-20 07:22:57.098246] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:52.656 [2024-11-20 07:22:57.098253] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:52.656 [2024-11-20 07:22:57.098259] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:52.656 [2024-11-20 07:22:57.109757] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:52.656 [2024-11-20 07:22:57.110185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:52.656 [2024-11-20 07:22:57.110231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420
00:26:52.656 [2024-11-20 07:22:57.110254] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set
00:26:52.656 [2024-11-20 07:22:57.110783] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor
00:26:52.656 [2024-11-20 07:22:57.111157] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:52.656 [2024-11-20 07:22:57.111173] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:52.656 [2024-11-20 07:22:57.111187] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:52.656 [2024-11-20 07:22:57.111200] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:52.656 [2024-11-20 07:22:57.124395] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:52.656 [2024-11-20 07:22:57.124888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:52.656 [2024-11-20 07:22:57.124909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420
00:26:52.656 [2024-11-20 07:22:57.124919] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set
00:26:52.656 [2024-11-20 07:22:57.125169] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor
00:26:52.656 [2024-11-20 07:22:57.125418] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:52.656 [2024-11-20 07:22:57.125429] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:52.656 [2024-11-20 07:22:57.125438] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:52.656 [2024-11-20 07:22:57.125446] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:52.656 [2024-11-20 07:22:57.137463] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:52.656 [2024-11-20 07:22:57.137834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:52.656 [2024-11-20 07:22:57.137878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420
00:26:52.656 [2024-11-20 07:22:57.137901] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set
00:26:52.656 [2024-11-20 07:22:57.138158] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor
00:26:52.656 [2024-11-20 07:22:57.138336] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:52.656 [2024-11-20 07:22:57.138344] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:52.656 [2024-11-20 07:22:57.138350] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:52.656 [2024-11-20 07:22:57.138357] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:52.656 [2024-11-20 07:22:57.150573] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:52.656 [2024-11-20 07:22:57.150980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.656 [2024-11-20 07:22:57.150997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420 00:26:52.656 [2024-11-20 07:22:57.151004] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set 00:26:52.656 [2024-11-20 07:22:57.151176] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor 00:26:52.656 [2024-11-20 07:22:57.151348] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:52.656 [2024-11-20 07:22:57.151356] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:52.656 [2024-11-20 07:22:57.151363] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:52.656 [2024-11-20 07:22:57.151369] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:52.656 [2024-11-20 07:22:57.163503] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:52.656 [2024-11-20 07:22:57.163898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.656 [2024-11-20 07:22:57.163914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420 00:26:52.656 [2024-11-20 07:22:57.163920] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set 00:26:52.656 [2024-11-20 07:22:57.164111] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor 00:26:52.656 [2024-11-20 07:22:57.164283] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:52.656 [2024-11-20 07:22:57.164291] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:52.656 [2024-11-20 07:22:57.164300] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:52.656 [2024-11-20 07:22:57.164306] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:52.656 [2024-11-20 07:22:57.176314] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:52.656 [2024-11-20 07:22:57.176705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:52.656 [2024-11-20 07:22:57.176722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420
00:26:52.656 [2024-11-20 07:22:57.176729] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set
00:26:52.656 [2024-11-20 07:22:57.176891] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor
00:26:52.656 [2024-11-20 07:22:57.177080] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:52.656 [2024-11-20 07:22:57.177089] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:52.656 [2024-11-20 07:22:57.177095] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:52.656 [2024-11-20 07:22:57.177101] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:52.656 [2024-11-20 07:22:57.189138] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:52.656 [2024-11-20 07:22:57.189527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:52.656 [2024-11-20 07:22:57.189543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420
00:26:52.656 [2024-11-20 07:22:57.189550] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set
00:26:52.656 [2024-11-20 07:22:57.189713] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor
00:26:52.656 [2024-11-20 07:22:57.189875] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:52.656 [2024-11-20 07:22:57.189882] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:52.656 [2024-11-20 07:22:57.189888] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:52.656 [2024-11-20 07:22:57.189894] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:52.656 [2024-11-20 07:22:57.202214] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:52.656 [2024-11-20 07:22:57.202629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:52.656 [2024-11-20 07:22:57.202647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420
00:26:52.656 [2024-11-20 07:22:57.202654] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set
00:26:52.917 [2024-11-20 07:22:57.202831] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor
00:26:52.917 [2024-11-20 07:22:57.203016] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:52.917 [2024-11-20 07:22:57.203025] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:52.917 [2024-11-20 07:22:57.203032] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:52.917 [2024-11-20 07:22:57.203038] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:52.917 [2024-11-20 07:22:57.215129] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:52.917 [2024-11-20 07:22:57.215519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:52.917 [2024-11-20 07:22:57.215536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420
00:26:52.917 [2024-11-20 07:22:57.215543] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set
00:26:52.917 [2024-11-20 07:22:57.215705] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor
00:26:52.917 [2024-11-20 07:22:57.215867] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:52.917 [2024-11-20 07:22:57.215875] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:52.917 [2024-11-20 07:22:57.215881] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:52.917 [2024-11-20 07:22:57.215887] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:52.917 [2024-11-20 07:22:57.227998] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:52.918 [2024-11-20 07:22:57.228421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:52.918 [2024-11-20 07:22:57.228437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420
00:26:52.918 [2024-11-20 07:22:57.228444] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set
00:26:52.918 [2024-11-20 07:22:57.228616] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor
00:26:52.918 [2024-11-20 07:22:57.228791] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:52.918 [2024-11-20 07:22:57.228799] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:52.918 [2024-11-20 07:22:57.228806] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:52.918 [2024-11-20 07:22:57.228812] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:52.918 [2024-11-20 07:22:57.240800] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:52.918 [2024-11-20 07:22:57.241217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:52.918 [2024-11-20 07:22:57.241235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420
00:26:52.918 [2024-11-20 07:22:57.241242] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set
00:26:52.918 [2024-11-20 07:22:57.241414] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor
00:26:52.918 [2024-11-20 07:22:57.241585] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:52.918 [2024-11-20 07:22:57.241593] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:52.918 [2024-11-20 07:22:57.241599] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:52.918 [2024-11-20 07:22:57.241606] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:52.918 [2024-11-20 07:22:57.253608] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:52.918 [2024-11-20 07:22:57.254000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:52.918 [2024-11-20 07:22:57.254017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420
00:26:52.918 [2024-11-20 07:22:57.254027] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set
00:26:52.918 [2024-11-20 07:22:57.254200] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor
00:26:52.918 [2024-11-20 07:22:57.254373] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:52.918 [2024-11-20 07:22:57.254381] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:52.918 [2024-11-20 07:22:57.254387] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:52.918 [2024-11-20 07:22:57.254393] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:52.918 [2024-11-20 07:22:57.266511] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:52.918 [2024-11-20 07:22:57.266958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:52.918 [2024-11-20 07:22:57.266976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420
00:26:52.918 [2024-11-20 07:22:57.266983] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set
00:26:52.918 [2024-11-20 07:22:57.267154] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor
00:26:52.918 [2024-11-20 07:22:57.267328] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:52.918 [2024-11-20 07:22:57.267337] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:52.918 [2024-11-20 07:22:57.267343] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:52.918 [2024-11-20 07:22:57.267349] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:52.918 [2024-11-20 07:22:57.279480] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:52.918 [2024-11-20 07:22:57.279937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:52.918 [2024-11-20 07:22:57.279992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420
00:26:52.918 [2024-11-20 07:22:57.280015] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set
00:26:52.918 [2024-11-20 07:22:57.280495] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor
00:26:52.918 [2024-11-20 07:22:57.280669] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:52.918 [2024-11-20 07:22:57.280677] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:52.918 [2024-11-20 07:22:57.280683] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:52.918 [2024-11-20 07:22:57.280689] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:52.918 [2024-11-20 07:22:57.292370] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:52.918 [2024-11-20 07:22:57.292797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:52.918 [2024-11-20 07:22:57.292813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420
00:26:52.918 [2024-11-20 07:22:57.292820] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set
00:26:52.918 [2024-11-20 07:22:57.292996] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor
00:26:52.918 [2024-11-20 07:22:57.293172] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:52.918 [2024-11-20 07:22:57.293181] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:52.918 [2024-11-20 07:22:57.293188] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:52.918 [2024-11-20 07:22:57.293194] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:52.918 [2024-11-20 07:22:57.305399] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:52.918 [2024-11-20 07:22:57.305849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:52.918 [2024-11-20 07:22:57.305891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420
00:26:52.918 [2024-11-20 07:22:57.305914] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set
00:26:52.918 [2024-11-20 07:22:57.306384] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor
00:26:52.918 [2024-11-20 07:22:57.306558] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:52.918 [2024-11-20 07:22:57.306567] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:52.918 [2024-11-20 07:22:57.306573] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:52.918 [2024-11-20 07:22:57.306580] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:52.918 [2024-11-20 07:22:57.318530] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:52.918 [2024-11-20 07:22:57.318965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:52.918 [2024-11-20 07:22:57.318983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420
00:26:52.918 [2024-11-20 07:22:57.318990] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set
00:26:52.918 [2024-11-20 07:22:57.319168] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor
00:26:52.918 [2024-11-20 07:22:57.319346] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:52.918 [2024-11-20 07:22:57.319355] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:52.918 [2024-11-20 07:22:57.319361] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:52.918 [2024-11-20 07:22:57.319368] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:52.918 [2024-11-20 07:22:57.331557] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:52.918 [2024-11-20 07:22:57.331988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:52.919 [2024-11-20 07:22:57.332005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420
00:26:52.919 [2024-11-20 07:22:57.332013] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set
00:26:52.919 [2024-11-20 07:22:57.332184] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor
00:26:52.919 [2024-11-20 07:22:57.332357] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:52.919 [2024-11-20 07:22:57.332366] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:52.919 [2024-11-20 07:22:57.332375] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:52.919 [2024-11-20 07:22:57.332382] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:52.919 [2024-11-20 07:22:57.344542] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:52.919 [2024-11-20 07:22:57.344980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:52.919 [2024-11-20 07:22:57.344997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420
00:26:52.919 [2024-11-20 07:22:57.345003] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set
00:26:52.919 [2024-11-20 07:22:57.345184] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor
00:26:52.919 [2024-11-20 07:22:57.345347] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:52.919 [2024-11-20 07:22:57.345355] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:52.919 [2024-11-20 07:22:57.345361] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:52.919 [2024-11-20 07:22:57.345367] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:52.919 [2024-11-20 07:22:57.357486] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:52.919 [2024-11-20 07:22:57.357941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:52.919 [2024-11-20 07:22:57.358000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420
00:26:52.919 [2024-11-20 07:22:57.358023] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set
00:26:52.919 [2024-11-20 07:22:57.358547] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor
00:26:52.919 [2024-11-20 07:22:57.358720] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:52.919 [2024-11-20 07:22:57.358729] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:52.919 [2024-11-20 07:22:57.358735] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:52.919 [2024-11-20 07:22:57.358742] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:52.919 [2024-11-20 07:22:57.370461] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:52.919 [2024-11-20 07:22:57.370859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:52.919 [2024-11-20 07:22:57.370875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420
00:26:52.919 [2024-11-20 07:22:57.370882] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set
00:26:52.919 [2024-11-20 07:22:57.371057] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor
00:26:52.919 [2024-11-20 07:22:57.371235] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:52.919 [2024-11-20 07:22:57.371244] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:52.919 [2024-11-20 07:22:57.371250] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:52.919 [2024-11-20 07:22:57.371269] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:52.919 [2024-11-20 07:22:57.383352] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:52.919 [2024-11-20 07:22:57.383739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:52.919 [2024-11-20 07:22:57.383756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420
00:26:52.919 [2024-11-20 07:22:57.383763] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set
00:26:52.919 [2024-11-20 07:22:57.383940] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor
00:26:52.919 [2024-11-20 07:22:57.384222] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:52.919 [2024-11-20 07:22:57.384232] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:52.919 [2024-11-20 07:22:57.384239] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:52.919 [2024-11-20 07:22:57.384246] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:52.919 [2024-11-20 07:22:57.396480] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:52.919 [2024-11-20 07:22:57.396933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:52.919 [2024-11-20 07:22:57.396955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420
00:26:52.919 [2024-11-20 07:22:57.396963] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set
00:26:52.919 [2024-11-20 07:22:57.397141] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor
00:26:52.919 [2024-11-20 07:22:57.397318] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:52.919 [2024-11-20 07:22:57.397327] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:52.919 [2024-11-20 07:22:57.397334] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:52.919 [2024-11-20 07:22:57.397340] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:52.919 [2024-11-20 07:22:57.409516] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:52.919 [2024-11-20 07:22:57.409882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:52.919 [2024-11-20 07:22:57.409898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420
00:26:52.919 [2024-11-20 07:22:57.409905] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set
00:26:52.919 [2024-11-20 07:22:57.410085] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor
00:26:52.919 [2024-11-20 07:22:57.410259] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:52.919 [2024-11-20 07:22:57.410266] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:52.919 [2024-11-20 07:22:57.410272] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:52.919 [2024-11-20 07:22:57.410279] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:52.919 [2024-11-20 07:22:57.422565] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:52.919 [2024-11-20 07:22:57.422946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:52.919 [2024-11-20 07:22:57.423004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420
00:26:52.919 [2024-11-20 07:22:57.423035] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set
00:26:52.919 [2024-11-20 07:22:57.423598] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor
00:26:52.919 [2024-11-20 07:22:57.423771] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:52.919 [2024-11-20 07:22:57.423779] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:52.919 [2024-11-20 07:22:57.423785] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:52.919 [2024-11-20 07:22:57.423792] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:52.919 [2024-11-20 07:22:57.437736] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:52.919 [2024-11-20 07:22:57.438193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:52.919 [2024-11-20 07:22:57.438215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420
00:26:52.920 [2024-11-20 07:22:57.438225] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set
00:26:52.920 [2024-11-20 07:22:57.438478] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor
00:26:52.920 [2024-11-20 07:22:57.438733] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:52.920 [2024-11-20 07:22:57.438745] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:52.920 [2024-11-20 07:22:57.438754] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:52.920 [2024-11-20 07:22:57.438763] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:52.920 [2024-11-20 07:22:57.450737] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:52.920 [2024-11-20 07:22:57.451104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:52.920 [2024-11-20 07:22:57.451147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420
00:26:52.920 [2024-11-20 07:22:57.451170] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set
00:26:52.920 [2024-11-20 07:22:57.451679] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor
00:26:52.920 [2024-11-20 07:22:57.451851] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:52.920 [2024-11-20 07:22:57.451859] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:52.920 [2024-11-20 07:22:57.451866] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:52.920 [2024-11-20 07:22:57.451872] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:52.920 [2024-11-20 07:22:57.463891] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:52.920 [2024-11-20 07:22:57.464264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:52.920 [2024-11-20 07:22:57.464282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420
00:26:52.920 [2024-11-20 07:22:57.464289] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set
00:26:52.920 [2024-11-20 07:22:57.464467] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor
00:26:52.920 [2024-11-20 07:22:57.464651] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:52.920 [2024-11-20 07:22:57.464659] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:52.920 [2024-11-20 07:22:57.464666] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:52.920 [2024-11-20 07:22:57.464673] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:53.180 [2024-11-20 07:22:57.476858] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:53.180 [2024-11-20 07:22:57.477243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.180 [2024-11-20 07:22:57.477260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420
00:26:53.180 [2024-11-20 07:22:57.477268] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set
00:26:53.180 [2024-11-20 07:22:57.477439] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor
00:26:53.180 [2024-11-20 07:22:57.477611] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:53.180 [2024-11-20 07:22:57.477620] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:53.180 [2024-11-20 07:22:57.477626] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:53.180 [2024-11-20 07:22:57.477633] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:53.180 [2024-11-20 07:22:57.489855] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.180 [2024-11-20 07:22:57.490256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.180 [2024-11-20 07:22:57.490273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420 00:26:53.180 [2024-11-20 07:22:57.490280] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set 00:26:53.180 [2024-11-20 07:22:57.490453] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor 00:26:53.180 [2024-11-20 07:22:57.490626] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.180 [2024-11-20 07:22:57.490634] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.180 [2024-11-20 07:22:57.490640] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.180 [2024-11-20 07:22:57.490646] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.180 [2024-11-20 07:22:57.502837] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.180 [2024-11-20 07:22:57.503193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.180 [2024-11-20 07:22:57.503252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420 00:26:53.180 [2024-11-20 07:22:57.503277] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set 00:26:53.180 [2024-11-20 07:22:57.503859] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor 00:26:53.180 [2024-11-20 07:22:57.504336] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.180 [2024-11-20 07:22:57.504344] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.180 [2024-11-20 07:22:57.504354] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.180 [2024-11-20 07:22:57.504361] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.180 [2024-11-20 07:22:57.515745] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.180 [2024-11-20 07:22:57.516192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.180 [2024-11-20 07:22:57.516210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420 00:26:53.180 [2024-11-20 07:22:57.516217] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set 00:26:53.180 [2024-11-20 07:22:57.516390] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor 00:26:53.180 [2024-11-20 07:22:57.516562] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.180 [2024-11-20 07:22:57.516570] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.180 [2024-11-20 07:22:57.516577] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.180 [2024-11-20 07:22:57.516583] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.180 [2024-11-20 07:22:57.528771] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.180 [2024-11-20 07:22:57.529131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.180 [2024-11-20 07:22:57.529148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420 00:26:53.180 [2024-11-20 07:22:57.529155] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set 00:26:53.180 [2024-11-20 07:22:57.529327] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor 00:26:53.180 [2024-11-20 07:22:57.529499] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.181 [2024-11-20 07:22:57.529507] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.181 [2024-11-20 07:22:57.529513] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.181 [2024-11-20 07:22:57.529520] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.181 [2024-11-20 07:22:57.541834] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.181 [2024-11-20 07:22:57.542129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.181 [2024-11-20 07:22:57.542145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420 00:26:53.181 [2024-11-20 07:22:57.542152] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set 00:26:53.181 [2024-11-20 07:22:57.542325] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor 00:26:53.181 [2024-11-20 07:22:57.542497] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.181 [2024-11-20 07:22:57.542505] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.181 [2024-11-20 07:22:57.542511] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.181 [2024-11-20 07:22:57.542517] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.181 [2024-11-20 07:22:57.554746] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.181 [2024-11-20 07:22:57.555126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.181 [2024-11-20 07:22:57.555142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420 00:26:53.181 [2024-11-20 07:22:57.555149] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set 00:26:53.181 [2024-11-20 07:22:57.555321] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor 00:26:53.181 [2024-11-20 07:22:57.555493] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.181 [2024-11-20 07:22:57.555502] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.181 [2024-11-20 07:22:57.555508] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.181 [2024-11-20 07:22:57.555515] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.181 [2024-11-20 07:22:57.567721] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.181 [2024-11-20 07:22:57.567998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.181 [2024-11-20 07:22:57.568015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420 00:26:53.181 [2024-11-20 07:22:57.568022] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set 00:26:53.181 [2024-11-20 07:22:57.568194] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor 00:26:53.181 [2024-11-20 07:22:57.568374] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.181 [2024-11-20 07:22:57.568382] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.181 [2024-11-20 07:22:57.568388] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.181 [2024-11-20 07:22:57.568394] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.181 [2024-11-20 07:22:57.580754] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.181 [2024-11-20 07:22:57.581125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.181 [2024-11-20 07:22:57.581142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420 00:26:53.181 [2024-11-20 07:22:57.581150] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set 00:26:53.181 [2024-11-20 07:22:57.581321] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor 00:26:53.181 [2024-11-20 07:22:57.581494] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.181 [2024-11-20 07:22:57.581502] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.181 [2024-11-20 07:22:57.581508] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.181 [2024-11-20 07:22:57.581514] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.181 [2024-11-20 07:22:57.593791] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.181 [2024-11-20 07:22:57.594167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.181 [2024-11-20 07:22:57.594184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420 00:26:53.181 [2024-11-20 07:22:57.594194] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set 00:26:53.181 [2024-11-20 07:22:57.594365] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor 00:26:53.181 [2024-11-20 07:22:57.594539] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.181 [2024-11-20 07:22:57.594547] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.181 [2024-11-20 07:22:57.594553] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.181 [2024-11-20 07:22:57.594560] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.181 [2024-11-20 07:22:57.606899] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.181 [2024-11-20 07:22:57.607201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.181 [2024-11-20 07:22:57.607218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420 00:26:53.181 [2024-11-20 07:22:57.607225] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set 00:26:53.181 [2024-11-20 07:22:57.607402] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor 00:26:53.181 [2024-11-20 07:22:57.607582] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.181 [2024-11-20 07:22:57.607590] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.181 [2024-11-20 07:22:57.607597] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.181 [2024-11-20 07:22:57.607604] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.181 [2024-11-20 07:22:57.620025] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.181 [2024-11-20 07:22:57.620460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.181 [2024-11-20 07:22:57.620477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420 00:26:53.181 [2024-11-20 07:22:57.620485] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set 00:26:53.181 [2024-11-20 07:22:57.620662] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor 00:26:53.181 [2024-11-20 07:22:57.620841] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.181 [2024-11-20 07:22:57.620850] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.181 [2024-11-20 07:22:57.620857] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.181 [2024-11-20 07:22:57.620863] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.181 [2024-11-20 07:22:57.633096] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.181 [2024-11-20 07:22:57.633453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.181 [2024-11-20 07:22:57.633470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420 00:26:53.181 [2024-11-20 07:22:57.633478] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set 00:26:53.181 [2024-11-20 07:22:57.633655] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor 00:26:53.181 [2024-11-20 07:22:57.633837] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.181 [2024-11-20 07:22:57.633846] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.182 [2024-11-20 07:22:57.633853] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.182 [2024-11-20 07:22:57.633860] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.182 [2024-11-20 07:22:57.646245] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.182 [2024-11-20 07:22:57.646659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.182 [2024-11-20 07:22:57.646676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420 00:26:53.182 [2024-11-20 07:22:57.646683] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set 00:26:53.182 [2024-11-20 07:22:57.646860] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor 00:26:53.182 [2024-11-20 07:22:57.647087] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.182 [2024-11-20 07:22:57.647097] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.182 [2024-11-20 07:22:57.647104] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.182 [2024-11-20 07:22:57.647111] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.182 [2024-11-20 07:22:57.659339] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.182 [2024-11-20 07:22:57.659684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.182 [2024-11-20 07:22:57.659701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420 00:26:53.182 [2024-11-20 07:22:57.659708] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set 00:26:53.182 [2024-11-20 07:22:57.659885] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor 00:26:53.182 [2024-11-20 07:22:57.660072] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.182 [2024-11-20 07:22:57.660081] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.182 [2024-11-20 07:22:57.660088] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.182 [2024-11-20 07:22:57.660094] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.182 [2024-11-20 07:22:57.672458] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.182 [2024-11-20 07:22:57.672847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.182 [2024-11-20 07:22:57.672863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420 00:26:53.182 [2024-11-20 07:22:57.672870] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set 00:26:53.182 [2024-11-20 07:22:57.673050] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor 00:26:53.182 [2024-11-20 07:22:57.673224] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.182 [2024-11-20 07:22:57.673232] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.182 [2024-11-20 07:22:57.673242] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.182 [2024-11-20 07:22:57.673248] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.182 7457.50 IOPS, 29.13 MiB/s [2024-11-20T06:22:57.738Z] [2024-11-20 07:22:57.685420] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.182 [2024-11-20 07:22:57.685704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.182 [2024-11-20 07:22:57.685721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420 00:26:53.182 [2024-11-20 07:22:57.685728] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set 00:26:53.182 [2024-11-20 07:22:57.685899] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor 00:26:53.182 [2024-11-20 07:22:57.686079] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.182 [2024-11-20 07:22:57.686088] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.182 [2024-11-20 07:22:57.686094] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.182 [2024-11-20 07:22:57.686101] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.182 [2024-11-20 07:22:57.698397] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.182 [2024-11-20 07:22:57.698697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.182 [2024-11-20 07:22:57.698713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420 00:26:53.182 [2024-11-20 07:22:57.698720] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set 00:26:53.182 [2024-11-20 07:22:57.698892] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor 00:26:53.182 [2024-11-20 07:22:57.699072] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.182 [2024-11-20 07:22:57.699081] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.182 [2024-11-20 07:22:57.699088] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.182 [2024-11-20 07:22:57.699094] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.182 [2024-11-20 07:22:57.711458] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.182 [2024-11-20 07:22:57.711814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.182 [2024-11-20 07:22:57.711832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420 00:26:53.182 [2024-11-20 07:22:57.711839] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set 00:26:53.182 [2024-11-20 07:22:57.712019] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor 00:26:53.182 [2024-11-20 07:22:57.712192] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.182 [2024-11-20 07:22:57.712201] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.182 [2024-11-20 07:22:57.712207] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.182 [2024-11-20 07:22:57.712213] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.182 [2024-11-20 07:22:57.724378] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.182 [2024-11-20 07:22:57.724824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.182 [2024-11-20 07:22:57.724841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420 00:26:53.182 [2024-11-20 07:22:57.724848] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set 00:26:53.182 [2024-11-20 07:22:57.725048] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor 00:26:53.182 [2024-11-20 07:22:57.725226] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.182 [2024-11-20 07:22:57.725235] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.182 [2024-11-20 07:22:57.725241] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.182 [2024-11-20 07:22:57.725248] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.443 [2024-11-20 07:22:57.737442] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.443 [2024-11-20 07:22:57.737826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.444 [2024-11-20 07:22:57.737872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420 00:26:53.444 [2024-11-20 07:22:57.737896] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set 00:26:53.444 [2024-11-20 07:22:57.738493] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor 00:26:53.444 [2024-11-20 07:22:57.738894] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.444 [2024-11-20 07:22:57.738902] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.444 [2024-11-20 07:22:57.738908] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.444 [2024-11-20 07:22:57.738915] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.444 [2024-11-20 07:22:57.750239] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.444 [2024-11-20 07:22:57.750689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.444 [2024-11-20 07:22:57.750706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420 00:26:53.444 [2024-11-20 07:22:57.750713] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set 00:26:53.444 [2024-11-20 07:22:57.750885] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor 00:26:53.444 [2024-11-20 07:22:57.751067] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.444 [2024-11-20 07:22:57.751076] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.444 [2024-11-20 07:22:57.751082] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.444 [2024-11-20 07:22:57.751088] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.444 [2024-11-20 07:22:57.763038] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.444 [2024-11-20 07:22:57.763399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.444 [2024-11-20 07:22:57.763416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420 00:26:53.444 [2024-11-20 07:22:57.763426] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set 00:26:53.444 [2024-11-20 07:22:57.763599] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor 00:26:53.444 [2024-11-20 07:22:57.763773] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.444 [2024-11-20 07:22:57.763781] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.444 [2024-11-20 07:22:57.763787] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.444 [2024-11-20 07:22:57.763793] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.444 [2024-11-20 07:22:57.776063] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.444 [2024-11-20 07:22:57.776509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.444 [2024-11-20 07:22:57.776526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420 00:26:53.444 [2024-11-20 07:22:57.776533] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set 00:26:53.444 [2024-11-20 07:22:57.776706] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor 00:26:53.444 [2024-11-20 07:22:57.776878] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.444 [2024-11-20 07:22:57.776886] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.444 [2024-11-20 07:22:57.776892] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.444 [2024-11-20 07:22:57.776898] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.444 [2024-11-20 07:22:57.788904] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.444 [2024-11-20 07:22:57.789351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.444 [2024-11-20 07:22:57.789368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420 00:26:53.444 [2024-11-20 07:22:57.789375] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set 00:26:53.444 [2024-11-20 07:22:57.789547] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor 00:26:53.444 [2024-11-20 07:22:57.789719] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.444 [2024-11-20 07:22:57.789727] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.444 [2024-11-20 07:22:57.789733] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.444 [2024-11-20 07:22:57.789740] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.444 [2024-11-20 07:22:57.801735] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.444 [2024-11-20 07:22:57.802066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.444 [2024-11-20 07:22:57.802082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420 00:26:53.444 [2024-11-20 07:22:57.802089] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set 00:26:53.444 [2024-11-20 07:22:57.802251] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor 00:26:53.444 [2024-11-20 07:22:57.802418] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.444 [2024-11-20 07:22:57.802425] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.444 [2024-11-20 07:22:57.802431] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.444 [2024-11-20 07:22:57.802437] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.444 [2024-11-20 07:22:57.814615] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.444 [2024-11-20 07:22:57.815023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.444 [2024-11-20 07:22:57.815069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420 00:26:53.444 [2024-11-20 07:22:57.815092] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set 00:26:53.444 [2024-11-20 07:22:57.815672] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor 00:26:53.444 [2024-11-20 07:22:57.815920] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.444 [2024-11-20 07:22:57.815927] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.444 [2024-11-20 07:22:57.815933] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.444 [2024-11-20 07:22:57.815939] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.444 [2024-11-20 07:22:57.827542] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.444 [2024-11-20 07:22:57.827967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.444 [2024-11-20 07:22:57.827983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420 00:26:53.444 [2024-11-20 07:22:57.827990] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set 00:26:53.444 [2024-11-20 07:22:57.828152] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor 00:26:53.444 [2024-11-20 07:22:57.828315] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.444 [2024-11-20 07:22:57.828322] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.444 [2024-11-20 07:22:57.828328] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.444 [2024-11-20 07:22:57.828334] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.445 [2024-11-20 07:22:57.840387] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.445 [2024-11-20 07:22:57.840810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.445 [2024-11-20 07:22:57.840825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420 00:26:53.445 [2024-11-20 07:22:57.840832] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set 00:26:53.445 [2024-11-20 07:22:57.841016] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor 00:26:53.445 [2024-11-20 07:22:57.841190] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.445 [2024-11-20 07:22:57.841198] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.445 [2024-11-20 07:22:57.841207] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.445 [2024-11-20 07:22:57.841214] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.445 [2024-11-20 07:22:57.853207] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.445 [2024-11-20 07:22:57.853624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.445 [2024-11-20 07:22:57.853640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420 00:26:53.445 [2024-11-20 07:22:57.853646] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set 00:26:53.445 [2024-11-20 07:22:57.853808] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor 00:26:53.445 [2024-11-20 07:22:57.853977] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.445 [2024-11-20 07:22:57.853985] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.445 [2024-11-20 07:22:57.853991] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.445 [2024-11-20 07:22:57.853997] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.445 [2024-11-20 07:22:57.866038] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.445 [2024-11-20 07:22:57.866461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.445 [2024-11-20 07:22:57.866477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420 00:26:53.445 [2024-11-20 07:22:57.866484] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set 00:26:53.445 [2024-11-20 07:22:57.866646] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor 00:26:53.445 [2024-11-20 07:22:57.866808] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.445 [2024-11-20 07:22:57.866816] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.445 [2024-11-20 07:22:57.866822] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.445 [2024-11-20 07:22:57.866828] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.445 [2024-11-20 07:22:57.878937] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.445 [2024-11-20 07:22:57.879307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.445 [2024-11-20 07:22:57.879323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420 00:26:53.445 [2024-11-20 07:22:57.879330] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set 00:26:53.445 [2024-11-20 07:22:57.879502] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor 00:26:53.445 [2024-11-20 07:22:57.879675] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.445 [2024-11-20 07:22:57.879683] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.445 [2024-11-20 07:22:57.879690] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.445 [2024-11-20 07:22:57.879697] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.445 [2024-11-20 07:22:57.891858] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.445 [2024-11-20 07:22:57.892297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.445 [2024-11-20 07:22:57.892314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420 00:26:53.445 [2024-11-20 07:22:57.892321] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set 00:26:53.445 [2024-11-20 07:22:57.892499] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor 00:26:53.445 [2024-11-20 07:22:57.892677] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.445 [2024-11-20 07:22:57.892685] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.445 [2024-11-20 07:22:57.892691] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.445 [2024-11-20 07:22:57.892698] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.445 [2024-11-20 07:22:57.904934] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.445 [2024-11-20 07:22:57.905301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.445 [2024-11-20 07:22:57.905318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420 00:26:53.445 [2024-11-20 07:22:57.905325] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set 00:26:53.445 [2024-11-20 07:22:57.905503] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor 00:26:53.445 [2024-11-20 07:22:57.905681] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.445 [2024-11-20 07:22:57.905690] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.445 [2024-11-20 07:22:57.905697] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.445 [2024-11-20 07:22:57.905704] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.445 [2024-11-20 07:22:57.918001] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.445 [2024-11-20 07:22:57.918350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.446 [2024-11-20 07:22:57.918366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420 00:26:53.446 [2024-11-20 07:22:57.918373] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set 00:26:53.446 [2024-11-20 07:22:57.918550] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor 00:26:53.446 [2024-11-20 07:22:57.918730] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.446 [2024-11-20 07:22:57.918739] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.446 [2024-11-20 07:22:57.918745] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.446 [2024-11-20 07:22:57.918752] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.446 [2024-11-20 07:22:57.930915] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.446 [2024-11-20 07:22:57.931339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.446 [2024-11-20 07:22:57.931355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420 00:26:53.446 [2024-11-20 07:22:57.931365] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set 00:26:53.446 [2024-11-20 07:22:57.931528] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor 00:26:53.446 [2024-11-20 07:22:57.931691] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.446 [2024-11-20 07:22:57.931698] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.446 [2024-11-20 07:22:57.931704] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.446 [2024-11-20 07:22:57.931710] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.446 [2024-11-20 07:22:57.943826] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.446 [2024-11-20 07:22:57.944272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.446 [2024-11-20 07:22:57.944289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420 00:26:53.446 [2024-11-20 07:22:57.944297] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set 00:26:53.446 [2024-11-20 07:22:57.944469] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor 00:26:53.446 [2024-11-20 07:22:57.944642] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.446 [2024-11-20 07:22:57.944651] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.446 [2024-11-20 07:22:57.944657] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.446 [2024-11-20 07:22:57.944663] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.446 [2024-11-20 07:22:57.956724] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.446 [2024-11-20 07:22:57.957171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.446 [2024-11-20 07:22:57.957188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420 00:26:53.446 [2024-11-20 07:22:57.957195] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set 00:26:53.446 [2024-11-20 07:22:57.957371] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor 00:26:53.446 [2024-11-20 07:22:57.957534] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.446 [2024-11-20 07:22:57.957542] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.446 [2024-11-20 07:22:57.957548] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.446 [2024-11-20 07:22:57.957554] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.446 [2024-11-20 07:22:57.969619] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.446 [2024-11-20 07:22:57.970014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.446 [2024-11-20 07:22:57.970031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420 00:26:53.446 [2024-11-20 07:22:57.970038] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set 00:26:53.446 [2024-11-20 07:22:57.970202] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor 00:26:53.446 [2024-11-20 07:22:57.970367] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.446 [2024-11-20 07:22:57.970375] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.446 [2024-11-20 07:22:57.970381] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.446 [2024-11-20 07:22:57.970387] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.446 [2024-11-20 07:22:57.982451] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.446 [2024-11-20 07:22:57.982870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.446 [2024-11-20 07:22:57.982886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420 00:26:53.446 [2024-11-20 07:22:57.982894] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set 00:26:53.446 [2024-11-20 07:22:57.983083] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor 00:26:53.446 [2024-11-20 07:22:57.983255] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.446 [2024-11-20 07:22:57.983264] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.446 [2024-11-20 07:22:57.983270] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.446 [2024-11-20 07:22:57.983276] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.707 [2024-11-20 07:22:57.995457] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.707 [2024-11-20 07:22:57.995797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.707 [2024-11-20 07:22:57.995814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420 00:26:53.708 [2024-11-20 07:22:57.995821] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set 00:26:53.708 [2024-11-20 07:22:57.995999] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor 00:26:53.708 [2024-11-20 07:22:57.996173] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.708 [2024-11-20 07:22:57.996181] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.708 [2024-11-20 07:22:57.996188] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.708 [2024-11-20 07:22:57.996194] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.708 [2024-11-20 07:22:58.008298] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.708 [2024-11-20 07:22:58.008720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.708 [2024-11-20 07:22:58.008737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420 00:26:53.708 [2024-11-20 07:22:58.008744] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set 00:26:53.708 [2024-11-20 07:22:58.008916] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor 00:26:53.708 [2024-11-20 07:22:58.009094] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.708 [2024-11-20 07:22:58.009103] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.708 [2024-11-20 07:22:58.009112] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.708 [2024-11-20 07:22:58.009119] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.708 [2024-11-20 07:22:58.021112] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.708 [2024-11-20 07:22:58.021525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.708 [2024-11-20 07:22:58.021542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420 00:26:53.708 [2024-11-20 07:22:58.021548] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set 00:26:53.708 [2024-11-20 07:22:58.021711] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor 00:26:53.708 [2024-11-20 07:22:58.021874] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.708 [2024-11-20 07:22:58.021881] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.708 [2024-11-20 07:22:58.021887] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.708 [2024-11-20 07:22:58.021893] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.708 [2024-11-20 07:22:58.033994] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.708 [2024-11-20 07:22:58.034444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.708 [2024-11-20 07:22:58.034500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420 00:26:53.708 [2024-11-20 07:22:58.034523] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set 00:26:53.708 [2024-11-20 07:22:58.035101] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor 00:26:53.708 [2024-11-20 07:22:58.035274] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.708 [2024-11-20 07:22:58.035282] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.708 [2024-11-20 07:22:58.035289] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.708 [2024-11-20 07:22:58.035295] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.708 [2024-11-20 07:22:58.046790] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.708 [2024-11-20 07:22:58.047229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.708 [2024-11-20 07:22:58.047246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420 00:26:53.708 [2024-11-20 07:22:58.047253] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set 00:26:53.708 [2024-11-20 07:22:58.047424] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor 00:26:53.708 [2024-11-20 07:22:58.047596] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.708 [2024-11-20 07:22:58.047604] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.708 [2024-11-20 07:22:58.047610] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.708 [2024-11-20 07:22:58.047616] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.708 [2024-11-20 07:22:58.059677] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.708 [2024-11-20 07:22:58.059993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.708 [2024-11-20 07:22:58.060009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420 00:26:53.708 [2024-11-20 07:22:58.060015] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set 00:26:53.708 [2024-11-20 07:22:58.060178] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor 00:26:53.708 [2024-11-20 07:22:58.060341] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.708 [2024-11-20 07:22:58.060348] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.708 [2024-11-20 07:22:58.060354] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.708 [2024-11-20 07:22:58.060361] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.708 [2024-11-20 07:22:58.072564] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.708 [2024-11-20 07:22:58.072898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.708 [2024-11-20 07:22:58.072914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420 00:26:53.708 [2024-11-20 07:22:58.072921] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set 00:26:53.708 [2024-11-20 07:22:58.073112] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor 00:26:53.708 [2024-11-20 07:22:58.073286] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.708 [2024-11-20 07:22:58.073294] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.708 [2024-11-20 07:22:58.073300] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.708 [2024-11-20 07:22:58.073306] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.708 [2024-11-20 07:22:58.085360] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.708 [2024-11-20 07:22:58.085788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.708 [2024-11-20 07:22:58.085804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420 00:26:53.708 [2024-11-20 07:22:58.085810] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set 00:26:53.708 [2024-11-20 07:22:58.085995] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor 00:26:53.708 [2024-11-20 07:22:58.086168] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.708 [2024-11-20 07:22:58.086176] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.708 [2024-11-20 07:22:58.086182] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.708 [2024-11-20 07:22:58.086189] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.708 [2024-11-20 07:22:58.098179] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.708 [2024-11-20 07:22:58.098605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.708 [2024-11-20 07:22:58.098621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420 00:26:53.708 [2024-11-20 07:22:58.098633] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set 00:26:53.709 [2024-11-20 07:22:58.098795] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor 00:26:53.709 [2024-11-20 07:22:58.098962] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.709 [2024-11-20 07:22:58.098970] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.709 [2024-11-20 07:22:58.098993] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.709 [2024-11-20 07:22:58.098999] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.709 [2024-11-20 07:22:58.110998] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.709 [2024-11-20 07:22:58.111416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.709 [2024-11-20 07:22:58.111433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420 00:26:53.709 [2024-11-20 07:22:58.111439] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set 00:26:53.709 [2024-11-20 07:22:58.111602] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor 00:26:53.709 [2024-11-20 07:22:58.111765] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.709 [2024-11-20 07:22:58.111772] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.709 [2024-11-20 07:22:58.111779] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.709 [2024-11-20 07:22:58.111785] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.709 [2024-11-20 07:22:58.123941] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.709 [2024-11-20 07:22:58.124283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.709 [2024-11-20 07:22:58.124298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420 00:26:53.709 [2024-11-20 07:22:58.124305] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set 00:26:53.709 [2024-11-20 07:22:58.124467] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor 00:26:53.709 [2024-11-20 07:22:58.124630] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.709 [2024-11-20 07:22:58.124637] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.709 [2024-11-20 07:22:58.124643] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.709 [2024-11-20 07:22:58.124649] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.709 [2024-11-20 07:22:58.136846] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.709 [2024-11-20 07:22:58.137305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.709 [2024-11-20 07:22:58.137350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420 00:26:53.709 [2024-11-20 07:22:58.137372] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set 00:26:53.709 [2024-11-20 07:22:58.137967] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor 00:26:53.709 [2024-11-20 07:22:58.138559] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.709 [2024-11-20 07:22:58.138583] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.709 [2024-11-20 07:22:58.138604] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.709 [2024-11-20 07:22:58.138634] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.709 [2024-11-20 07:22:58.149697] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.709 [2024-11-20 07:22:58.150113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.709 [2024-11-20 07:22:58.150131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420 00:26:53.709 [2024-11-20 07:22:58.150138] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set 00:26:53.709 [2024-11-20 07:22:58.150315] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor 00:26:53.709 [2024-11-20 07:22:58.150493] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.709 [2024-11-20 07:22:58.150502] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.709 [2024-11-20 07:22:58.150508] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.709 [2024-11-20 07:22:58.150515] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.709 [2024-11-20 07:22:58.162765] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.709 [2024-11-20 07:22:58.163206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.709 [2024-11-20 07:22:58.163223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420 00:26:53.709 [2024-11-20 07:22:58.163230] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set 00:26:53.709 [2024-11-20 07:22:58.163408] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor 00:26:53.709 [2024-11-20 07:22:58.163585] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.709 [2024-11-20 07:22:58.163593] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.709 [2024-11-20 07:22:58.163599] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.709 [2024-11-20 07:22:58.163606] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.709 [2024-11-20 07:22:58.175743] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.709 [2024-11-20 07:22:58.176185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.709 [2024-11-20 07:22:58.176230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420 00:26:53.709 [2024-11-20 07:22:58.176253] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set 00:26:53.709 [2024-11-20 07:22:58.176785] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor 00:26:53.709 [2024-11-20 07:22:58.176963] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.709 [2024-11-20 07:22:58.176972] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.709 [2024-11-20 07:22:58.176982] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.709 [2024-11-20 07:22:58.176989] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.709 [2024-11-20 07:22:58.188643] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.709 [2024-11-20 07:22:58.189065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.709 [2024-11-20 07:22:58.189111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420 00:26:53.709 [2024-11-20 07:22:58.189133] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set 00:26:53.709 [2024-11-20 07:22:58.189713] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor 00:26:53.709 [2024-11-20 07:22:58.189956] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.709 [2024-11-20 07:22:58.189964] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.709 [2024-11-20 07:22:58.189970] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.709 [2024-11-20 07:22:58.189993] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.709 [2024-11-20 07:22:58.201526] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.709 [2024-11-20 07:22:58.201967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.709 [2024-11-20 07:22:58.202013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420 00:26:53.709 [2024-11-20 07:22:58.202036] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set 00:26:53.710 [2024-11-20 07:22:58.202559] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor 00:26:53.710 [2024-11-20 07:22:58.202956] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.710 [2024-11-20 07:22:58.202974] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.710 [2024-11-20 07:22:58.202988] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.710 [2024-11-20 07:22:58.203002] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.710 [2024-11-20 07:22:58.216407] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.710 [2024-11-20 07:22:58.216940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.710 [2024-11-20 07:22:58.216995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420 00:26:53.710 [2024-11-20 07:22:58.217018] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set 00:26:53.710 [2024-11-20 07:22:58.217600] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor 00:26:53.710 [2024-11-20 07:22:58.218103] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.710 [2024-11-20 07:22:58.218115] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.710 [2024-11-20 07:22:58.218124] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.710 [2024-11-20 07:22:58.218133] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.710 [2024-11-20 07:22:58.229412] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.710 [2024-11-20 07:22:58.229841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.710 [2024-11-20 07:22:58.229858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420 00:26:53.710 [2024-11-20 07:22:58.229865] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set 00:26:53.710 [2024-11-20 07:22:58.230054] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor 00:26:53.710 [2024-11-20 07:22:58.230227] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.710 [2024-11-20 07:22:58.230235] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.710 [2024-11-20 07:22:58.230241] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.710 [2024-11-20 07:22:58.230247] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.710 [2024-11-20 07:22:58.242278] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.710 [2024-11-20 07:22:58.242724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.710 [2024-11-20 07:22:58.242768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420 00:26:53.710 [2024-11-20 07:22:58.242791] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set 00:26:53.710 [2024-11-20 07:22:58.243385] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor 00:26:53.710 [2024-11-20 07:22:58.243899] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.710 [2024-11-20 07:22:58.243907] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.710 [2024-11-20 07:22:58.243913] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.710 [2024-11-20 07:22:58.243920] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.710 [2024-11-20 07:22:58.255382] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.971 [2024-11-20 07:22:58.255804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.971 [2024-11-20 07:22:58.255820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420 00:26:53.971 [2024-11-20 07:22:58.255828] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set 00:26:53.971 [2024-11-20 07:22:58.256013] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor 00:26:53.971 [2024-11-20 07:22:58.256191] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.971 [2024-11-20 07:22:58.256200] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.971 [2024-11-20 07:22:58.256208] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.971 [2024-11-20 07:22:58.256217] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.971 [2024-11-20 07:22:58.268263] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.971 [2024-11-20 07:22:58.268698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.971 [2024-11-20 07:22:58.268742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420 00:26:53.971 [2024-11-20 07:22:58.268774] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set 00:26:53.971 [2024-11-20 07:22:58.269369] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor 00:26:53.971 [2024-11-20 07:22:58.269961] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.971 [2024-11-20 07:22:58.269984] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.972 [2024-11-20 07:22:58.269990] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.972 [2024-11-20 07:22:58.269997] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.972 [2024-11-20 07:22:58.281202] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.972 [2024-11-20 07:22:58.281622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.972 [2024-11-20 07:22:58.281637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420 00:26:53.972 [2024-11-20 07:22:58.281644] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set 00:26:53.972 [2024-11-20 07:22:58.281807] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor 00:26:53.972 [2024-11-20 07:22:58.281975] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.972 [2024-11-20 07:22:58.281984] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.972 [2024-11-20 07:22:58.281990] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.972 [2024-11-20 07:22:58.281996] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.972 [2024-11-20 07:22:58.294139] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.972 [2024-11-20 07:22:58.294568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.972 [2024-11-20 07:22:58.294614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420 00:26:53.972 [2024-11-20 07:22:58.294637] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set 00:26:53.972 [2024-11-20 07:22:58.295109] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor 00:26:53.972 [2024-11-20 07:22:58.295273] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.972 [2024-11-20 07:22:58.295281] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.972 [2024-11-20 07:22:58.295287] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.972 [2024-11-20 07:22:58.295293] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.972 [2024-11-20 07:22:58.306960] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.972 [2024-11-20 07:22:58.307384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.972 [2024-11-20 07:22:58.307400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420 00:26:53.972 [2024-11-20 07:22:58.307407] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set 00:26:53.972 [2024-11-20 07:22:58.307569] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor 00:26:53.972 [2024-11-20 07:22:58.307735] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.972 [2024-11-20 07:22:58.307743] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.972 [2024-11-20 07:22:58.307749] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.972 [2024-11-20 07:22:58.307755] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.972 [2024-11-20 07:22:58.319809] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.972 [2024-11-20 07:22:58.320272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.972 [2024-11-20 07:22:58.320318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420 00:26:53.972 [2024-11-20 07:22:58.320341] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set 00:26:53.972 [2024-11-20 07:22:58.320920] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor 00:26:53.972 [2024-11-20 07:22:58.321122] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.972 [2024-11-20 07:22:58.321130] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.972 [2024-11-20 07:22:58.321136] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.972 [2024-11-20 07:22:58.321143] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.972 [2024-11-20 07:22:58.332706] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.972 [2024-11-20 07:22:58.333053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.972 [2024-11-20 07:22:58.333070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420 00:26:53.972 [2024-11-20 07:22:58.333077] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set 00:26:53.972 [2024-11-20 07:22:58.333240] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor 00:26:53.972 [2024-11-20 07:22:58.333402] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.972 [2024-11-20 07:22:58.333410] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.972 [2024-11-20 07:22:58.333416] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.972 [2024-11-20 07:22:58.333422] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.972 [2024-11-20 07:22:58.345618] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.972 [2024-11-20 07:22:58.345955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.972 [2024-11-20 07:22:58.345971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420 00:26:53.972 [2024-11-20 07:22:58.345994] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set 00:26:53.972 [2024-11-20 07:22:58.346166] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor 00:26:53.972 [2024-11-20 07:22:58.346339] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.972 [2024-11-20 07:22:58.346347] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.972 [2024-11-20 07:22:58.346357] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.972 [2024-11-20 07:22:58.346364] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.972 [2024-11-20 07:22:58.358443] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.972 [2024-11-20 07:22:58.358897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.972 [2024-11-20 07:22:58.358913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420 00:26:53.972 [2024-11-20 07:22:58.358920] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set 00:26:53.972 [2024-11-20 07:22:58.359099] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor 00:26:53.972 [2024-11-20 07:22:58.359271] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.972 [2024-11-20 07:22:58.359279] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.972 [2024-11-20 07:22:58.359286] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.972 [2024-11-20 07:22:58.359292] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.972 [2024-11-20 07:22:58.371334] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.972 [2024-11-20 07:22:58.371756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.972 [2024-11-20 07:22:58.371771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420 00:26:53.972 [2024-11-20 07:22:58.371778] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set 00:26:53.972 [2024-11-20 07:22:58.371939] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor 00:26:53.972 [2024-11-20 07:22:58.372133] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.972 [2024-11-20 07:22:58.372141] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.972 [2024-11-20 07:22:58.372148] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.973 [2024-11-20 07:22:58.372154] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.973 [2024-11-20 07:22:58.384306] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.973 [2024-11-20 07:22:58.384736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.973 [2024-11-20 07:22:58.384753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420 00:26:53.973 [2024-11-20 07:22:58.384761] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set 00:26:53.973 [2024-11-20 07:22:58.384934] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor 00:26:53.973 [2024-11-20 07:22:58.385113] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.973 [2024-11-20 07:22:58.385122] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.973 [2024-11-20 07:22:58.385129] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.973 [2024-11-20 07:22:58.385135] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.973 [2024-11-20 07:22:58.397224] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.973 [2024-11-20 07:22:58.397625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.973 [2024-11-20 07:22:58.397641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420 00:26:53.973 [2024-11-20 07:22:58.397648] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set 00:26:53.973 [2024-11-20 07:22:58.397811] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor 00:26:53.973 [2024-11-20 07:22:58.397995] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.973 [2024-11-20 07:22:58.398004] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.973 [2024-11-20 07:22:58.398011] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.973 [2024-11-20 07:22:58.398017] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.973 [2024-11-20 07:22:58.410023] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.973 [2024-11-20 07:22:58.410471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.973 [2024-11-20 07:22:58.410489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420 00:26:53.973 [2024-11-20 07:22:58.410496] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set 00:26:53.973 [2024-11-20 07:22:58.410669] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor 00:26:53.973 [2024-11-20 07:22:58.410844] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.973 [2024-11-20 07:22:58.410852] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.973 [2024-11-20 07:22:58.410859] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.973 [2024-11-20 07:22:58.410865] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.973 [2024-11-20 07:22:58.423074] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.973 [2024-11-20 07:22:58.423501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.973 [2024-11-20 07:22:58.423545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420 00:26:53.973 [2024-11-20 07:22:58.423568] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set 00:26:53.973 [2024-11-20 07:22:58.424150] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor 00:26:53.973 [2024-11-20 07:22:58.424323] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.973 [2024-11-20 07:22:58.424331] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.973 [2024-11-20 07:22:58.424337] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.973 [2024-11-20 07:22:58.424343] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.973 [2024-11-20 07:22:58.436021] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.973 [2024-11-20 07:22:58.436466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.973 [2024-11-20 07:22:58.436509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420 00:26:53.973 [2024-11-20 07:22:58.436540] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set 00:26:53.973 [2024-11-20 07:22:58.437135] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor 00:26:53.973 [2024-11-20 07:22:58.437640] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.973 [2024-11-20 07:22:58.437648] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.973 [2024-11-20 07:22:58.437654] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.973 [2024-11-20 07:22:58.437661] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.973 [2024-11-20 07:22:58.448886] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.973 [2024-11-20 07:22:58.449220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.973 [2024-11-20 07:22:58.449236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420 00:26:53.973 [2024-11-20 07:22:58.449243] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set 00:26:53.973 [2024-11-20 07:22:58.449405] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor 00:26:53.973 [2024-11-20 07:22:58.449567] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.973 [2024-11-20 07:22:58.449575] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.973 [2024-11-20 07:22:58.449581] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.973 [2024-11-20 07:22:58.449587] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.973 [2024-11-20 07:22:58.461846] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.973 [2024-11-20 07:22:58.462287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.973 [2024-11-20 07:22:58.462304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420 00:26:53.973 [2024-11-20 07:22:58.462311] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set 00:26:53.973 [2024-11-20 07:22:58.462482] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor 00:26:53.973 [2024-11-20 07:22:58.462654] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.973 [2024-11-20 07:22:58.462663] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.973 [2024-11-20 07:22:58.462669] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.973 [2024-11-20 07:22:58.462675] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.973 [2024-11-20 07:22:58.474677] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.973 [2024-11-20 07:22:58.475050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.973 [2024-11-20 07:22:58.475067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420 00:26:53.973 [2024-11-20 07:22:58.475074] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set 00:26:53.973 [2024-11-20 07:22:58.475242] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor 00:26:53.973 [2024-11-20 07:22:58.475409] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.973 [2024-11-20 07:22:58.475417] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.973 [2024-11-20 07:22:58.475423] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.973 [2024-11-20 07:22:58.475429] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.974 [2024-11-20 07:22:58.487555] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.974 [2024-11-20 07:22:58.487955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.974 [2024-11-20 07:22:58.487971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420 00:26:53.974 [2024-11-20 07:22:58.487978] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set 00:26:53.974 [2024-11-20 07:22:58.488141] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor 00:26:53.974 [2024-11-20 07:22:58.488303] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.974 [2024-11-20 07:22:58.488311] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.974 [2024-11-20 07:22:58.488317] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.974 [2024-11-20 07:22:58.488323] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.974 [2024-11-20 07:22:58.500403] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.974 [2024-11-20 07:22:58.500809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.974 [2024-11-20 07:22:58.500827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420 00:26:53.974 [2024-11-20 07:22:58.500834] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set 00:26:53.974 [2024-11-20 07:22:58.501026] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor 00:26:53.974 [2024-11-20 07:22:58.501200] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.974 [2024-11-20 07:22:58.501208] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.974 [2024-11-20 07:22:58.501215] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.974 [2024-11-20 07:22:58.501221] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:53.974 [2024-11-20 07:22:58.513212] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:53.974 [2024-11-20 07:22:58.513644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.974 [2024-11-20 07:22:58.513690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420 00:26:53.974 [2024-11-20 07:22:58.513714] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set 00:26:53.974 [2024-11-20 07:22:58.514308] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor 00:26:53.974 [2024-11-20 07:22:58.514717] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:53.974 [2024-11-20 07:22:58.514726] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:53.974 [2024-11-20 07:22:58.514737] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:53.974 [2024-11-20 07:22:58.514744] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:54.235 [2024-11-20 07:22:58.526118] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.235 [2024-11-20 07:22:58.526534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.235 [2024-11-20 07:22:58.526552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420 00:26:54.235 [2024-11-20 07:22:58.526559] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set 00:26:54.235 [2024-11-20 07:22:58.526732] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor 00:26:54.235 [2024-11-20 07:22:58.526927] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.235 [2024-11-20 07:22:58.526935] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.235 [2024-11-20 07:22:58.526942] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.235 [2024-11-20 07:22:58.526956] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:54.235 [2024-11-20 07:22:58.538995] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.235 [2024-11-20 07:22:58.539412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.235 [2024-11-20 07:22:58.539428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420 00:26:54.235 [2024-11-20 07:22:58.539435] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set 00:26:54.235 [2024-11-20 07:22:58.539597] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor 00:26:54.235 [2024-11-20 07:22:58.539760] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.235 [2024-11-20 07:22:58.539768] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.235 [2024-11-20 07:22:58.539774] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.235 [2024-11-20 07:22:58.539780] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:54.235 [2024-11-20 07:22:58.551842] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.235 [2024-11-20 07:22:58.552264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.235 [2024-11-20 07:22:58.552321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420 00:26:54.235 [2024-11-20 07:22:58.552343] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set 00:26:54.235 [2024-11-20 07:22:58.552878] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor 00:26:54.235 [2024-11-20 07:22:58.553055] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.235 [2024-11-20 07:22:58.553064] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.235 [2024-11-20 07:22:58.553071] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.235 [2024-11-20 07:22:58.553077] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:54.235 [2024-11-20 07:22:58.564701] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.235 [2024-11-20 07:22:58.565125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.235 [2024-11-20 07:22:58.565142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420 00:26:54.235 [2024-11-20 07:22:58.565148] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set 00:26:54.236 [2024-11-20 07:22:58.565311] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor 00:26:54.236 [2024-11-20 07:22:58.565475] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.236 [2024-11-20 07:22:58.565482] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.236 [2024-11-20 07:22:58.565488] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.236 [2024-11-20 07:22:58.565495] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:54.236 [2024-11-20 07:22:58.577640] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.236 [2024-11-20 07:22:58.578061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.236 [2024-11-20 07:22:58.578078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420 00:26:54.236 [2024-11-20 07:22:58.578085] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set 00:26:54.236 [2024-11-20 07:22:58.578258] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor 00:26:54.236 [2024-11-20 07:22:58.578434] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.236 [2024-11-20 07:22:58.578443] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.236 [2024-11-20 07:22:58.578449] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.236 [2024-11-20 07:22:58.578455] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:54.236 [2024-11-20 07:22:58.590539] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.236 [2024-11-20 07:22:58.590958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.236 [2024-11-20 07:22:58.591003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420 00:26:54.236 [2024-11-20 07:22:58.591026] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set 00:26:54.236 [2024-11-20 07:22:58.591508] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor 00:26:54.236 [2024-11-20 07:22:58.591681] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.236 [2024-11-20 07:22:58.591689] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.236 [2024-11-20 07:22:58.591695] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.236 [2024-11-20 07:22:58.591702] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:54.236 [2024-11-20 07:22:58.603385] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.236 [2024-11-20 07:22:58.603805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.236 [2024-11-20 07:22:58.603821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420 00:26:54.236 [2024-11-20 07:22:58.603831] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set 00:26:54.236 [2024-11-20 07:22:58.604011] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor 00:26:54.236 [2024-11-20 07:22:58.604185] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.236 [2024-11-20 07:22:58.604193] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.236 [2024-11-20 07:22:58.604199] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.236 [2024-11-20 07:22:58.604206] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:54.236 [2024-11-20 07:22:58.616201] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.236 [2024-11-20 07:22:58.616625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.236 [2024-11-20 07:22:58.616670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420 00:26:54.236 [2024-11-20 07:22:58.616692] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set 00:26:54.236 [2024-11-20 07:22:58.617287] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor 00:26:54.236 [2024-11-20 07:22:58.617810] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.236 [2024-11-20 07:22:58.617820] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.236 [2024-11-20 07:22:58.617830] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.236 [2024-11-20 07:22:58.617838] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:54.236 [2024-11-20 07:22:58.629141] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.236 [2024-11-20 07:22:58.629571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.236 [2024-11-20 07:22:58.629617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420 00:26:54.236 [2024-11-20 07:22:58.629640] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set 00:26:54.236 [2024-11-20 07:22:58.630230] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor 00:26:54.236 [2024-11-20 07:22:58.630825] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.236 [2024-11-20 07:22:58.630833] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.236 [2024-11-20 07:22:58.630840] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.236 [2024-11-20 07:22:58.630846] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:54.236 [2024-11-20 07:22:58.642087] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.236 [2024-11-20 07:22:58.642513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.236 [2024-11-20 07:22:58.642530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420 00:26:54.236 [2024-11-20 07:22:58.642537] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set 00:26:54.236 [2024-11-20 07:22:58.642708] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor 00:26:54.236 [2024-11-20 07:22:58.642885] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.236 [2024-11-20 07:22:58.642894] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.236 [2024-11-20 07:22:58.642900] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.236 [2024-11-20 07:22:58.642906] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:54.236 [2024-11-20 07:22:58.655062] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.236 [2024-11-20 07:22:58.655458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.236 [2024-11-20 07:22:58.655474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420 00:26:54.236 [2024-11-20 07:22:58.655481] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set 00:26:54.236 [2024-11-20 07:22:58.655643] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor 00:26:54.237 [2024-11-20 07:22:58.655805] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.237 [2024-11-20 07:22:58.655813] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.237 [2024-11-20 07:22:58.655819] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.237 [2024-11-20 07:22:58.655825] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:54.237 [2024-11-20 07:22:58.668001] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.237 [2024-11-20 07:22:58.668360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.237 [2024-11-20 07:22:58.668377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420 00:26:54.237 [2024-11-20 07:22:58.668384] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set 00:26:54.237 [2024-11-20 07:22:58.668560] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor 00:26:54.237 [2024-11-20 07:22:58.668737] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.237 [2024-11-20 07:22:58.668745] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.237 [2024-11-20 07:22:58.668751] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.237 [2024-11-20 07:22:58.668758] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:54.237 5966.00 IOPS, 23.30 MiB/s [2024-11-20T06:22:58.793Z] [2024-11-20 07:22:58.682504] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.237 [2024-11-20 07:22:58.682943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.237 [2024-11-20 07:22:58.682967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420 00:26:54.237 [2024-11-20 07:22:58.682974] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set 00:26:54.237 [2024-11-20 07:22:58.683163] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor 00:26:54.237 [2024-11-20 07:22:58.683337] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.237 [2024-11-20 07:22:58.683345] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.237 [2024-11-20 07:22:58.683355] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.237 [2024-11-20 07:22:58.683361] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:54.237 [2024-11-20 07:22:58.695471] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:54.237 [2024-11-20 07:22:58.695823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.237 [2024-11-20 07:22:58.695840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420 00:26:54.237 [2024-11-20 07:22:58.695847] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set 00:26:54.237 [2024-11-20 07:22:58.696027] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor 00:26:54.237 [2024-11-20 07:22:58.696199] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:54.237 [2024-11-20 07:22:58.696207] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:54.237 [2024-11-20 07:22:58.696214] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:54.237 [2024-11-20 07:22:58.696221] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:54.237 [2024-11-20 07:22:58.708621] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:54.237 [2024-11-20 07:22:58.709027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.237 [2024-11-20 07:22:58.709044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420
00:26:54.237 [2024-11-20 07:22:58.709051] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set
00:26:54.237 [2024-11-20 07:22:58.709224] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor
00:26:54.237 [2024-11-20 07:22:58.709396] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:54.237 [2024-11-20 07:22:58.709405] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:54.237 [2024-11-20 07:22:58.709412] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:54.237 [2024-11-20 07:22:58.709418] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:54.237 [2024-11-20 07:22:58.721571] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:54.237 [2024-11-20 07:22:58.721901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.237 [2024-11-20 07:22:58.721918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420
00:26:54.237 [2024-11-20 07:22:58.721926] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set
00:26:54.237 [2024-11-20 07:22:58.722103] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor
00:26:54.237 [2024-11-20 07:22:58.722277] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:54.237 [2024-11-20 07:22:58.722285] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:54.237 [2024-11-20 07:22:58.722292] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:54.237 [2024-11-20 07:22:58.722298] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:54.237 [2024-11-20 07:22:58.734804] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:54.237 [2024-11-20 07:22:58.735169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.237 [2024-11-20 07:22:58.735188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420
00:26:54.237 [2024-11-20 07:22:58.735195] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set
00:26:54.237 [2024-11-20 07:22:58.735378] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor
00:26:54.237 [2024-11-20 07:22:58.735551] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:54.237 [2024-11-20 07:22:58.735559] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:54.237 [2024-11-20 07:22:58.735566] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:54.237 [2024-11-20 07:22:58.735572] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:54.237 [2024-11-20 07:22:58.747706] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:54.237 [2024-11-20 07:22:58.748103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.237 [2024-11-20 07:22:58.748121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420
00:26:54.237 [2024-11-20 07:22:58.748128] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set
00:26:54.237 [2024-11-20 07:22:58.748301] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor
00:26:54.237 [2024-11-20 07:22:58.748474] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:54.237 [2024-11-20 07:22:58.748482] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:54.237 [2024-11-20 07:22:58.748488] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:54.237 [2024-11-20 07:22:58.748494] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:54.237 [2024-11-20 07:22:58.760899] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:54.237 [2024-11-20 07:22:58.761258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.238 [2024-11-20 07:22:58.761303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420
00:26:54.238 [2024-11-20 07:22:58.761326] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set
00:26:54.238 [2024-11-20 07:22:58.761889] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor
00:26:54.238 [2024-11-20 07:22:58.762072] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:54.238 [2024-11-20 07:22:58.762081] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:54.238 [2024-11-20 07:22:58.762087] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:54.238 [2024-11-20 07:22:58.762094] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:54.238 [2024-11-20 07:22:58.773838] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:54.238 [2024-11-20 07:22:58.774283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.238 [2024-11-20 07:22:58.774300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420
00:26:54.238 [2024-11-20 07:22:58.774310] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set
00:26:54.238 [2024-11-20 07:22:58.774484] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor
00:26:54.238 [2024-11-20 07:22:58.774657] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:54.238 [2024-11-20 07:22:58.774665] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:54.238 [2024-11-20 07:22:58.774671] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:54.238 [2024-11-20 07:22:58.774677] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:54.498 [2024-11-20 07:22:58.786853] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:54.498 [2024-11-20 07:22:58.787173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.498 [2024-11-20 07:22:58.787189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420
00:26:54.498 [2024-11-20 07:22:58.787196] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set
00:26:54.498 [2024-11-20 07:22:58.787368] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor
00:26:54.498 [2024-11-20 07:22:58.787540] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:54.498 [2024-11-20 07:22:58.787548] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:54.498 [2024-11-20 07:22:58.787555] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:54.498 [2024-11-20 07:22:58.787561] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:54.498 [2024-11-20 07:22:58.799863] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:54.498 [2024-11-20 07:22:58.800202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.498 [2024-11-20 07:22:58.800260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420
00:26:54.498 [2024-11-20 07:22:58.800284] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set
00:26:54.498 [2024-11-20 07:22:58.800863] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor
00:26:54.498 [2024-11-20 07:22:58.801371] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:54.498 [2024-11-20 07:22:58.801380] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:54.498 [2024-11-20 07:22:58.801387] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:54.498 [2024-11-20 07:22:58.801394] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:54.498 [2024-11-20 07:22:58.812867] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:54.498 [2024-11-20 07:22:58.813211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.499 [2024-11-20 07:22:58.813228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420
00:26:54.499 [2024-11-20 07:22:58.813235] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set
00:26:54.499 [2024-11-20 07:22:58.813408] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor
00:26:54.499 [2024-11-20 07:22:58.813584] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:54.499 [2024-11-20 07:22:58.813592] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:54.499 [2024-11-20 07:22:58.813599] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:54.499 [2024-11-20 07:22:58.813606] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:54.499 [2024-11-20 07:22:58.825876] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:54.499 [2024-11-20 07:22:58.826298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.499 [2024-11-20 07:22:58.826315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420
00:26:54.499 [2024-11-20 07:22:58.826322] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set
00:26:54.499 [2024-11-20 07:22:58.826493] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor
00:26:54.499 [2024-11-20 07:22:58.826665] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:54.499 [2024-11-20 07:22:58.826673] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:54.499 [2024-11-20 07:22:58.826679] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:54.499 [2024-11-20 07:22:58.826685] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:54.499 [2024-11-20 07:22:58.838897] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:54.499 [2024-11-20 07:22:58.839250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.499 [2024-11-20 07:22:58.839266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420
00:26:54.499 [2024-11-20 07:22:58.839273] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set
00:26:54.499 [2024-11-20 07:22:58.839445] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor
00:26:54.499 [2024-11-20 07:22:58.839619] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:54.499 [2024-11-20 07:22:58.839628] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:54.499 [2024-11-20 07:22:58.839634] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:54.499 [2024-11-20 07:22:58.839640] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:54.499 [2024-11-20 07:22:58.851869] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:54.499 [2024-11-20 07:22:58.852227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.499 [2024-11-20 07:22:58.852244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420
00:26:54.499 [2024-11-20 07:22:58.852251] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set
00:26:54.499 [2024-11-20 07:22:58.852423] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor
00:26:54.499 [2024-11-20 07:22:58.852595] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:54.499 [2024-11-20 07:22:58.852604] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:54.499 [2024-11-20 07:22:58.852613] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:54.499 [2024-11-20 07:22:58.852620] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:54.499 [2024-11-20 07:22:58.864973] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:54.499 [2024-11-20 07:22:58.865368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.499 [2024-11-20 07:22:58.865384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420
00:26:54.499 [2024-11-20 07:22:58.865391] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set
00:26:54.499 [2024-11-20 07:22:58.865563] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor
00:26:54.499 [2024-11-20 07:22:58.865734] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:54.499 [2024-11-20 07:22:58.865742] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:54.499 [2024-11-20 07:22:58.865749] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:54.499 [2024-11-20 07:22:58.865755] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:54.499 [2024-11-20 07:22:58.877930] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:54.499 [2024-11-20 07:22:58.878303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.499 [2024-11-20 07:22:58.878320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420
00:26:54.499 [2024-11-20 07:22:58.878327] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set
00:26:54.499 [2024-11-20 07:22:58.878498] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor
00:26:54.499 [2024-11-20 07:22:58.878671] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:54.499 [2024-11-20 07:22:58.878680] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:54.499 [2024-11-20 07:22:58.878687] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:54.499 [2024-11-20 07:22:58.878694] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:54.499 [2024-11-20 07:22:58.890917] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:54.499 [2024-11-20 07:22:58.891363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.499 [2024-11-20 07:22:58.891404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420
00:26:54.499 [2024-11-20 07:22:58.891429] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set
00:26:54.499 [2024-11-20 07:22:58.892021] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor
00:26:54.499 [2024-11-20 07:22:58.892556] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:54.499 [2024-11-20 07:22:58.892564] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:54.499 [2024-11-20 07:22:58.892571] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:54.499 [2024-11-20 07:22:58.892577] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:54.499 [2024-11-20 07:22:58.903937] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:54.499 [2024-11-20 07:22:58.904314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.499 [2024-11-20 07:22:58.904330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420
00:26:54.499 [2024-11-20 07:22:58.904337] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set
00:26:54.499 [2024-11-20 07:22:58.904510] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor
00:26:54.500 [2024-11-20 07:22:58.904682] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:54.500 [2024-11-20 07:22:58.904690] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:54.500 [2024-11-20 07:22:58.904696] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:54.500 [2024-11-20 07:22:58.904703] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:54.500 [2024-11-20 07:22:58.916855] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:54.500 [2024-11-20 07:22:58.917262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.500 [2024-11-20 07:22:58.917307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420
00:26:54.500 [2024-11-20 07:22:58.917330] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set
00:26:54.500 [2024-11-20 07:22:58.917910] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor
00:26:54.500 [2024-11-20 07:22:58.918526] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:54.500 [2024-11-20 07:22:58.918538] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:54.500 [2024-11-20 07:22:58.918544] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:54.500 [2024-11-20 07:22:58.918551] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:54.500 [2024-11-20 07:22:58.929972] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:54.500 [2024-11-20 07:22:58.930335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.500 [2024-11-20 07:22:58.930352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420
00:26:54.500 [2024-11-20 07:22:58.930360] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set
00:26:54.500 [2024-11-20 07:22:58.930537] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor
00:26:54.500 [2024-11-20 07:22:58.930715] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:54.500 [2024-11-20 07:22:58.930724] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:54.500 [2024-11-20 07:22:58.930730] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:54.500 [2024-11-20 07:22:58.930737] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:54.500 [2024-11-20 07:22:58.942904] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:54.500 [2024-11-20 07:22:58.943272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.500 [2024-11-20 07:22:58.943290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420
00:26:54.500 [2024-11-20 07:22:58.943300] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set
00:26:54.500 [2024-11-20 07:22:58.943472] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor
00:26:54.500 [2024-11-20 07:22:58.943645] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:54.500 [2024-11-20 07:22:58.943654] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:54.500 [2024-11-20 07:22:58.943660] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:54.500 [2024-11-20 07:22:58.943666] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:54.500 [2024-11-20 07:22:58.955812] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:54.500 [2024-11-20 07:22:58.956106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.500 [2024-11-20 07:22:58.956123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420
00:26:54.500 [2024-11-20 07:22:58.956130] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set
00:26:54.500 [2024-11-20 07:22:58.956302] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor
00:26:54.500 [2024-11-20 07:22:58.956474] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:54.500 [2024-11-20 07:22:58.956482] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:54.500 [2024-11-20 07:22:58.956489] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:54.500 [2024-11-20 07:22:58.956495] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:54.500 [2024-11-20 07:22:58.968852] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:54.500 [2024-11-20 07:22:58.969316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.500 [2024-11-20 07:22:58.969334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420
00:26:54.500 [2024-11-20 07:22:58.969341] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set
00:26:54.500 [2024-11-20 07:22:58.969513] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor
00:26:54.500 [2024-11-20 07:22:58.969686] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:54.500 [2024-11-20 07:22:58.969694] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:54.500 [2024-11-20 07:22:58.969702] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:54.500 [2024-11-20 07:22:58.969708] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:54.500 [2024-11-20 07:22:58.981816] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:54.500 [2024-11-20 07:22:58.982125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.500 [2024-11-20 07:22:58.982142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420
00:26:54.500 [2024-11-20 07:22:58.982149] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set
00:26:54.500 [2024-11-20 07:22:58.982322] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor
00:26:54.500 [2024-11-20 07:22:58.982497] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:54.500 [2024-11-20 07:22:58.982505] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:54.500 [2024-11-20 07:22:58.982511] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:54.500 [2024-11-20 07:22:58.982517] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:54.500 [2024-11-20 07:22:58.994843] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:54.500 [2024-11-20 07:22:58.995210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.501 [2024-11-20 07:22:58.995227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420
00:26:54.501 [2024-11-20 07:22:58.995234] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set
00:26:54.501 [2024-11-20 07:22:58.995405] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor
00:26:54.501 [2024-11-20 07:22:58.995578] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:54.501 [2024-11-20 07:22:58.995586] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:54.501 [2024-11-20 07:22:58.995592] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:54.501 [2024-11-20 07:22:58.995599] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:54.501 [2024-11-20 07:22:59.007895] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:54.501 [2024-11-20 07:22:59.008249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.501 [2024-11-20 07:22:59.008265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420
00:26:54.501 [2024-11-20 07:22:59.008272] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set
00:26:54.501 [2024-11-20 07:22:59.008445] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor
00:26:54.501 [2024-11-20 07:22:59.008618] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:54.501 [2024-11-20 07:22:59.008627] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:54.501 [2024-11-20 07:22:59.008633] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:54.501 [2024-11-20 07:22:59.008639] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:54.501 [2024-11-20 07:22:59.020747] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:54.501 [2024-11-20 07:22:59.021172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.501 [2024-11-20 07:22:59.021190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420
00:26:54.501 [2024-11-20 07:22:59.021197] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set
00:26:54.501 [2024-11-20 07:22:59.021369] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor
00:26:54.501 [2024-11-20 07:22:59.021541] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:54.501 [2024-11-20 07:22:59.021550] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:54.501 [2024-11-20 07:22:59.021560] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:54.501 [2024-11-20 07:22:59.021567] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:54.501 [2024-11-20 07:22:59.033782] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:54.501 [2024-11-20 07:22:59.034157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.501 [2024-11-20 07:22:59.034174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420
00:26:54.501 [2024-11-20 07:22:59.034181] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set
00:26:54.501 [2024-11-20 07:22:59.034353] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor
00:26:54.501 [2024-11-20 07:22:59.034529] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:54.501 [2024-11-20 07:22:59.034538] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:54.501 [2024-11-20 07:22:59.034544] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:54.501 [2024-11-20 07:22:59.034550] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:54.501 [2024-11-20 07:22:59.046821] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:54.762 [2024-11-20 07:22:59.047263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.762 [2024-11-20 07:22:59.047281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420
00:26:54.762 [2024-11-20 07:22:59.047288] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set
00:26:54.762 [2024-11-20 07:22:59.047467] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor
00:26:54.762 [2024-11-20 07:22:59.047646] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:54.762 [2024-11-20 07:22:59.047657] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:54.762 [2024-11-20 07:22:59.047664] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:54.762 [2024-11-20 07:22:59.047671] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:54.762 [2024-11-20 07:22:59.059783] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:54.762 [2024-11-20 07:22:59.060117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.762 [2024-11-20 07:22:59.060134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420
00:26:54.762 [2024-11-20 07:22:59.060141] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set
00:26:54.762 [2024-11-20 07:22:59.060313] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor
00:26:54.762 [2024-11-20 07:22:59.060484] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:54.762 [2024-11-20 07:22:59.060493] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:54.762 [2024-11-20 07:22:59.060499] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:54.762 [2024-11-20 07:22:59.060506] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:54.762 [2024-11-20 07:22:59.072765] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:54.762 [2024-11-20 07:22:59.073188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.762 [2024-11-20 07:22:59.073205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420
00:26:54.762 [2024-11-20 07:22:59.073213] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set
00:26:54.762 [2024-11-20 07:22:59.073385] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor
00:26:54.762 [2024-11-20 07:22:59.073557] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:54.762 [2024-11-20 07:22:59.073566] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:54.762 [2024-11-20 07:22:59.073572] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:54.762 [2024-11-20 07:22:59.073578] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:54.762 [2024-11-20 07:22:59.085681] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:54.762 [2024-11-20 07:22:59.086080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.762 [2024-11-20 07:22:59.086098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420
00:26:54.762 [2024-11-20 07:22:59.086105] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set
00:26:54.762 [2024-11-20 07:22:59.086281] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor
00:26:54.762 [2024-11-20 07:22:59.086445] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:54.762 [2024-11-20 07:22:59.086453] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:54.762 [2024-11-20 07:22:59.086459] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:54.762 [2024-11-20 07:22:59.086465] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:54.762 [2024-11-20 07:22:59.098581] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:54.762 [2024-11-20 07:22:59.098919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.762 [2024-11-20 07:22:59.098935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420
00:26:54.762 [2024-11-20 07:22:59.098942] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set
00:26:54.762 [2024-11-20 07:22:59.099120] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor
00:26:54.762 [2024-11-20 07:22:59.099293] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:54.762 [2024-11-20 07:22:59.099302] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:54.762 [2024-11-20 07:22:59.099308] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:54.762 [2024-11-20 07:22:59.099314] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:54.762 [2024-11-20 07:22:59.111614] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:54.762 [2024-11-20 07:22:59.112080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.762 [2024-11-20 07:22:59.112127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420
00:26:54.763 [2024-11-20 07:22:59.112158] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set
00:26:54.763 [2024-11-20 07:22:59.112737] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor
00:26:54.763 [2024-11-20 07:22:59.112952] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:54.763 [2024-11-20 07:22:59.112960] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:54.763 [2024-11-20 07:22:59.112967] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:54.763 [2024-11-20 07:22:59.112973] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:54.763 [2024-11-20 07:22:59.124554] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:54.763 [2024-11-20 07:22:59.124959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.763 [2024-11-20 07:22:59.124976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420
00:26:54.763 [2024-11-20 07:22:59.124983] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set
00:26:54.763 [2024-11-20 07:22:59.125155] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor
00:26:54.763 [2024-11-20 07:22:59.125328] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:54.763 [2024-11-20 07:22:59.125336] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:54.763 [2024-11-20 07:22:59.125342] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:54.763 [2024-11-20 07:22:59.125349] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:54.763 [2024-11-20 07:22:59.137379] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:54.763 [2024-11-20 07:22:59.137770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.763 [2024-11-20 07:22:59.137819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420
00:26:54.763 [2024-11-20 07:22:59.137841] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set
00:26:54.763 [2024-11-20 07:22:59.138431] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor
00:26:54.763 [2024-11-20 07:22:59.139022] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:54.763 [2024-11-20 07:22:59.139048] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:54.763 [2024-11-20 07:22:59.139068] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:54.763 [2024-11-20 07:22:59.139089] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:54.763 [2024-11-20 07:22:59.150264] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:54.763 [2024-11-20 07:22:59.150633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.763 [2024-11-20 07:22:59.150649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420
00:26:54.763 [2024-11-20 07:22:59.150656] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set
00:26:54.763 [2024-11-20 07:22:59.150817] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor
00:26:54.763 [2024-11-20 07:22:59.150987] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:54.763 [2024-11-20 07:22:59.150996] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:54.763 [2024-11-20 07:22:59.151002] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:54.763 [2024-11-20 07:22:59.151008] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:54.763 [2024-11-20 07:22:59.163249] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:54.763 [2024-11-20 07:22:59.163674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.763 [2024-11-20 07:22:59.163718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420
00:26:54.763 [2024-11-20 07:22:59.163741] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set
00:26:54.763 [2024-11-20 07:22:59.164208] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor
00:26:54.763 [2024-11-20 07:22:59.164381] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:54.763 [2024-11-20 07:22:59.164389] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:54.763 [2024-11-20 07:22:59.164395] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:54.763 [2024-11-20 07:22:59.164402] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:54.763 [2024-11-20 07:22:59.176127] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:54.763 [2024-11-20 07:22:59.176465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.763 [2024-11-20 07:22:59.176482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420
00:26:54.763 [2024-11-20 07:22:59.176489] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set
00:26:54.763 [2024-11-20 07:22:59.176662] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor
00:26:54.763 [2024-11-20 07:22:59.176835] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:54.763 [2024-11-20 07:22:59.176842] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:54.763 [2024-11-20 07:22:59.176849] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:54.763 [2024-11-20 07:22:59.176855] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:54.763 [2024-11-20 07:22:59.189297] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:54.763 [2024-11-20 07:22:59.189691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.763 [2024-11-20 07:22:59.189709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420
00:26:54.763 [2024-11-20 07:22:59.189716] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set
00:26:54.763 [2024-11-20 07:22:59.189893] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor
00:26:54.763 [2024-11-20 07:22:59.190076] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:54.763 [2024-11-20 07:22:59.190086] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:54.763 [2024-11-20 07:22:59.190095] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:54.763 [2024-11-20 07:22:59.190102] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:54.763 [2024-11-20 07:22:59.202403] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:54.763 [2024-11-20 07:22:59.202815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.763 [2024-11-20 07:22:59.202831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420
00:26:54.763 [2024-11-20 07:22:59.202838] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set
00:26:54.763 [2024-11-20 07:22:59.203016] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor
00:26:54.763 [2024-11-20 07:22:59.203190] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:54.763 [2024-11-20 07:22:59.203197] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:54.763 [2024-11-20 07:22:59.203204] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:54.763 [2024-11-20 07:22:59.203211] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:54.763 [2024-11-20 07:22:59.215268] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:54.763 [2024-11-20 07:22:59.215668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.764 [2024-11-20 07:22:59.215684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420
00:26:54.764 [2024-11-20 07:22:59.215691] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set
00:26:54.764 [2024-11-20 07:22:59.215854] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor
00:26:54.764 [2024-11-20 07:22:59.216041] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:54.764 [2024-11-20 07:22:59.216050] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:54.764 [2024-11-20 07:22:59.216056] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:54.764 [2024-11-20 07:22:59.216063] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:54.764 [2024-11-20 07:22:59.228060] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:54.764 [2024-11-20 07:22:59.228407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.764 [2024-11-20 07:22:59.228424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420
00:26:54.764 [2024-11-20 07:22:59.228430] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set
00:26:54.764 [2024-11-20 07:22:59.228594] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor
00:26:54.764 [2024-11-20 07:22:59.228755] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:54.764 [2024-11-20 07:22:59.228763] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:54.764 [2024-11-20 07:22:59.228770] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:54.764 [2024-11-20 07:22:59.228776] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:54.764 [2024-11-20 07:22:59.241000] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:54.764 [2024-11-20 07:22:59.241419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.764 [2024-11-20 07:22:59.241436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420
00:26:54.764 [2024-11-20 07:22:59.241443] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set
00:26:54.764 [2024-11-20 07:22:59.241615] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor
00:26:54.764 [2024-11-20 07:22:59.241791] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:54.764 [2024-11-20 07:22:59.241800] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:54.764 [2024-11-20 07:22:59.241806] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:54.764 [2024-11-20 07:22:59.241812] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:54.764 [2024-11-20 07:22:59.253815] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:54.764 [2024-11-20 07:22:59.254228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.764 [2024-11-20 07:22:59.254244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420
00:26:54.764 [2024-11-20 07:22:59.254251] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set
00:26:54.764 [2024-11-20 07:22:59.254422] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor
00:26:54.764 [2024-11-20 07:22:59.254594] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:54.764 [2024-11-20 07:22:59.254602] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:54.764 [2024-11-20 07:22:59.254608] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:54.764 [2024-11-20 07:22:59.254615] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:54.764 [2024-11-20 07:22:59.266624] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:54.764 [2024-11-20 07:22:59.266953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.764 [2024-11-20 07:22:59.267007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420
00:26:54.764 [2024-11-20 07:22:59.267030] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set
00:26:54.764 [2024-11-20 07:22:59.267551] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor
00:26:54.764 [2024-11-20 07:22:59.267714] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:54.764 [2024-11-20 07:22:59.267722] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:54.764 [2024-11-20 07:22:59.267728] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:54.764 [2024-11-20 07:22:59.267735] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:54.764 [2024-11-20 07:22:59.279539] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:54.764 [2024-11-20 07:22:59.279935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.764 [2024-11-20 07:22:59.279955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420
00:26:54.764 [2024-11-20 07:22:59.279965] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set
00:26:54.764 [2024-11-20 07:22:59.280127] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor
00:26:54.764 [2024-11-20 07:22:59.280289] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:54.764 [2024-11-20 07:22:59.280297] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:54.764 [2024-11-20 07:22:59.280302] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:54.764 [2024-11-20 07:22:59.280308] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:54.764 [2024-11-20 07:22:59.292402] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:54.764 [2024-11-20 07:22:59.292802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.764 [2024-11-20 07:22:59.292818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420
00:26:54.764 [2024-11-20 07:22:59.292825] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set
00:26:54.764 [2024-11-20 07:22:59.293009] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor
00:26:54.764 [2024-11-20 07:22:59.293183] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:54.764 [2024-11-20 07:22:59.293191] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:54.764 [2024-11-20 07:22:59.293197] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:54.764 [2024-11-20 07:22:59.293203] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:54.764 [2024-11-20 07:22:59.305200] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:54.764 [2024-11-20 07:22:59.305618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.764 [2024-11-20 07:22:59.305674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420
00:26:54.764 [2024-11-20 07:22:59.305697] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set
00:26:54.764 [2024-11-20 07:22:59.306292] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor
00:26:54.764 [2024-11-20 07:22:59.306537] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:54.764 [2024-11-20 07:22:59.306546] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:54.764 [2024-11-20 07:22:59.306553] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:54.764 [2024-11-20 07:22:59.306559] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:55.025 [2024-11-20 07:22:59.318216] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:55.025 [2024-11-20 07:22:59.318636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.025 [2024-11-20 07:22:59.318653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420
00:26:55.025 [2024-11-20 07:22:59.318660] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set
00:26:55.025 [2024-11-20 07:22:59.318832] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor
00:26:55.025 [2024-11-20 07:22:59.319014] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:55.025 [2024-11-20 07:22:59.319023] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:55.025 [2024-11-20 07:22:59.319029] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:55.025 [2024-11-20 07:22:59.319036] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:55.025 [2024-11-20 07:22:59.331030] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:55.025 [2024-11-20 07:22:59.331429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.025 [2024-11-20 07:22:59.331446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420
00:26:55.025 [2024-11-20 07:22:59.331453] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set
00:26:55.025 [2024-11-20 07:22:59.331626] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor
00:26:55.025 [2024-11-20 07:22:59.331799] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:55.025 [2024-11-20 07:22:59.331807] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:55.025 [2024-11-20 07:22:59.331813] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:55.025 [2024-11-20 07:22:59.331820] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:55.025 [2024-11-20 07:22:59.343957] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:55.025 [2024-11-20 07:22:59.344375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.025 [2024-11-20 07:22:59.344392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420
00:26:55.025 [2024-11-20 07:22:59.344399] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set
00:26:55.025 [2024-11-20 07:22:59.344571] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor
00:26:55.025 [2024-11-20 07:22:59.344743] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:55.025 [2024-11-20 07:22:59.344751] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:55.025 [2024-11-20 07:22:59.344757] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:55.025 [2024-11-20 07:22:59.344763] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:55.025 [2024-11-20 07:22:59.356833] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:55.025 [2024-11-20 07:22:59.357253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.025 [2024-11-20 07:22:59.357269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420
00:26:55.025 [2024-11-20 07:22:59.357277] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set
00:26:55.025 [2024-11-20 07:22:59.357470] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor
00:26:55.025 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 1344537 Killed "${NVMF_APP[@]}" "$@"
00:26:55.025 [2024-11-20 07:22:59.357648] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:55.025 [2024-11-20 07:22:59.357662] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:55.025 [2024-11-20 07:22:59.357669] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:55.025 [2024-11-20 07:22:59.357676] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:55.025 07:22:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init
00:26:55.025 07:22:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
00:26:55.025 07:22:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:26:55.025 07:22:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable
00:26:55.025 07:22:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:26:55.025 07:22:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=1345935
00:26:55.025 07:22:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:26:55.025 07:22:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 1345935
00:26:55.025 07:22:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@833 -- # '[' -z 1345935 ']'
00:26:55.025 07:22:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:26:55.025 07:22:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # local max_retries=100
00:26:55.025 07:22:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
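The `waitforlisten` step traced above blocks until the freshly started `nvmf_tgt` process creates and accepts on its RPC socket `/var/tmp/spdk.sock` (with `max_retries=100`). As a hedged illustration of that pattern — not SPDK's actual helper, which is a shell function — the idea can be sketched as a poll loop over a UNIX domain socket; the path, retry count, and delay below are illustrative:

```python
import os
import socket
import time

def wait_for_listen(path: str, retries: int = 100, delay: float = 0.1) -> bool:
    """Poll until a UNIX domain socket at `path` exists and accepts connections."""
    for _ in range(retries):
        if os.path.exists(path):
            s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            try:
                # A successful connect proves a server is accepting, not
                # merely that the socket file exists on disk.
                s.connect(path)
                return True
            except OSError:
                pass  # socket file present but server not accepting yet
            finally:
                s.close()
        time.sleep(delay)
    return False
```

Checking that `connect()` succeeds (rather than just that the file exists) matters here: the target creates the socket file slightly before it starts servicing RPCs.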
00:26:55.025 07:22:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # xtrace_disable
00:26:55.025 07:22:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:26:55.025 [2024-11-20 07:22:59.369900] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:55.025 [2024-11-20 07:22:59.370334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.025 [2024-11-20 07:22:59.370352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420
00:26:55.025 [2024-11-20 07:22:59.370359] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set
00:26:55.025 [2024-11-20 07:22:59.370537] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor
00:26:55.025 [2024-11-20 07:22:59.370717] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:55.025 [2024-11-20 07:22:59.370726] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:55.026 [2024-11-20 07:22:59.370734] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:55.026 [2024-11-20 07:22:59.370741] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:55.026 [2024-11-20 07:22:59.382957] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:55.026 [2024-11-20 07:22:59.383312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.026 [2024-11-20 07:22:59.383329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420
00:26:55.026 [2024-11-20 07:22:59.383336] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set
00:26:55.026 [2024-11-20 07:22:59.383514] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor
00:26:55.026 [2024-11-20 07:22:59.383692] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:55.026 [2024-11-20 07:22:59.383703] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:55.026 [2024-11-20 07:22:59.383710] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:55.026 [2024-11-20 07:22:59.383717] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:55.026 [2024-11-20 07:22:59.396109] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:55.026 [2024-11-20 07:22:59.396546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.026 [2024-11-20 07:22:59.396563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420
00:26:55.026 [2024-11-20 07:22:59.396571] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set
00:26:55.026 [2024-11-20 07:22:59.396747] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor
00:26:55.026 [2024-11-20 07:22:59.396924] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:55.026 [2024-11-20 07:22:59.396933] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:55.026 [2024-11-20 07:22:59.396940] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:55.026 [2024-11-20 07:22:59.396953] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:55.026 [2024-11-20 07:22:59.409061] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:55.026 [2024-11-20 07:22:59.409468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.026 [2024-11-20 07:22:59.409485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420
00:26:55.026 [2024-11-20 07:22:59.409492] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set
00:26:55.026 [2024-11-20 07:22:59.409665] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor
00:26:55.026 [2024-11-20 07:22:59.409842] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:55.026 [2024-11-20 07:22:59.409850] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:55.026 [2024-11-20 07:22:59.409856] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:55.026 [2024-11-20 07:22:59.409863] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:55.026 [2024-11-20 07:22:59.414853] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization...
00:26:55.026 [2024-11-20 07:22:59.414891] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:26:55.026 [2024-11-20 07:22:59.422128] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:55.026 [2024-11-20 07:22:59.422559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.026 [2024-11-20 07:22:59.422575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420
00:26:55.026 [2024-11-20 07:22:59.422583] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set
00:26:55.026 [2024-11-20 07:22:59.422755] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor
00:26:55.026 [2024-11-20 07:22:59.422931] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:55.026 [2024-11-20 07:22:59.422940] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:55.026 [2024-11-20 07:22:59.422952] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:55.026 [2024-11-20 07:22:59.422959] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:55.026 [2024-11-20 07:22:59.435200] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:55.026 [2024-11-20 07:22:59.435564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.026 [2024-11-20 07:22:59.435581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420
00:26:55.026 [2024-11-20 07:22:59.435589] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set
00:26:55.026 [2024-11-20 07:22:59.435767] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor
00:26:55.026 [2024-11-20 07:22:59.435945] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:55.026 [2024-11-20 07:22:59.435960] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:55.026 [2024-11-20 07:22:59.435967] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:55.026 [2024-11-20 07:22:59.435974] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:55.026 [2024-11-20 07:22:59.448366] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:55.026 [2024-11-20 07:22:59.448803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.026 [2024-11-20 07:22:59.448820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420
00:26:55.026 [2024-11-20 07:22:59.448827] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set
00:26:55.026 [2024-11-20 07:22:59.449011] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor
00:26:55.026 [2024-11-20 07:22:59.449190] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:55.026 [2024-11-20 07:22:59.449199] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:55.026 [2024-11-20 07:22:59.449206] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:55.026 [2024-11-20 07:22:59.449213] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:55.026 [2024-11-20 07:22:59.461537] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:55.026 [2024-11-20 07:22:59.461994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.026 [2024-11-20 07:22:59.462011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420
00:26:55.026 [2024-11-20 07:22:59.462020] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set
00:26:55.026 [2024-11-20 07:22:59.462197] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor
00:26:55.026 [2024-11-20 07:22:59.462375] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:55.026 [2024-11-20 07:22:59.462383] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:55.026 [2024-11-20 07:22:59.462390] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:55.026 [2024-11-20 07:22:59.462400] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:55.026 [2024-11-20 07:22:59.474502] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:55.026 [2024-11-20 07:22:59.474852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.026 [2024-11-20 07:22:59.474868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420
00:26:55.026 [2024-11-20 07:22:59.474876] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set
00:26:55.026 [2024-11-20 07:22:59.475053] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor
00:26:55.026 [2024-11-20 07:22:59.475226] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:55.027 [2024-11-20 07:22:59.475235] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:55.027 [2024-11-20 07:22:59.475242] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:55.027 [2024-11-20 07:22:59.475248] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:55.027 [2024-11-20 07:22:59.487595] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:55.027 [2024-11-20 07:22:59.488006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.027 [2024-11-20 07:22:59.488025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420
00:26:55.027 [2024-11-20 07:22:59.488033] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set
00:26:55.027 [2024-11-20 07:22:59.488207] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor
00:26:55.027 [2024-11-20 07:22:59.488381] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:55.027 [2024-11-20 07:22:59.488390] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:55.027 [2024-11-20 07:22:59.488396] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:55.027 [2024-11-20 07:22:59.488403] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:55.027 [2024-11-20 07:22:59.494647] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:26:55.027 [2024-11-20 07:22:59.500590] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:55.027 [2024-11-20 07:22:59.501028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.027 [2024-11-20 07:22:59.501048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420
00:26:55.027 [2024-11-20 07:22:59.501057] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set
00:26:55.027 [2024-11-20 07:22:59.501233] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor
00:26:55.027 [2024-11-20 07:22:59.501408] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:55.027 [2024-11-20 07:22:59.501417] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:55.027 [2024-11-20 07:22:59.501424] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:55.027 [2024-11-20 07:22:59.501431] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:55.027 [2024-11-20 07:22:59.513667] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:55.027 [2024-11-20 07:22:59.513944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.027 [2024-11-20 07:22:59.513968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420
00:26:55.027 [2024-11-20 07:22:59.513975] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set
00:26:55.027 [2024-11-20 07:22:59.514149] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor
00:26:55.027 [2024-11-20 07:22:59.514323] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:55.027 [2024-11-20 07:22:59.514332] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:55.027 [2024-11-20 07:22:59.514339] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:55.027 [2024-11-20 07:22:59.514347] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:55.027 [2024-11-20 07:22:59.526687] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:55.027 [2024-11-20 07:22:59.527037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.027 [2024-11-20 07:22:59.527055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420
00:26:55.027 [2024-11-20 07:22:59.527063] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set
00:26:55.027 [2024-11-20 07:22:59.527237] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor
00:26:55.027 [2024-11-20 07:22:59.527412] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:55.027 [2024-11-20 07:22:59.527421] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:55.027 [2024-11-20 07:22:59.527428] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:55.027 [2024-11-20 07:22:59.527434] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:55.027 [2024-11-20 07:22:59.536019] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:26:55.027 [2024-11-20 07:22:59.536044] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:26:55.027 [2024-11-20 07:22:59.536051] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:26:55.027 [2024-11-20 07:22:59.536057] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:26:55.027 [2024-11-20 07:22:59.536062] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:26:55.027 [2024-11-20 07:22:59.537335] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:26:55.027 [2024-11-20 07:22:59.537447] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:26:55.027 [2024-11-20 07:22:59.537447] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:26:55.027 [2024-11-20 07:22:59.539868] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:55.027 [2024-11-20 07:22:59.540316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.027 [2024-11-20 07:22:59.540335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420
00:26:55.027 [2024-11-20 07:22:59.540343] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set
00:26:55.027 [2024-11-20 07:22:59.540523] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor
00:26:55.027 [2024-11-20 07:22:59.540707] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:55.027 [2024-11-20 07:22:59.540715] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:55.027 [2024-11-20 07:22:59.540722] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:55.027 [2024-11-20 07:22:59.540729] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:55.027 [2024-11-20 07:22:59.552968] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:55.027 [2024-11-20 07:22:59.553338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.027 [2024-11-20 07:22:59.553356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420
00:26:55.027 [2024-11-20 07:22:59.553365] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set
00:26:55.027 [2024-11-20 07:22:59.553543] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor
00:26:55.027 [2024-11-20 07:22:59.553722] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:55.027 [2024-11-20 07:22:59.553731] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:55.027 [2024-11-20 07:22:59.553738] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:55.027 [2024-11-20 07:22:59.553745] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:55.027 [2024-11-20 07:22:59.566144] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:55.027 [2024-11-20 07:22:59.566534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.027 [2024-11-20 07:22:59.566553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420
00:26:55.027 [2024-11-20 07:22:59.566561] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set
00:26:55.027 [2024-11-20 07:22:59.566739] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor
00:26:55.027 [2024-11-20 07:22:59.566918] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:55.027 [2024-11-20 07:22:59.566927] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:55.027 [2024-11-20 07:22:59.566934] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:55.028 [2024-11-20 07:22:59.566941] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:55.287 [2024-11-20 07:22:59.579374] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:55.287 [2024-11-20 07:22:59.579835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.287 [2024-11-20 07:22:59.579855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420
00:26:55.287 [2024-11-20 07:22:59.579863] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set
00:26:55.287 [2024-11-20 07:22:59.580047] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor
00:26:55.287 [2024-11-20 07:22:59.580227] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:55.287 [2024-11-20 07:22:59.580237] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:55.287 [2024-11-20 07:22:59.580249] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:55.287 [2024-11-20 07:22:59.580256] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:55.287 [2024-11-20 07:22:59.592475] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:55.287 [2024-11-20 07:22:59.592935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.287 [2024-11-20 07:22:59.592958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420
00:26:55.287 [2024-11-20 07:22:59.592967] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set
00:26:55.287 [2024-11-20 07:22:59.593146] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor
00:26:55.287 [2024-11-20 07:22:59.593325] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:55.287 [2024-11-20 07:22:59.593334] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:55.287 [2024-11-20 07:22:59.593341] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:55.287 [2024-11-20 07:22:59.593349] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:55.287 [2024-11-20 07:22:59.605571] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.287 [2024-11-20 07:22:59.606012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.287 [2024-11-20 07:22:59.606031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420 00:26:55.287 [2024-11-20 07:22:59.606040] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set 00:26:55.287 [2024-11-20 07:22:59.606218] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor 00:26:55.287 [2024-11-20 07:22:59.606397] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.287 [2024-11-20 07:22:59.606405] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.287 [2024-11-20 07:22:59.606412] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.287 [2024-11-20 07:22:59.606418] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:55.287 [2024-11-20 07:22:59.618638] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.287 [2024-11-20 07:22:59.619074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.287 [2024-11-20 07:22:59.619092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420 00:26:55.287 [2024-11-20 07:22:59.619100] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set 00:26:55.287 [2024-11-20 07:22:59.619278] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor 00:26:55.287 [2024-11-20 07:22:59.619457] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.287 [2024-11-20 07:22:59.619466] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.287 [2024-11-20 07:22:59.619473] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.287 [2024-11-20 07:22:59.619480] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:55.287 [2024-11-20 07:22:59.631848] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.287 [2024-11-20 07:22:59.632293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.287 [2024-11-20 07:22:59.632310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420 00:26:55.287 [2024-11-20 07:22:59.632317] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set 00:26:55.287 [2024-11-20 07:22:59.632495] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor 00:26:55.287 [2024-11-20 07:22:59.632673] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.287 [2024-11-20 07:22:59.632681] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.287 [2024-11-20 07:22:59.632688] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.287 [2024-11-20 07:22:59.632695] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:55.287 07:22:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:55.287 07:22:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@866 -- # return 0 00:26:55.287 07:22:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:55.287 07:22:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:55.287 07:22:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:55.287 [2024-11-20 07:22:59.644895] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.287 [2024-11-20 07:22:59.645335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.287 [2024-11-20 07:22:59.645352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420 00:26:55.287 [2024-11-20 07:22:59.645360] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set 00:26:55.288 [2024-11-20 07:22:59.645537] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor 00:26:55.288 [2024-11-20 07:22:59.645716] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.288 [2024-11-20 07:22:59.645725] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.288 [2024-11-20 07:22:59.645732] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.288 [2024-11-20 07:22:59.645741] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:55.288 [2024-11-20 07:22:59.657961] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.288 [2024-11-20 07:22:59.658304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.288 [2024-11-20 07:22:59.658321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420 00:26:55.288 [2024-11-20 07:22:59.658329] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set 00:26:55.288 [2024-11-20 07:22:59.658507] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor 00:26:55.288 [2024-11-20 07:22:59.658686] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.288 [2024-11-20 07:22:59.658694] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.288 [2024-11-20 07:22:59.658701] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.288 [2024-11-20 07:22:59.658707] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:55.288 [2024-11-20 07:22:59.671103] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.288 [2024-11-20 07:22:59.671395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.288 [2024-11-20 07:22:59.671411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420 00:26:55.288 [2024-11-20 07:22:59.671418] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set 00:26:55.288 [2024-11-20 07:22:59.671596] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor 00:26:55.288 [2024-11-20 07:22:59.671774] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.288 [2024-11-20 07:22:59.671783] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.288 [2024-11-20 07:22:59.671790] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.288 [2024-11-20 07:22:59.671796] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:55.288 07:22:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:55.288 07:22:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:55.288 07:22:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:55.288 07:22:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:55.288 [2024-11-20 07:22:59.682798] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:55.288 4971.67 IOPS, 19.42 MiB/s [2024-11-20T06:22:59.844Z] [2024-11-20 07:22:59.685500] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.288 [2024-11-20 07:22:59.685864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.288 [2024-11-20 07:22:59.685882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420 00:26:55.288 [2024-11-20 07:22:59.685890] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set 00:26:55.288 [2024-11-20 07:22:59.686072] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor 00:26:55.288 [2024-11-20 07:22:59.686252] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.288 [2024-11-20 07:22:59.686260] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.288 [2024-11-20 07:22:59.686267] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 
00:26:55.288 [2024-11-20 07:22:59.686275] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:55.288 07:22:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:55.288 07:22:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:55.288 07:22:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:55.288 07:22:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:55.288 [2024-11-20 07:22:59.698647] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.288 [2024-11-20 07:22:59.699060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.288 [2024-11-20 07:22:59.699078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420 00:26:55.288 [2024-11-20 07:22:59.699085] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set 00:26:55.288 [2024-11-20 07:22:59.699268] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor 00:26:55.288 [2024-11-20 07:22:59.699445] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.288 [2024-11-20 07:22:59.699453] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.288 [2024-11-20 07:22:59.699460] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.288 [2024-11-20 07:22:59.699466] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:55.288 [2024-11-20 07:22:59.711845] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.288 [2024-11-20 07:22:59.712310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.288 [2024-11-20 07:22:59.712327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420 00:26:55.288 [2024-11-20 07:22:59.712334] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set 00:26:55.288 [2024-11-20 07:22:59.712512] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor 00:26:55.288 [2024-11-20 07:22:59.712690] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.288 [2024-11-20 07:22:59.712698] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.288 [2024-11-20 07:22:59.712705] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.288 [2024-11-20 07:22:59.712711] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:55.288 Malloc0 00:26:55.288 [2024-11-20 07:22:59.724915] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.288 07:22:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:55.288 [2024-11-20 07:22:59.725352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.288 [2024-11-20 07:22:59.725369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420 00:26:55.288 [2024-11-20 07:22:59.725376] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set 00:26:55.288 [2024-11-20 07:22:59.725554] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor 00:26:55.288 07:22:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:55.288 [2024-11-20 07:22:59.725733] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.288 [2024-11-20 07:22:59.725742] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.288 [2024-11-20 07:22:59.725749] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:55.288 [2024-11-20 07:22:59.725756] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:55.288 07:22:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:55.288 07:22:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:55.288 07:22:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:55.288 07:22:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:55.288 07:22:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:55.288 07:22:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:55.289 [2024-11-20 07:22:59.738124] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.289 [2024-11-20 07:22:59.738559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.289 [2024-11-20 07:22:59.738576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14b9500 with addr=10.0.0.2, port=4420 00:26:55.289 [2024-11-20 07:22:59.738583] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b9500 is same with the state(6) to be set 00:26:55.289 [2024-11-20 07:22:59.738761] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b9500 (9): Bad file descriptor 00:26:55.289 [2024-11-20 07:22:59.738939] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:55.289 [2024-11-20 07:22:59.738954] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:55.289 [2024-11-20 07:22:59.738961] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 
00:26:55.289 [2024-11-20 07:22:59.738968] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:55.289 07:22:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:55.289 07:22:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:55.289 07:22:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:55.289 07:22:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:55.289 [2024-11-20 07:22:59.748391] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:55.289 [2024-11-20 07:22:59.751330] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:55.289 07:22:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:55.289 07:22:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 1345000 00:26:55.547 [2024-11-20 07:22:59.855432] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller successful. 
00:26:57.413 5568.86 IOPS, 21.75 MiB/s [2024-11-20T06:23:02.903Z] 6265.50 IOPS, 24.47 MiB/s [2024-11-20T06:23:03.836Z] 6818.56 IOPS, 26.63 MiB/s [2024-11-20T06:23:04.771Z] 7239.50 IOPS, 28.28 MiB/s [2024-11-20T06:23:06.145Z] 7594.82 IOPS, 29.67 MiB/s [2024-11-20T06:23:06.711Z] 7872.33 IOPS, 30.75 MiB/s [2024-11-20T06:23:08.086Z] 8122.23 IOPS, 31.73 MiB/s [2024-11-20T06:23:09.021Z] 8343.14 IOPS, 32.59 MiB/s [2024-11-20T06:23:09.021Z] 8520.87 IOPS, 33.28 MiB/s 00:27:04.465 Latency(us) 00:27:04.465 [2024-11-20T06:23:09.021Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:04.465 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:27:04.465 Verification LBA range: start 0x0 length 0x4000 00:27:04.465 Nvme1n1 : 15.01 8521.83 33.29 11067.63 0.00 6513.84 448.78 16868.40 00:27:04.465 [2024-11-20T06:23:09.021Z] =================================================================================================================== 00:27:04.465 [2024-11-20T06:23:09.021Z] Total : 8521.83 33.29 11067.63 0.00 6513.84 448.78 16868.40 00:27:04.465 07:23:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:27:04.465 07:23:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:04.465 07:23:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:04.465 07:23:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:04.465 07:23:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:04.465 07:23:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:27:04.465 07:23:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:27:04.465 07:23:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:04.465 07:23:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@121 -- # sync 00:27:04.465 07:23:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:04.465 07:23:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e 00:27:04.465 07:23:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:04.465 07:23:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:04.465 rmmod nvme_tcp 00:27:04.465 rmmod nvme_fabrics 00:27:04.465 rmmod nvme_keyring 00:27:04.465 07:23:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:04.465 07:23:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e 00:27:04.465 07:23:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0 00:27:04.465 07:23:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@517 -- # '[' -n 1345935 ']' 00:27:04.465 07:23:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # killprocess 1345935 00:27:04.465 07:23:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@952 -- # '[' -z 1345935 ']' 00:27:04.465 07:23:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # kill -0 1345935 00:27:04.465 07:23:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@957 -- # uname 00:27:04.465 07:23:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:27:04.465 07:23:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1345935 00:27:04.465 07:23:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:27:04.465 07:23:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:27:04.465 07:23:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1345935' 00:27:04.465 killing process with pid 1345935 00:27:04.465 
07:23:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@971 -- # kill 1345935 00:27:04.465 07:23:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@976 -- # wait 1345935 00:27:04.724 07:23:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:04.724 07:23:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:04.724 07:23:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:04.724 07:23:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr 00:27:04.724 07:23:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-save 00:27:04.724 07:23:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:04.724 07:23:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-restore 00:27:04.724 07:23:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:04.724 07:23:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:04.724 07:23:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:04.724 07:23:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:04.724 07:23:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:07.269 07:23:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:07.269 00:27:07.269 real 0m25.908s 00:27:07.269 user 1m0.179s 00:27:07.269 sys 0m6.757s 00:27:07.269 07:23:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:27:07.269 07:23:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:07.269 ************************************ 00:27:07.269 END TEST nvmf_bdevperf 00:27:07.269 
************************************ 00:27:07.269 07:23:11 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:27:07.269 07:23:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:27:07.269 07:23:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:27:07.269 07:23:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.269 ************************************ 00:27:07.269 START TEST nvmf_target_disconnect 00:27:07.269 ************************************ 00:27:07.269 07:23:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:27:07.269 * Looking for test storage... 00:27:07.269 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:07.269 07:23:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:27:07.269 07:23:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1691 -- # lcov --version 00:27:07.269 07:23:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:27:07.269 07:23:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:27:07.269 07:23:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:07.269 07:23:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:07.269 07:23:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:07.269 07:23:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:27:07.269 07:23:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
scripts/common.sh@336 -- # read -ra ver1 00:27:07.269 07:23:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:27:07.269 07:23:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:27:07.269 07:23:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:27:07.269 07:23:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:27:07.269 07:23:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:27:07.269 07:23:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:07.269 07:23:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:27:07.270 07:23:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:27:07.270 07:23:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:07.270 07:23:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:07.270 07:23:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:27:07.270 07:23:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:27:07.270 07:23:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:07.270 07:23:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:27:07.270 07:23:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:27:07.270 07:23:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:27:07.270 07:23:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:27:07.270 07:23:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:07.270 07:23:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:27:07.270 07:23:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:27:07.270 07:23:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:07.270 07:23:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:07.270 07:23:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:27:07.270 07:23:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:07.270 07:23:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:27:07.270 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:07.270 --rc genhtml_branch_coverage=1 00:27:07.270 --rc genhtml_function_coverage=1 00:27:07.270 --rc genhtml_legend=1 00:27:07.270 --rc geninfo_all_blocks=1 00:27:07.270 --rc geninfo_unexecuted_blocks=1 
00:27:07.270 00:27:07.270 ' 00:27:07.270 07:23:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:27:07.270 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:07.270 --rc genhtml_branch_coverage=1 00:27:07.270 --rc genhtml_function_coverage=1 00:27:07.270 --rc genhtml_legend=1 00:27:07.270 --rc geninfo_all_blocks=1 00:27:07.270 --rc geninfo_unexecuted_blocks=1 00:27:07.270 00:27:07.270 ' 00:27:07.270 07:23:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:27:07.270 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:07.270 --rc genhtml_branch_coverage=1 00:27:07.270 --rc genhtml_function_coverage=1 00:27:07.270 --rc genhtml_legend=1 00:27:07.270 --rc geninfo_all_blocks=1 00:27:07.270 --rc geninfo_unexecuted_blocks=1 00:27:07.270 00:27:07.270 ' 00:27:07.270 07:23:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:27:07.270 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:07.270 --rc genhtml_branch_coverage=1 00:27:07.270 --rc genhtml_function_coverage=1 00:27:07.270 --rc genhtml_legend=1 00:27:07.270 --rc geninfo_all_blocks=1 00:27:07.270 --rc geninfo_unexecuted_blocks=1 00:27:07.270 00:27:07.270 ' 00:27:07.270 07:23:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:07.270 07:23:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:27:07.270 07:23:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:07.270 07:23:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:07.270 07:23:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:07.270 07:23:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:27:07.270 07:23:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:07.270 07:23:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:07.270 07:23:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:07.270 07:23:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:07.270 07:23:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:07.270 07:23:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:07.270 07:23:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:27:07.270 07:23:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:27:07.270 07:23:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:07.270 07:23:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:07.270 07:23:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:07.270 07:23:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:07.270 07:23:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:07.270 07:23:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:27:07.270 07:23:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:07.270 07:23:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:07.270 07:23:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:07.270 07:23:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:07.270 07:23:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:07.270 07:23:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:07.270 07:23:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:27:07.271 07:23:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:07.271 07:23:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:27:07.271 07:23:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:07.271 07:23:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:07.271 07:23:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:07.271 07:23:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:07.271 07:23:11 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:07.271 07:23:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:07.271 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:07.271 07:23:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:07.271 07:23:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:07.271 07:23:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:07.271 07:23:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:27:07.271 07:23:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:27:07.271 07:23:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:27:07.271 07:23:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:27:07.271 07:23:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:07.271 07:23:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:07.271 07:23:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:07.271 07:23:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:07.271 07:23:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:07.271 07:23:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:07.271 07:23:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:27:07.271 07:23:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:07.271 07:23:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:07.271 07:23:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:07.271 07:23:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:27:07.271 07:23:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:27:13.843 07:23:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:13.843 07:23:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:27:13.843 07:23:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:13.843 07:23:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:13.843 07:23:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:13.843 07:23:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:13.843 07:23:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:13.843 07:23:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:27:13.843 07:23:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:13.843 07:23:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:27:13.843 07:23:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:27:13.843 07:23:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:27:13.843 07:23:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:27:13.843 
07:23:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:27:13.843 07:23:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:27:13.843 07:23:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:13.843 07:23:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:13.843 07:23:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:13.843 07:23:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:13.843 07:23:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:13.843 07:23:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:13.843 07:23:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:13.843 07:23:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:13.843 07:23:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:13.844 07:23:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:13.844 07:23:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:13.844 07:23:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:13.844 07:23:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:13.844 07:23:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:13.844 07:23:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:13.844 07:23:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:13.844 07:23:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:13.844 07:23:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:13.844 07:23:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:13.844 07:23:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:27:13.844 Found 0000:86:00.0 (0x8086 - 0x159b) 00:27:13.844 07:23:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:13.844 07:23:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:13.844 07:23:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:13.844 07:23:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:13.844 07:23:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:13.844 07:23:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:13.844 07:23:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:27:13.844 Found 0000:86:00.1 (0x8086 - 0x159b) 00:27:13.844 07:23:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:13.844 07:23:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:13.844 07:23:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:27:13.844 07:23:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:13.844 07:23:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:13.844 07:23:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:13.844 07:23:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:13.844 07:23:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:13.844 07:23:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:13.844 07:23:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:13.844 07:23:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:13.844 07:23:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:13.844 07:23:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:13.844 07:23:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:13.844 07:23:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:13.844 07:23:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:27:13.844 Found net devices under 0000:86:00.0: cvl_0_0 00:27:13.844 07:23:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:13.844 07:23:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:13.844 07:23:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:27:13.844 07:23:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:13.844 07:23:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:13.844 07:23:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:13.844 07:23:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:13.844 07:23:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:13.844 07:23:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:27:13.844 Found net devices under 0000:86:00.1: cvl_0_1 00:27:13.844 07:23:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:13.844 07:23:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:13.844 07:23:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:27:13.844 07:23:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:13.844 07:23:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:13.844 07:23:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:13.844 07:23:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:13.844 07:23:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:13.844 07:23:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:13.844 07:23:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:13.844 07:23:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:13.844 07:23:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:13.844 07:23:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:13.844 07:23:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:13.844 07:23:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:13.844 07:23:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:13.844 07:23:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:13.844 07:23:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:13.844 07:23:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:13.844 07:23:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:13.844 07:23:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:13.844 07:23:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:13.844 07:23:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:13.844 07:23:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:13.844 07:23:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:13.844 07:23:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:13.844 07:23:17 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:13.844 07:23:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:13.844 07:23:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:13.844 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:13.844 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.435 ms 00:27:13.844 00:27:13.844 --- 10.0.0.2 ping statistics --- 00:27:13.844 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:13.844 rtt min/avg/max/mdev = 0.435/0.435/0.435/0.000 ms 00:27:13.844 07:23:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:13.844 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:13.844 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.219 ms 00:27:13.844 00:27:13.844 --- 10.0.0.1 ping statistics --- 00:27:13.844 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:13.844 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:27:13.844 07:23:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:13.844 07:23:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # return 0 00:27:13.844 07:23:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:13.844 07:23:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:13.844 07:23:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:13.844 07:23:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:13.844 07:23:17 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:13.845 07:23:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:13.845 07:23:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:13.845 07:23:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:27:13.845 07:23:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:27:13.845 07:23:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1109 -- # xtrace_disable 00:27:13.845 07:23:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:27:13.845 ************************************ 00:27:13.845 START TEST nvmf_target_disconnect_tc1 00:27:13.845 ************************************ 00:27:13.845 07:23:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1127 -- # nvmf_target_disconnect_tc1 00:27:13.845 07:23:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:13.845 07:23:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # local es=0 00:27:13.845 07:23:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:13.845 07:23:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- 
common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:27:13.845 07:23:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:13.845 07:23:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:27:13.845 07:23:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:13.845 07:23:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:27:13.845 07:23:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:13.845 07:23:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:27:13.845 07:23:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:27:13.845 07:23:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:13.845 [2024-11-20 07:23:17.629536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:13.845 [2024-11-20 07:23:17.629580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b84ab0 with 
addr=10.0.0.2, port=4420 00:27:13.845 [2024-11-20 07:23:17.629602] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:27:13.845 [2024-11-20 07:23:17.629611] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:27:13.845 [2024-11-20 07:23:17.629617] nvme.c: 939:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:27:13.845 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:27:13.845 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:27:13.845 Initializing NVMe Controllers 00:27:13.845 07:23:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # es=1 00:27:13.845 07:23:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:13.845 07:23:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:13.845 07:23:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:13.845 00:27:13.845 real 0m0.117s 00:27:13.845 user 0m0.056s 00:27:13.845 sys 0m0.061s 00:27:13.845 07:23:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:27:13.845 07:23:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:13.845 ************************************ 00:27:13.845 END TEST nvmf_target_disconnect_tc1 00:27:13.845 ************************************ 00:27:13.845 07:23:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:27:13.845 07:23:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:27:13.845 07:23:17 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1109 -- # xtrace_disable 00:27:13.845 07:23:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:27:13.845 ************************************ 00:27:13.845 START TEST nvmf_target_disconnect_tc2 00:27:13.845 ************************************ 00:27:13.845 07:23:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1127 -- # nvmf_target_disconnect_tc2 00:27:13.845 07:23:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:27:13.845 07:23:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:27:13.845 07:23:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:13.845 07:23:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:13.845 07:23:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:13.845 07:23:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=1351002 00:27:13.845 07:23:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 1351002 00:27:13.845 07:23:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:27:13.845 07:23:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # '[' -z 1351002 ']' 00:27:13.845 07:23:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- 
common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:13.845 07:23:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # local max_retries=100 00:27:13.845 07:23:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:13.845 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:13.845 07:23:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # xtrace_disable 00:27:13.845 07:23:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:13.845 [2024-11-20 07:23:17.769446] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 00:27:13.845 [2024-11-20 07:23:17.769486] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:13.845 [2024-11-20 07:23:17.850074] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:13.845 [2024-11-20 07:23:17.892266] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:13.845 [2024-11-20 07:23:17.892306] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:13.845 [2024-11-20 07:23:17.892313] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:13.845 [2024-11-20 07:23:17.892319] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:13.845 [2024-11-20 07:23:17.892324] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:13.845 [2024-11-20 07:23:17.894004] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5
00:27:13.845 [2024-11-20 07:23:17.894112] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6
00:27:13.845 [2024-11-20 07:23:17.894226] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7
00:27:13.845 [2024-11-20 07:23:17.894227] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:27:13.845 07:23:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:27:13.845 07:23:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@866 -- # return 0
00:27:13.845 07:23:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:27:13.845 07:23:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable
00:27:13.846 07:23:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:13.846 07:23:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:27:13.846 07:23:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:27:13.846 07:23:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:13.846 07:23:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:13.846 Malloc0
00:27:13.846 07:23:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:13.846 07:23:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:27:13.846 07:23:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:13.846 07:23:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:13.846 [2024-11-20 07:23:18.074732] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:27:13.846 07:23:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:13.846 07:23:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:27:13.846 07:23:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:13.846 07:23:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:13.846 07:23:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:13.846 07:23:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:27:13.846 07:23:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:13.846 07:23:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:13.846 07:23:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:13.846 07:23:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:27:13.846 07:23:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:13.846 07:23:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:13.846 [2024-11-20 07:23:18.106976] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:27:13.846 07:23:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:13.846 07:23:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:27:13.846 07:23:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:13.846 07:23:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:13.846 07:23:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:13.846 07:23:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=1351122
00:27:13.846 07:23:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2
00:27:13.846 07:23:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:27:15.756 07:23:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 1351002
00:27:15.756 07:23:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2
00:27:15.756 Read completed with error (sct=0, sc=8)
00:27:15.756 starting I/O failed
00:27:15.756 Write completed with error (sct=0, sc=8)
00:27:15.756 starting I/O failed
[... identical "Read/Write completed with error (sct=0, sc=8)" / "starting I/O failed" lines repeat for the remaining outstanding I/Os on each qpair ...]
00:27:15.756 [2024-11-20 07:23:20.135159] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:27:15.757 [2024-11-20 07:23:20.135357] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:27:15.757 [2024-11-20 07:23:20.135556] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:15.758 [2024-11-20 07:23:20.135747] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:27:15.758 [2024-11-20 07:23:20.136007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.758 [2024-11-20 07:23:20.136030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:15.758 qpair failed and we were unable to recover it.
[... the "connect() failed, errno = 111" / "sock connection error of tqpair=0x7f0e38000b90" / "qpair failed and we were unable to recover it." sequence repeats as reconnect attempts to 10.0.0.2:4420 continue to be refused ...]
00:27:15.760 [2024-11-20 07:23:20.149369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.760 [2024-11-20 07:23:20.149381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:15.760 qpair failed and we were unable to recover it. 00:27:15.760 [2024-11-20 07:23:20.149539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.760 [2024-11-20 07:23:20.149552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:15.760 qpair failed and we were unable to recover it. 00:27:15.760 [2024-11-20 07:23:20.149683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.760 [2024-11-20 07:23:20.149695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:15.760 qpair failed and we were unable to recover it. 00:27:15.760 [2024-11-20 07:23:20.149920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.760 [2024-11-20 07:23:20.149932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:15.760 qpair failed and we were unable to recover it. 00:27:15.760 [2024-11-20 07:23:20.150019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.760 [2024-11-20 07:23:20.150032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:15.760 qpair failed and we were unable to recover it. 
00:27:15.760 [2024-11-20 07:23:20.150253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.760 [2024-11-20 07:23:20.150285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:15.760 qpair failed and we were unable to recover it. 00:27:15.760 [2024-11-20 07:23:20.150574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.760 [2024-11-20 07:23:20.150606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:15.760 qpair failed and we were unable to recover it. 00:27:15.760 [2024-11-20 07:23:20.150795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.760 [2024-11-20 07:23:20.150827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:15.760 qpair failed and we were unable to recover it. 00:27:15.760 [2024-11-20 07:23:20.151036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.760 [2024-11-20 07:23:20.151069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:15.760 qpair failed and we were unable to recover it. 00:27:15.760 [2024-11-20 07:23:20.151280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.760 [2024-11-20 07:23:20.151293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:15.760 qpair failed and we were unable to recover it. 
00:27:15.760 [2024-11-20 07:23:20.151490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.760 [2024-11-20 07:23:20.151502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:15.760 qpair failed and we were unable to recover it. 00:27:15.760 [2024-11-20 07:23:20.151724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.760 [2024-11-20 07:23:20.151737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:15.760 qpair failed and we were unable to recover it. 00:27:15.760 [2024-11-20 07:23:20.151876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.760 [2024-11-20 07:23:20.151889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:15.760 qpair failed and we were unable to recover it. 00:27:15.760 [2024-11-20 07:23:20.152133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.760 [2024-11-20 07:23:20.152146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:15.760 qpair failed and we were unable to recover it. 00:27:15.760 [2024-11-20 07:23:20.152317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.760 [2024-11-20 07:23:20.152329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:15.760 qpair failed and we were unable to recover it. 
00:27:15.760 [2024-11-20 07:23:20.152481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.760 [2024-11-20 07:23:20.152508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:15.760 qpair failed and we were unable to recover it. 00:27:15.760 [2024-11-20 07:23:20.152811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.760 [2024-11-20 07:23:20.152843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:15.760 qpair failed and we were unable to recover it. 00:27:15.760 [2024-11-20 07:23:20.153079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.760 [2024-11-20 07:23:20.153112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:15.760 qpair failed and we were unable to recover it. 00:27:15.760 [2024-11-20 07:23:20.153230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.760 [2024-11-20 07:23:20.153275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:15.760 qpair failed and we were unable to recover it. 00:27:15.760 [2024-11-20 07:23:20.153484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.760 [2024-11-20 07:23:20.153516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:15.760 qpair failed and we were unable to recover it. 
00:27:15.760 [2024-11-20 07:23:20.153701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.760 [2024-11-20 07:23:20.153732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:15.760 qpair failed and we were unable to recover it. 00:27:15.760 [2024-11-20 07:23:20.153996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.760 [2024-11-20 07:23:20.154029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:15.760 qpair failed and we were unable to recover it. 00:27:15.760 [2024-11-20 07:23:20.154155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.760 [2024-11-20 07:23:20.154187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:15.760 qpair failed and we were unable to recover it. 00:27:15.760 [2024-11-20 07:23:20.154501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.760 [2024-11-20 07:23:20.154532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:15.760 qpair failed and we were unable to recover it. 00:27:15.760 [2024-11-20 07:23:20.154797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.760 [2024-11-20 07:23:20.154829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:15.760 qpair failed and we were unable to recover it. 
00:27:15.760 [2024-11-20 07:23:20.155102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.760 [2024-11-20 07:23:20.155135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:15.760 qpair failed and we were unable to recover it. 00:27:15.760 [2024-11-20 07:23:20.155417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.760 [2024-11-20 07:23:20.155448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:15.760 qpair failed and we were unable to recover it. 00:27:15.760 [2024-11-20 07:23:20.155727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.761 [2024-11-20 07:23:20.155759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:15.761 qpair failed and we were unable to recover it. 00:27:15.761 [2024-11-20 07:23:20.155992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.761 [2024-11-20 07:23:20.156025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:15.761 qpair failed and we were unable to recover it. 00:27:15.761 [2024-11-20 07:23:20.156254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.761 [2024-11-20 07:23:20.156287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:15.761 qpair failed and we were unable to recover it. 
00:27:15.761 [2024-11-20 07:23:20.156461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.761 [2024-11-20 07:23:20.156492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:15.761 qpair failed and we were unable to recover it. 00:27:15.761 [2024-11-20 07:23:20.156739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.761 [2024-11-20 07:23:20.156771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:15.761 qpair failed and we were unable to recover it. 00:27:15.761 [2024-11-20 07:23:20.157018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.761 [2024-11-20 07:23:20.157051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:15.761 qpair failed and we were unable to recover it. 00:27:15.761 [2024-11-20 07:23:20.157243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.761 [2024-11-20 07:23:20.157275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:15.761 qpair failed and we were unable to recover it. 00:27:15.761 [2024-11-20 07:23:20.157540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.761 [2024-11-20 07:23:20.157571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:15.761 qpair failed and we were unable to recover it. 
00:27:15.761 [2024-11-20 07:23:20.157808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.761 [2024-11-20 07:23:20.157839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:15.761 qpair failed and we were unable to recover it. 00:27:15.761 [2024-11-20 07:23:20.158044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.761 [2024-11-20 07:23:20.158077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:15.761 qpair failed and we were unable to recover it. 00:27:15.761 [2024-11-20 07:23:20.158211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.761 [2024-11-20 07:23:20.158242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:15.761 qpair failed and we were unable to recover it. 00:27:15.761 [2024-11-20 07:23:20.158504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.761 [2024-11-20 07:23:20.158535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:15.761 qpair failed and we were unable to recover it. 00:27:15.761 [2024-11-20 07:23:20.158797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.761 [2024-11-20 07:23:20.158829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:15.761 qpair failed and we were unable to recover it. 
00:27:15.761 [2024-11-20 07:23:20.159092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.761 [2024-11-20 07:23:20.159124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:15.761 qpair failed and we were unable to recover it. 00:27:15.761 [2024-11-20 07:23:20.159332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.761 [2024-11-20 07:23:20.159363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:15.761 qpair failed and we were unable to recover it. 00:27:15.761 [2024-11-20 07:23:20.159535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.761 [2024-11-20 07:23:20.159567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:15.761 qpair failed and we were unable to recover it. 00:27:15.761 [2024-11-20 07:23:20.159835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.761 [2024-11-20 07:23:20.159866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:15.761 qpair failed and we were unable to recover it. 00:27:15.761 [2024-11-20 07:23:20.160159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.761 [2024-11-20 07:23:20.160192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:15.761 qpair failed and we were unable to recover it. 
00:27:15.761 [2024-11-20 07:23:20.160461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.761 [2024-11-20 07:23:20.160493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:15.761 qpair failed and we were unable to recover it. 00:27:15.761 [2024-11-20 07:23:20.160784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.761 [2024-11-20 07:23:20.160815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:15.761 qpair failed and we were unable to recover it. 00:27:15.761 [2024-11-20 07:23:20.161109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.761 [2024-11-20 07:23:20.161142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:15.761 qpair failed and we were unable to recover it. 00:27:15.761 [2024-11-20 07:23:20.161344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.761 [2024-11-20 07:23:20.161376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:15.761 qpair failed and we were unable to recover it. 00:27:15.761 [2024-11-20 07:23:20.161641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.761 [2024-11-20 07:23:20.161672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:15.761 qpair failed and we were unable to recover it. 
00:27:15.761 [2024-11-20 07:23:20.161971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.761 [2024-11-20 07:23:20.162004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:15.761 qpair failed and we were unable to recover it. 00:27:15.761 [2024-11-20 07:23:20.162254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.761 [2024-11-20 07:23:20.162287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:15.761 qpair failed and we were unable to recover it. 00:27:15.761 [2024-11-20 07:23:20.162553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.761 [2024-11-20 07:23:20.162585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:15.761 qpair failed and we were unable to recover it. 00:27:15.761 [2024-11-20 07:23:20.162776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.761 [2024-11-20 07:23:20.162808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:15.761 qpair failed and we were unable to recover it. 00:27:15.761 [2024-11-20 07:23:20.162994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.761 [2024-11-20 07:23:20.163027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:15.761 qpair failed and we were unable to recover it. 
00:27:15.761 [2024-11-20 07:23:20.163225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.761 [2024-11-20 07:23:20.163257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:15.761 qpair failed and we were unable to recover it. 00:27:15.761 [2024-11-20 07:23:20.163465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.761 [2024-11-20 07:23:20.163497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:15.761 qpair failed and we were unable to recover it. 00:27:15.761 [2024-11-20 07:23:20.163696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.761 [2024-11-20 07:23:20.163728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:15.761 qpair failed and we were unable to recover it. 00:27:15.761 [2024-11-20 07:23:20.163927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.762 [2024-11-20 07:23:20.163974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:15.762 qpair failed and we were unable to recover it. 00:27:15.762 [2024-11-20 07:23:20.164235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.762 [2024-11-20 07:23:20.164267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:15.762 qpair failed and we were unable to recover it. 
00:27:15.762 [2024-11-20 07:23:20.164571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.762 [2024-11-20 07:23:20.164603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:15.762 qpair failed and we were unable to recover it. 00:27:15.762 [2024-11-20 07:23:20.164864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.762 [2024-11-20 07:23:20.164896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:15.762 qpair failed and we were unable to recover it. 00:27:15.762 [2024-11-20 07:23:20.165197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.762 [2024-11-20 07:23:20.165231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:15.762 qpair failed and we were unable to recover it. 00:27:15.762 [2024-11-20 07:23:20.165453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.762 [2024-11-20 07:23:20.165486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:15.762 qpair failed and we were unable to recover it. 00:27:15.762 [2024-11-20 07:23:20.165673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.762 [2024-11-20 07:23:20.165705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:15.762 qpair failed and we were unable to recover it. 
00:27:15.762 [2024-11-20 07:23:20.165943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.762 [2024-11-20 07:23:20.165986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:15.762 qpair failed and we were unable to recover it. 00:27:15.762 [2024-11-20 07:23:20.166226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.762 [2024-11-20 07:23:20.166258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:15.762 qpair failed and we were unable to recover it. 00:27:15.762 [2024-11-20 07:23:20.166519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.762 [2024-11-20 07:23:20.166551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:15.762 qpair failed and we were unable to recover it. 00:27:15.762 [2024-11-20 07:23:20.166768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.762 [2024-11-20 07:23:20.166800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:15.762 qpair failed and we were unable to recover it. 00:27:15.762 [2024-11-20 07:23:20.167033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.762 [2024-11-20 07:23:20.167066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:15.762 qpair failed and we were unable to recover it. 
00:27:15.762 [2024-11-20 07:23:20.167262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.762 [2024-11-20 07:23:20.167293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:15.762 qpair failed and we were unable to recover it. 00:27:15.762 [2024-11-20 07:23:20.167435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.762 [2024-11-20 07:23:20.167465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:15.762 qpair failed and we were unable to recover it. 00:27:15.762 [2024-11-20 07:23:20.167643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.762 [2024-11-20 07:23:20.167673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:15.762 qpair failed and we were unable to recover it. 00:27:15.762 [2024-11-20 07:23:20.167880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.762 [2024-11-20 07:23:20.167910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:15.762 qpair failed and we were unable to recover it. 00:27:15.762 [2024-11-20 07:23:20.168176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.762 [2024-11-20 07:23:20.168208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:15.762 qpair failed and we were unable to recover it. 
00:27:15.762 [2024-11-20 07:23:20.168419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.762 [2024-11-20 07:23:20.168450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:15.762 qpair failed and we were unable to recover it.
[... identical connect() failures (errno = 111) against tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 repeated from 07:23:20.168689 through 07:23:20.191913; duplicate log entries elided ...]
00:27:15.765 [2024-11-20 07:23:20.192112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.765 [2024-11-20 07:23:20.192204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:15.765 qpair failed and we were unable to recover it.
[... identical connect() failures (errno = 111) against tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 repeated from 07:23:20.192532 through 07:23:20.198477; duplicate log entries elided ...]
00:27:15.766 [2024-11-20 07:23:20.198583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.766 [2024-11-20 07:23:20.198613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:15.766 qpair failed and we were unable to recover it. 00:27:15.766 [2024-11-20 07:23:20.198881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.766 [2024-11-20 07:23:20.198912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:15.766 qpair failed and we were unable to recover it. 00:27:15.766 [2024-11-20 07:23:20.199209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.766 [2024-11-20 07:23:20.199242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:15.766 qpair failed and we were unable to recover it. 00:27:15.766 [2024-11-20 07:23:20.199379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.766 [2024-11-20 07:23:20.199409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:15.766 qpair failed and we were unable to recover it. 00:27:15.766 [2024-11-20 07:23:20.199599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.766 [2024-11-20 07:23:20.199630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:15.766 qpair failed and we were unable to recover it. 
00:27:15.766 [2024-11-20 07:23:20.199971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.766 [2024-11-20 07:23:20.200006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:15.766 qpair failed and we were unable to recover it. 00:27:15.766 [2024-11-20 07:23:20.200224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.766 [2024-11-20 07:23:20.200257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:15.766 qpair failed and we were unable to recover it. 00:27:15.766 [2024-11-20 07:23:20.200544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.766 [2024-11-20 07:23:20.200575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:15.766 qpair failed and we were unable to recover it. 00:27:15.766 [2024-11-20 07:23:20.200765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.766 [2024-11-20 07:23:20.200796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:15.766 qpair failed and we were unable to recover it. 00:27:15.766 [2024-11-20 07:23:20.201012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.766 [2024-11-20 07:23:20.201047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:15.766 qpair failed and we were unable to recover it. 
00:27:15.766 [2024-11-20 07:23:20.201311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.766 [2024-11-20 07:23:20.201342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:15.766 qpair failed and we were unable to recover it. 00:27:15.766 [2024-11-20 07:23:20.201656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.766 [2024-11-20 07:23:20.201686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:15.766 qpair failed and we were unable to recover it. 00:27:15.766 [2024-11-20 07:23:20.201985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.766 [2024-11-20 07:23:20.202018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:15.766 qpair failed and we were unable to recover it. 00:27:15.766 [2024-11-20 07:23:20.202275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.766 [2024-11-20 07:23:20.202307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:15.766 qpair failed and we were unable to recover it. 00:27:15.766 [2024-11-20 07:23:20.202501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.766 [2024-11-20 07:23:20.202532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:15.766 qpair failed and we were unable to recover it. 
00:27:15.766 [2024-11-20 07:23:20.202715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.766 [2024-11-20 07:23:20.202758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:15.766 qpair failed and we were unable to recover it. 00:27:15.766 [2024-11-20 07:23:20.202984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.766 [2024-11-20 07:23:20.203017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:15.766 qpair failed and we were unable to recover it. 00:27:15.766 [2024-11-20 07:23:20.203224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.766 [2024-11-20 07:23:20.203257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:15.766 qpair failed and we were unable to recover it. 00:27:15.766 [2024-11-20 07:23:20.203530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.766 [2024-11-20 07:23:20.203561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:15.766 qpair failed and we were unable to recover it. 00:27:15.766 [2024-11-20 07:23:20.203755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.766 [2024-11-20 07:23:20.203787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:15.766 qpair failed and we were unable to recover it. 
00:27:15.766 [2024-11-20 07:23:20.204035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.766 [2024-11-20 07:23:20.204069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:15.767 qpair failed and we were unable to recover it. 00:27:15.767 [2024-11-20 07:23:20.204337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.767 [2024-11-20 07:23:20.204368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:15.767 qpair failed and we were unable to recover it. 00:27:15.767 [2024-11-20 07:23:20.204661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.767 [2024-11-20 07:23:20.204692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:15.767 qpair failed and we were unable to recover it. 00:27:15.767 [2024-11-20 07:23:20.204933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.767 [2024-11-20 07:23:20.204973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:15.767 qpair failed and we were unable to recover it. 00:27:15.767 [2024-11-20 07:23:20.205110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.767 [2024-11-20 07:23:20.205141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:15.767 qpair failed and we were unable to recover it. 
00:27:15.767 [2024-11-20 07:23:20.205335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.767 [2024-11-20 07:23:20.205367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:15.767 qpair failed and we were unable to recover it. 00:27:15.767 [2024-11-20 07:23:20.205502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.767 [2024-11-20 07:23:20.205533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:15.767 qpair failed and we were unable to recover it. 00:27:15.767 [2024-11-20 07:23:20.205751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.767 [2024-11-20 07:23:20.205783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:15.767 qpair failed and we were unable to recover it. 00:27:15.767 [2024-11-20 07:23:20.205980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.767 [2024-11-20 07:23:20.206018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:15.767 qpair failed and we were unable to recover it. 00:27:15.767 [2024-11-20 07:23:20.206152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.767 [2024-11-20 07:23:20.206182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:15.767 qpair failed and we were unable to recover it. 
00:27:15.767 [2024-11-20 07:23:20.206378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.767 [2024-11-20 07:23:20.206410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:15.767 qpair failed and we were unable to recover it. 00:27:15.767 [2024-11-20 07:23:20.206680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.767 [2024-11-20 07:23:20.206711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:15.767 qpair failed and we were unable to recover it. 00:27:15.767 [2024-11-20 07:23:20.206963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.767 [2024-11-20 07:23:20.206995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:15.767 qpair failed and we were unable to recover it. 00:27:15.767 [2024-11-20 07:23:20.207290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.767 [2024-11-20 07:23:20.207323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:15.767 qpair failed and we were unable to recover it. 00:27:15.767 [2024-11-20 07:23:20.207654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.767 [2024-11-20 07:23:20.207685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:15.767 qpair failed and we were unable to recover it. 
00:27:15.767 [2024-11-20 07:23:20.207873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.767 [2024-11-20 07:23:20.207905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:15.767 qpair failed and we were unable to recover it. 00:27:15.767 [2024-11-20 07:23:20.208129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.767 [2024-11-20 07:23:20.208161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:15.767 qpair failed and we were unable to recover it. 00:27:15.767 [2024-11-20 07:23:20.208302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.767 [2024-11-20 07:23:20.208334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:15.767 qpair failed and we were unable to recover it. 00:27:15.767 [2024-11-20 07:23:20.208582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.767 [2024-11-20 07:23:20.208613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:15.767 qpair failed and we were unable to recover it. 00:27:15.767 [2024-11-20 07:23:20.208794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.767 [2024-11-20 07:23:20.208826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:15.767 qpair failed and we were unable to recover it. 
00:27:15.767 [2024-11-20 07:23:20.209053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.767 [2024-11-20 07:23:20.209085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:15.767 qpair failed and we were unable to recover it. 00:27:15.767 [2024-11-20 07:23:20.209337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.767 [2024-11-20 07:23:20.209370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:15.767 qpair failed and we were unable to recover it. 00:27:15.767 [2024-11-20 07:23:20.209638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.767 [2024-11-20 07:23:20.209670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:15.767 qpair failed and we were unable to recover it. 00:27:15.767 [2024-11-20 07:23:20.209963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.767 [2024-11-20 07:23:20.209996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:15.767 qpair failed and we were unable to recover it. 00:27:15.767 [2024-11-20 07:23:20.210265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.767 [2024-11-20 07:23:20.210296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:15.767 qpair failed and we were unable to recover it. 
00:27:15.767 [2024-11-20 07:23:20.210492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.767 [2024-11-20 07:23:20.210523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:15.767 qpair failed and we were unable to recover it. 00:27:15.767 [2024-11-20 07:23:20.210781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.767 [2024-11-20 07:23:20.210813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:15.767 qpair failed and we were unable to recover it. 00:27:15.767 [2024-11-20 07:23:20.211009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.767 [2024-11-20 07:23:20.211042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:15.767 qpair failed and we were unable to recover it. 00:27:15.768 [2024-11-20 07:23:20.211309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.768 [2024-11-20 07:23:20.211341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:15.768 qpair failed and we were unable to recover it. 00:27:15.768 [2024-11-20 07:23:20.211532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.768 [2024-11-20 07:23:20.211563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:15.768 qpair failed and we were unable to recover it. 
00:27:15.768 [2024-11-20 07:23:20.211828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.768 [2024-11-20 07:23:20.211860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:15.768 qpair failed and we were unable to recover it. 00:27:15.768 [2024-11-20 07:23:20.212070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.768 [2024-11-20 07:23:20.212103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:15.768 qpair failed and we were unable to recover it. 00:27:15.768 [2024-11-20 07:23:20.212244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.768 [2024-11-20 07:23:20.212277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:15.768 qpair failed and we were unable to recover it. 00:27:15.768 [2024-11-20 07:23:20.212452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.768 [2024-11-20 07:23:20.212483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:15.768 qpair failed and we were unable to recover it. 00:27:15.768 [2024-11-20 07:23:20.212683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.768 [2024-11-20 07:23:20.212714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:15.768 qpair failed and we were unable to recover it. 
00:27:15.768 [2024-11-20 07:23:20.212903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.768 [2024-11-20 07:23:20.212938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:15.768 qpair failed and we were unable to recover it. 00:27:15.768 [2024-11-20 07:23:20.213137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.768 [2024-11-20 07:23:20.213171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:15.768 qpair failed and we were unable to recover it. 00:27:15.768 [2024-11-20 07:23:20.213451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.768 [2024-11-20 07:23:20.213487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:15.768 qpair failed and we were unable to recover it. 00:27:15.768 [2024-11-20 07:23:20.213677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.768 [2024-11-20 07:23:20.213708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:15.768 qpair failed and we were unable to recover it. 00:27:15.768 [2024-11-20 07:23:20.213969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.768 [2024-11-20 07:23:20.214002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:15.768 qpair failed and we were unable to recover it. 
00:27:15.768 [2024-11-20 07:23:20.214130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.768 [2024-11-20 07:23:20.214161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:15.768 qpair failed and we were unable to recover it. 00:27:15.768 [2024-11-20 07:23:20.214378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.768 [2024-11-20 07:23:20.214411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:15.768 qpair failed and we were unable to recover it. 00:27:15.768 [2024-11-20 07:23:20.214679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.768 [2024-11-20 07:23:20.214712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:15.768 qpair failed and we were unable to recover it. 00:27:15.768 [2024-11-20 07:23:20.214965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.768 [2024-11-20 07:23:20.214999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:15.768 qpair failed and we were unable to recover it. 00:27:15.768 [2024-11-20 07:23:20.215203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.768 [2024-11-20 07:23:20.215235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:15.768 qpair failed and we were unable to recover it. 
00:27:15.768 [2024-11-20 07:23:20.215431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.768 [2024-11-20 07:23:20.215463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:15.768 qpair failed and we were unable to recover it. 00:27:15.768 [2024-11-20 07:23:20.215769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.768 [2024-11-20 07:23:20.215799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:15.768 qpair failed and we were unable to recover it. 00:27:15.768 [2024-11-20 07:23:20.216076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.768 [2024-11-20 07:23:20.216109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:15.768 qpair failed and we were unable to recover it. 00:27:15.768 [2024-11-20 07:23:20.216295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.768 [2024-11-20 07:23:20.216334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:15.768 qpair failed and we were unable to recover it. 00:27:15.768 [2024-11-20 07:23:20.216528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.768 [2024-11-20 07:23:20.216560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:15.768 qpair failed and we were unable to recover it. 
00:27:15.768 [2024-11-20 07:23:20.216758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.768 [2024-11-20 07:23:20.216789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:15.768 qpair failed and we were unable to recover it. 00:27:15.768 [2024-11-20 07:23:20.217081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.768 [2024-11-20 07:23:20.217114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:15.768 qpair failed and we were unable to recover it. 00:27:15.768 [2024-11-20 07:23:20.217326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.768 [2024-11-20 07:23:20.217358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:15.768 qpair failed and we were unable to recover it. 00:27:15.768 [2024-11-20 07:23:20.217576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.768 [2024-11-20 07:23:20.217607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:15.768 qpair failed and we were unable to recover it. 00:27:15.768 [2024-11-20 07:23:20.217792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.768 [2024-11-20 07:23:20.217824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:15.768 qpair failed and we were unable to recover it. 
00:27:15.768 [2024-11-20 07:23:20.218067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.768 [2024-11-20 07:23:20.218100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:15.768 qpair failed and we were unable to recover it.
[... the same connect() failed, errno = 111 / qpair failure pair for tqpair=0x7f0e44000b90 (addr=10.0.0.2, port=4420) repeats continuously from 07:23:20.218 through 07:23:20.248; identical repeats elided ...]
00:27:15.772 [2024-11-20 07:23:20.248335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.772 [2024-11-20 07:23:20.248367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:15.772 qpair failed and we were unable to recover it. 00:27:15.772 [2024-11-20 07:23:20.248650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.772 [2024-11-20 07:23:20.248683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:15.772 qpair failed and we were unable to recover it. 00:27:15.772 [2024-11-20 07:23:20.248969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.772 [2024-11-20 07:23:20.249002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:15.772 qpair failed and we were unable to recover it. 00:27:15.772 [2024-11-20 07:23:20.249274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.772 [2024-11-20 07:23:20.249305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:15.772 qpair failed and we were unable to recover it. 00:27:15.772 [2024-11-20 07:23:20.249587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.772 [2024-11-20 07:23:20.249620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:15.772 qpair failed and we were unable to recover it. 
00:27:15.772 [2024-11-20 07:23:20.249902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.772 [2024-11-20 07:23:20.249933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:15.772 qpair failed and we were unable to recover it. 00:27:15.772 [2024-11-20 07:23:20.250194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.772 [2024-11-20 07:23:20.250227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:15.772 qpair failed and we were unable to recover it. 00:27:15.772 [2024-11-20 07:23:20.250521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.772 [2024-11-20 07:23:20.250553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:15.772 qpair failed and we were unable to recover it. 00:27:15.772 [2024-11-20 07:23:20.250753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.772 [2024-11-20 07:23:20.250785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:15.772 qpair failed and we were unable to recover it. 00:27:15.772 [2024-11-20 07:23:20.251080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.772 [2024-11-20 07:23:20.251113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:15.772 qpair failed and we were unable to recover it. 
00:27:15.772 [2024-11-20 07:23:20.251370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.772 [2024-11-20 07:23:20.251403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:15.772 qpair failed and we were unable to recover it. 00:27:15.772 [2024-11-20 07:23:20.251707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.772 [2024-11-20 07:23:20.251739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:15.772 qpair failed and we were unable to recover it. 00:27:15.772 [2024-11-20 07:23:20.252005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.772 [2024-11-20 07:23:20.252038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:15.772 qpair failed and we were unable to recover it. 00:27:15.772 [2024-11-20 07:23:20.252223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.772 [2024-11-20 07:23:20.252255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:15.772 qpair failed and we were unable to recover it. 00:27:15.772 [2024-11-20 07:23:20.252404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.772 [2024-11-20 07:23:20.252436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:15.772 qpair failed and we were unable to recover it. 
00:27:15.773 [2024-11-20 07:23:20.252628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.773 [2024-11-20 07:23:20.252661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:15.773 qpair failed and we were unable to recover it. 00:27:15.773 [2024-11-20 07:23:20.252930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.773 [2024-11-20 07:23:20.252971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:15.773 qpair failed and we were unable to recover it. 00:27:15.773 [2024-11-20 07:23:20.253218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.773 [2024-11-20 07:23:20.253251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:15.773 qpair failed and we were unable to recover it. 00:27:15.773 [2024-11-20 07:23:20.253374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.773 [2024-11-20 07:23:20.253413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:15.773 qpair failed and we were unable to recover it. 00:27:15.773 [2024-11-20 07:23:20.253671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.773 [2024-11-20 07:23:20.253702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:15.773 qpair failed and we were unable to recover it. 
00:27:15.773 [2024-11-20 07:23:20.253900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.773 [2024-11-20 07:23:20.253932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:15.773 qpair failed and we were unable to recover it. 00:27:15.773 [2024-11-20 07:23:20.254121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.773 [2024-11-20 07:23:20.254154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:15.773 qpair failed and we were unable to recover it. 00:27:15.773 [2024-11-20 07:23:20.254427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.773 [2024-11-20 07:23:20.254459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:15.773 qpair failed and we were unable to recover it. 00:27:15.773 [2024-11-20 07:23:20.254730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.773 [2024-11-20 07:23:20.254763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:15.773 qpair failed and we were unable to recover it. 00:27:15.773 [2024-11-20 07:23:20.254981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.773 [2024-11-20 07:23:20.255014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:15.773 qpair failed and we were unable to recover it. 
00:27:15.773 [2024-11-20 07:23:20.255288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.773 [2024-11-20 07:23:20.255326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:15.773 qpair failed and we were unable to recover it. 00:27:15.773 [2024-11-20 07:23:20.255606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.773 [2024-11-20 07:23:20.255638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:15.773 qpair failed and we were unable to recover it. 00:27:15.773 [2024-11-20 07:23:20.255838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.773 [2024-11-20 07:23:20.255871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:15.773 qpair failed and we were unable to recover it. 00:27:15.773 [2024-11-20 07:23:20.256114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.773 [2024-11-20 07:23:20.256148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:15.773 qpair failed and we were unable to recover it. 00:27:15.773 [2024-11-20 07:23:20.256352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.773 [2024-11-20 07:23:20.256385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:15.773 qpair failed and we were unable to recover it. 
00:27:15.773 [2024-11-20 07:23:20.256660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.773 [2024-11-20 07:23:20.256691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:15.773 qpair failed and we were unable to recover it. 00:27:15.773 [2024-11-20 07:23:20.256969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.773 [2024-11-20 07:23:20.257003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:15.773 qpair failed and we were unable to recover it. 00:27:15.773 [2024-11-20 07:23:20.257286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.773 [2024-11-20 07:23:20.257319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:15.773 qpair failed and we were unable to recover it. 00:27:15.773 [2024-11-20 07:23:20.257590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.773 [2024-11-20 07:23:20.257622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:15.773 qpair failed and we were unable to recover it. 00:27:15.773 [2024-11-20 07:23:20.257888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.773 [2024-11-20 07:23:20.257920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:15.773 qpair failed and we were unable to recover it. 
00:27:15.773 [2024-11-20 07:23:20.258221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.773 [2024-11-20 07:23:20.258255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:15.773 qpair failed and we were unable to recover it. 00:27:15.773 [2024-11-20 07:23:20.258518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.773 [2024-11-20 07:23:20.258549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:15.773 qpair failed and we were unable to recover it. 00:27:15.773 [2024-11-20 07:23:20.258796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.773 [2024-11-20 07:23:20.258829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:15.773 qpair failed and we were unable to recover it. 00:27:15.773 [2024-11-20 07:23:20.259030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.773 [2024-11-20 07:23:20.259064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:15.773 qpair failed and we were unable to recover it. 00:27:15.773 [2024-11-20 07:23:20.259338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.773 [2024-11-20 07:23:20.259371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:15.773 qpair failed and we were unable to recover it. 
00:27:15.773 [2024-11-20 07:23:20.259648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.773 [2024-11-20 07:23:20.259680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:15.773 qpair failed and we were unable to recover it. 00:27:15.773 [2024-11-20 07:23:20.259808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.773 [2024-11-20 07:23:20.259841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:15.773 qpair failed and we were unable to recover it. 00:27:15.773 [2024-11-20 07:23:20.260090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.773 [2024-11-20 07:23:20.260123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:15.773 qpair failed and we were unable to recover it. 00:27:15.774 [2024-11-20 07:23:20.260365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.774 [2024-11-20 07:23:20.260397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:15.774 qpair failed and we were unable to recover it. 00:27:15.774 [2024-11-20 07:23:20.260668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.774 [2024-11-20 07:23:20.260702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:15.774 qpair failed and we were unable to recover it. 
00:27:15.774 [2024-11-20 07:23:20.260996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.774 [2024-11-20 07:23:20.261029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:15.774 qpair failed and we were unable to recover it. 00:27:15.774 [2024-11-20 07:23:20.261301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.774 [2024-11-20 07:23:20.261333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:15.774 qpair failed and we were unable to recover it. 00:27:15.774 [2024-11-20 07:23:20.261482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.774 [2024-11-20 07:23:20.261515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:15.774 qpair failed and we were unable to recover it. 00:27:15.774 [2024-11-20 07:23:20.261834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.774 [2024-11-20 07:23:20.261866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:15.774 qpair failed and we were unable to recover it. 00:27:15.774 [2024-11-20 07:23:20.262140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.774 [2024-11-20 07:23:20.262174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:15.774 qpair failed and we were unable to recover it. 
00:27:15.774 [2024-11-20 07:23:20.262365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.774 [2024-11-20 07:23:20.262397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:15.774 qpair failed and we were unable to recover it. 00:27:15.774 [2024-11-20 07:23:20.262604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.774 [2024-11-20 07:23:20.262635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:15.774 qpair failed and we were unable to recover it. 00:27:15.774 [2024-11-20 07:23:20.262834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.774 [2024-11-20 07:23:20.262867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:15.774 qpair failed and we were unable to recover it. 00:27:15.774 [2024-11-20 07:23:20.263118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.774 [2024-11-20 07:23:20.263152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:15.774 qpair failed and we were unable to recover it. 00:27:15.774 [2024-11-20 07:23:20.263429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.774 [2024-11-20 07:23:20.263461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:15.774 qpair failed and we were unable to recover it. 
00:27:15.774 [2024-11-20 07:23:20.263750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.774 [2024-11-20 07:23:20.263784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:15.774 qpair failed and we were unable to recover it. 00:27:15.774 [2024-11-20 07:23:20.264033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.774 [2024-11-20 07:23:20.264067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:15.774 qpair failed and we were unable to recover it. 00:27:15.774 [2024-11-20 07:23:20.264318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.774 [2024-11-20 07:23:20.264350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:15.774 qpair failed and we were unable to recover it. 00:27:15.774 [2024-11-20 07:23:20.264624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.774 [2024-11-20 07:23:20.264655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:15.774 qpair failed and we were unable to recover it. 00:27:15.774 [2024-11-20 07:23:20.264933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.774 [2024-11-20 07:23:20.264988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:15.774 qpair failed and we were unable to recover it. 
00:27:15.774 [2024-11-20 07:23:20.265171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.774 [2024-11-20 07:23:20.265204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:15.774 qpair failed and we were unable to recover it. 00:27:15.774 [2024-11-20 07:23:20.265499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.774 [2024-11-20 07:23:20.265532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:15.774 qpair failed and we were unable to recover it. 00:27:15.774 [2024-11-20 07:23:20.265717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.774 [2024-11-20 07:23:20.265747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:15.774 qpair failed and we were unable to recover it. 00:27:15.774 [2024-11-20 07:23:20.265998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.774 [2024-11-20 07:23:20.266031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:15.774 qpair failed and we were unable to recover it. 00:27:15.774 [2024-11-20 07:23:20.266248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.774 [2024-11-20 07:23:20.266280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:15.774 qpair failed and we were unable to recover it. 
00:27:15.774 [2024-11-20 07:23:20.266539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.774 [2024-11-20 07:23:20.266578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:15.774 qpair failed and we were unable to recover it. 00:27:15.774 [2024-11-20 07:23:20.266868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.774 [2024-11-20 07:23:20.266901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:15.774 qpair failed and we were unable to recover it. 00:27:15.774 [2024-11-20 07:23:20.267047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.774 [2024-11-20 07:23:20.267079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:15.774 qpair failed and we were unable to recover it. 00:27:15.774 [2024-11-20 07:23:20.267205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.774 [2024-11-20 07:23:20.267236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:15.774 qpair failed and we were unable to recover it. 00:27:15.774 [2024-11-20 07:23:20.267343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.774 [2024-11-20 07:23:20.267373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:15.774 qpair failed and we were unable to recover it. 
00:27:15.774 [2024-11-20 07:23:20.267621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.774 [2024-11-20 07:23:20.267654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:15.774 qpair failed and we were unable to recover it. 00:27:15.774 [2024-11-20 07:23:20.267964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.774 [2024-11-20 07:23:20.267999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:15.774 qpair failed and we were unable to recover it. 00:27:15.774 [2024-11-20 07:23:20.268281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.774 [2024-11-20 07:23:20.268313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:15.774 qpair failed and we were unable to recover it. 00:27:15.774 [2024-11-20 07:23:20.268587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.774 [2024-11-20 07:23:20.268619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:15.774 qpair failed and we were unable to recover it. 00:27:15.774 [2024-11-20 07:23:20.268913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:15.775 [2024-11-20 07:23:20.268945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:15.775 qpair failed and we were unable to recover it. 
00:27:15.775 [2024-11-20 07:23:20.269106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:15.775 [2024-11-20 07:23:20.269137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:15.775 qpair failed and we were unable to recover it.
[... same connect() failure (errno = 111) and unrecoverable qpair error repeated for every subsequent reconnect attempt from 07:23:20.269 through 07:23:20.301, all against tqpair=0x7f0e44000b90, addr=10.0.0.2, port=4420 ...]
00:27:16.055 [2024-11-20 07:23:20.302211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.055 [2024-11-20 07:23:20.302245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.055 qpair failed and we were unable to recover it. 00:27:16.055 [2024-11-20 07:23:20.302450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.055 [2024-11-20 07:23:20.302482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.055 qpair failed and we were unable to recover it. 00:27:16.055 [2024-11-20 07:23:20.302827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.055 [2024-11-20 07:23:20.302860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.055 qpair failed and we were unable to recover it. 00:27:16.055 [2024-11-20 07:23:20.303180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.055 [2024-11-20 07:23:20.303215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.055 qpair failed and we were unable to recover it. 00:27:16.055 [2024-11-20 07:23:20.303419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.055 [2024-11-20 07:23:20.303452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.055 qpair failed and we were unable to recover it. 
00:27:16.055 [2024-11-20 07:23:20.303595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.055 [2024-11-20 07:23:20.303628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.055 qpair failed and we were unable to recover it. 00:27:16.055 [2024-11-20 07:23:20.303898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.056 [2024-11-20 07:23:20.303931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.056 qpair failed and we were unable to recover it. 00:27:16.056 [2024-11-20 07:23:20.304242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.056 [2024-11-20 07:23:20.304276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.056 qpair failed and we were unable to recover it. 00:27:16.056 [2024-11-20 07:23:20.304536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.056 [2024-11-20 07:23:20.304573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.056 qpair failed and we were unable to recover it. 00:27:16.056 [2024-11-20 07:23:20.304795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.056 [2024-11-20 07:23:20.304831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.056 qpair failed and we were unable to recover it. 
00:27:16.056 [2024-11-20 07:23:20.305036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.056 [2024-11-20 07:23:20.305071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.056 qpair failed and we were unable to recover it. 00:27:16.056 [2024-11-20 07:23:20.305332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.056 [2024-11-20 07:23:20.305366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.056 qpair failed and we were unable to recover it. 00:27:16.056 [2024-11-20 07:23:20.305644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.056 [2024-11-20 07:23:20.305677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.056 qpair failed and we were unable to recover it. 00:27:16.056 [2024-11-20 07:23:20.305938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.056 [2024-11-20 07:23:20.305980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.056 qpair failed and we were unable to recover it. 00:27:16.056 [2024-11-20 07:23:20.306275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.056 [2024-11-20 07:23:20.306308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.056 qpair failed and we were unable to recover it. 
00:27:16.056 [2024-11-20 07:23:20.306567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.056 [2024-11-20 07:23:20.306600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.056 qpair failed and we were unable to recover it. 00:27:16.056 [2024-11-20 07:23:20.306796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.056 [2024-11-20 07:23:20.306829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.056 qpair failed and we were unable to recover it. 00:27:16.056 [2024-11-20 07:23:20.307073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.056 [2024-11-20 07:23:20.307108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.056 qpair failed and we were unable to recover it. 00:27:16.056 [2024-11-20 07:23:20.307420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.056 [2024-11-20 07:23:20.307453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.056 qpair failed and we were unable to recover it. 00:27:16.056 [2024-11-20 07:23:20.307720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.056 [2024-11-20 07:23:20.307752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.056 qpair failed and we were unable to recover it. 
00:27:16.056 [2024-11-20 07:23:20.308038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.056 [2024-11-20 07:23:20.308073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.056 qpair failed and we were unable to recover it. 00:27:16.056 [2024-11-20 07:23:20.308355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.056 [2024-11-20 07:23:20.308387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.056 qpair failed and we were unable to recover it. 00:27:16.056 [2024-11-20 07:23:20.308671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.056 [2024-11-20 07:23:20.308704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.056 qpair failed and we were unable to recover it. 00:27:16.056 [2024-11-20 07:23:20.308986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.056 [2024-11-20 07:23:20.309021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.056 qpair failed and we were unable to recover it. 00:27:16.056 [2024-11-20 07:23:20.309224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.056 [2024-11-20 07:23:20.309257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.056 qpair failed and we were unable to recover it. 
00:27:16.056 [2024-11-20 07:23:20.309445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.056 [2024-11-20 07:23:20.309478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.056 qpair failed and we were unable to recover it. 00:27:16.056 [2024-11-20 07:23:20.309759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.056 [2024-11-20 07:23:20.309792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.056 qpair failed and we were unable to recover it. 00:27:16.056 [2024-11-20 07:23:20.310057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.056 [2024-11-20 07:23:20.310091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.056 qpair failed and we were unable to recover it. 00:27:16.056 [2024-11-20 07:23:20.310322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.056 [2024-11-20 07:23:20.310355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.056 qpair failed and we were unable to recover it. 00:27:16.056 [2024-11-20 07:23:20.310657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.056 [2024-11-20 07:23:20.310690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.056 qpair failed and we were unable to recover it. 
00:27:16.056 [2024-11-20 07:23:20.310961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.056 [2024-11-20 07:23:20.310995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.056 qpair failed and we were unable to recover it. 00:27:16.056 [2024-11-20 07:23:20.311251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.056 [2024-11-20 07:23:20.311284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.056 qpair failed and we were unable to recover it. 00:27:16.056 [2024-11-20 07:23:20.311487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.056 [2024-11-20 07:23:20.311520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.056 qpair failed and we were unable to recover it. 00:27:16.056 [2024-11-20 07:23:20.311793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.056 [2024-11-20 07:23:20.311825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.056 qpair failed and we were unable to recover it. 00:27:16.056 [2024-11-20 07:23:20.312032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.056 [2024-11-20 07:23:20.312066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.056 qpair failed and we were unable to recover it. 
00:27:16.056 [2024-11-20 07:23:20.312337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.056 [2024-11-20 07:23:20.312377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.056 qpair failed and we were unable to recover it. 00:27:16.056 [2024-11-20 07:23:20.312657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.056 [2024-11-20 07:23:20.312690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.056 qpair failed and we were unable to recover it. 00:27:16.056 [2024-11-20 07:23:20.312934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.056 [2024-11-20 07:23:20.312979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.057 qpair failed and we were unable to recover it. 00:27:16.057 [2024-11-20 07:23:20.313210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.057 [2024-11-20 07:23:20.313243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.057 qpair failed and we were unable to recover it. 00:27:16.057 [2024-11-20 07:23:20.313468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.057 [2024-11-20 07:23:20.313501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.057 qpair failed and we were unable to recover it. 
00:27:16.057 [2024-11-20 07:23:20.313691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.057 [2024-11-20 07:23:20.313723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.057 qpair failed and we were unable to recover it. 00:27:16.057 [2024-11-20 07:23:20.313910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.057 [2024-11-20 07:23:20.313943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.057 qpair failed and we were unable to recover it. 00:27:16.057 [2024-11-20 07:23:20.314240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.057 [2024-11-20 07:23:20.314274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.057 qpair failed and we were unable to recover it. 00:27:16.057 [2024-11-20 07:23:20.314538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.057 [2024-11-20 07:23:20.314571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.057 qpair failed and we were unable to recover it. 00:27:16.057 [2024-11-20 07:23:20.314873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.057 [2024-11-20 07:23:20.314908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.057 qpair failed and we were unable to recover it. 
00:27:16.057 [2024-11-20 07:23:20.315208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.057 [2024-11-20 07:23:20.315244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.057 qpair failed and we were unable to recover it. 00:27:16.057 [2024-11-20 07:23:20.315502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.057 [2024-11-20 07:23:20.315534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.057 qpair failed and we were unable to recover it. 00:27:16.057 [2024-11-20 07:23:20.315730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.057 [2024-11-20 07:23:20.315763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.057 qpair failed and we were unable to recover it. 00:27:16.057 [2024-11-20 07:23:20.315995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.057 [2024-11-20 07:23:20.316032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.057 qpair failed and we were unable to recover it. 00:27:16.057 [2024-11-20 07:23:20.316259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.057 [2024-11-20 07:23:20.316293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.057 qpair failed and we were unable to recover it. 
00:27:16.057 [2024-11-20 07:23:20.316490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.057 [2024-11-20 07:23:20.316523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.057 qpair failed and we were unable to recover it. 00:27:16.057 [2024-11-20 07:23:20.316732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.057 [2024-11-20 07:23:20.316765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.057 qpair failed and we were unable to recover it. 00:27:16.057 [2024-11-20 07:23:20.316971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.057 [2024-11-20 07:23:20.317005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.057 qpair failed and we were unable to recover it. 00:27:16.057 [2024-11-20 07:23:20.317277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.057 [2024-11-20 07:23:20.317310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.057 qpair failed and we were unable to recover it. 00:27:16.057 [2024-11-20 07:23:20.317590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.057 [2024-11-20 07:23:20.317624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.057 qpair failed and we were unable to recover it. 
00:27:16.057 [2024-11-20 07:23:20.317912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.057 [2024-11-20 07:23:20.317945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.057 qpair failed and we were unable to recover it. 00:27:16.057 [2024-11-20 07:23:20.318224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.057 [2024-11-20 07:23:20.318257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.057 qpair failed and we were unable to recover it. 00:27:16.057 [2024-11-20 07:23:20.318544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.057 [2024-11-20 07:23:20.318577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.057 qpair failed and we were unable to recover it. 00:27:16.057 [2024-11-20 07:23:20.318773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.057 [2024-11-20 07:23:20.318806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.057 qpair failed and we were unable to recover it. 00:27:16.057 [2024-11-20 07:23:20.319004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.057 [2024-11-20 07:23:20.319039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.057 qpair failed and we were unable to recover it. 
00:27:16.057 [2024-11-20 07:23:20.319292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.057 [2024-11-20 07:23:20.319325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.057 qpair failed and we were unable to recover it. 00:27:16.057 [2024-11-20 07:23:20.319576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.057 [2024-11-20 07:23:20.319610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.057 qpair failed and we were unable to recover it. 00:27:16.057 [2024-11-20 07:23:20.319893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.057 [2024-11-20 07:23:20.319926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.057 qpair failed and we were unable to recover it. 00:27:16.057 [2024-11-20 07:23:20.320237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.057 [2024-11-20 07:23:20.320269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.057 qpair failed and we were unable to recover it. 00:27:16.057 [2024-11-20 07:23:20.320473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.057 [2024-11-20 07:23:20.320506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.057 qpair failed and we were unable to recover it. 
00:27:16.057 [2024-11-20 07:23:20.320810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.057 [2024-11-20 07:23:20.320843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.057 qpair failed and we were unable to recover it. 00:27:16.057 [2024-11-20 07:23:20.321126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.057 [2024-11-20 07:23:20.321161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.057 qpair failed and we were unable to recover it. 00:27:16.057 [2024-11-20 07:23:20.321442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.057 [2024-11-20 07:23:20.321475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.057 qpair failed and we were unable to recover it. 00:27:16.057 [2024-11-20 07:23:20.321757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.058 [2024-11-20 07:23:20.321789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.058 qpair failed and we were unable to recover it. 00:27:16.058 [2024-11-20 07:23:20.322089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.058 [2024-11-20 07:23:20.322123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.058 qpair failed and we were unable to recover it. 
00:27:16.058 [2024-11-20 07:23:20.322400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.058 [2024-11-20 07:23:20.322432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.058 qpair failed and we were unable to recover it. 00:27:16.058 [2024-11-20 07:23:20.322746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.058 [2024-11-20 07:23:20.322779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.058 qpair failed and we were unable to recover it. 00:27:16.058 [2024-11-20 07:23:20.323056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.058 [2024-11-20 07:23:20.323091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.058 qpair failed and we were unable to recover it. 00:27:16.058 [2024-11-20 07:23:20.323290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.058 [2024-11-20 07:23:20.323323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.058 qpair failed and we were unable to recover it. 00:27:16.058 [2024-11-20 07:23:20.323594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.058 [2024-11-20 07:23:20.323627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.058 qpair failed and we were unable to recover it. 
00:27:16.058 [2024-11-20 07:23:20.323822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.058 [2024-11-20 07:23:20.323861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.058 qpair failed and we were unable to recover it. 
00:27:16.058 [log truncated for readability: the same connect()/qpair error triple (posix.c:1054 connect() failed errno = 111, nvme_tcp.c:2288 sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420, "qpair failed and we were unable to recover it") repeats verbatim with successive timestamps from 07:23:20.324142 through 07:23:20.356078]
00:27:16.062 [2024-11-20 07:23:20.356226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.062 [2024-11-20 07:23:20.356260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.062 qpair failed and we were unable to recover it. 00:27:16.062 [2024-11-20 07:23:20.356470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.062 [2024-11-20 07:23:20.356504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.062 qpair failed and we were unable to recover it. 00:27:16.062 [2024-11-20 07:23:20.356819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.062 [2024-11-20 07:23:20.356852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.062 qpair failed and we were unable to recover it. 00:27:16.062 [2024-11-20 07:23:20.357060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.062 [2024-11-20 07:23:20.357096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.062 qpair failed and we were unable to recover it. 00:27:16.062 [2024-11-20 07:23:20.357281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.062 [2024-11-20 07:23:20.357315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.062 qpair failed and we were unable to recover it. 
00:27:16.062 [2024-11-20 07:23:20.357513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.062 [2024-11-20 07:23:20.357558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.062 qpair failed and we were unable to recover it. 00:27:16.062 [2024-11-20 07:23:20.357834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.062 [2024-11-20 07:23:20.357868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.062 qpair failed and we were unable to recover it. 00:27:16.062 [2024-11-20 07:23:20.358125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.062 [2024-11-20 07:23:20.358161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.062 qpair failed and we were unable to recover it. 00:27:16.062 [2024-11-20 07:23:20.358414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.062 [2024-11-20 07:23:20.358448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.062 qpair failed and we were unable to recover it. 00:27:16.062 [2024-11-20 07:23:20.358701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.062 [2024-11-20 07:23:20.358733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.062 qpair failed and we were unable to recover it. 
00:27:16.062 [2024-11-20 07:23:20.358990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.062 [2024-11-20 07:23:20.359027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.062 qpair failed and we were unable to recover it. 00:27:16.062 [2024-11-20 07:23:20.359236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.062 [2024-11-20 07:23:20.359269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.062 qpair failed and we were unable to recover it. 00:27:16.062 [2024-11-20 07:23:20.359530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.062 [2024-11-20 07:23:20.359562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.062 qpair failed and we were unable to recover it. 00:27:16.062 [2024-11-20 07:23:20.359816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.062 [2024-11-20 07:23:20.359850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.062 qpair failed and we were unable to recover it. 00:27:16.062 [2024-11-20 07:23:20.360134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.062 [2024-11-20 07:23:20.360170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.062 qpair failed and we were unable to recover it. 
00:27:16.062 [2024-11-20 07:23:20.360376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.062 [2024-11-20 07:23:20.360409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.062 qpair failed and we were unable to recover it. 00:27:16.062 [2024-11-20 07:23:20.360687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.062 [2024-11-20 07:23:20.360720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.062 qpair failed and we were unable to recover it. 00:27:16.062 [2024-11-20 07:23:20.360925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.062 [2024-11-20 07:23:20.360970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.062 qpair failed and we were unable to recover it. 00:27:16.062 [2024-11-20 07:23:20.361111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.062 [2024-11-20 07:23:20.361144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.062 qpair failed and we were unable to recover it. 00:27:16.062 [2024-11-20 07:23:20.361350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.062 [2024-11-20 07:23:20.361384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.062 qpair failed and we were unable to recover it. 
00:27:16.062 [2024-11-20 07:23:20.361515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.062 [2024-11-20 07:23:20.361547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.062 qpair failed and we were unable to recover it. 00:27:16.062 [2024-11-20 07:23:20.361779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.062 [2024-11-20 07:23:20.361811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.062 qpair failed and we were unable to recover it. 00:27:16.062 [2024-11-20 07:23:20.362075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.062 [2024-11-20 07:23:20.362109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.062 qpair failed and we were unable to recover it. 00:27:16.062 [2024-11-20 07:23:20.362235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.062 [2024-11-20 07:23:20.362268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.062 qpair failed and we were unable to recover it. 00:27:16.062 [2024-11-20 07:23:20.362452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.062 [2024-11-20 07:23:20.362485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.062 qpair failed and we were unable to recover it. 
00:27:16.062 [2024-11-20 07:23:20.362777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.062 [2024-11-20 07:23:20.362811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.062 qpair failed and we were unable to recover it. 00:27:16.062 [2024-11-20 07:23:20.363084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.062 [2024-11-20 07:23:20.363118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.062 qpair failed and we were unable to recover it. 00:27:16.062 [2024-11-20 07:23:20.363333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.062 [2024-11-20 07:23:20.363366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.062 qpair failed and we were unable to recover it. 00:27:16.062 [2024-11-20 07:23:20.363515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.062 [2024-11-20 07:23:20.363547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.062 qpair failed and we were unable to recover it. 00:27:16.062 [2024-11-20 07:23:20.363750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.062 [2024-11-20 07:23:20.363784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.063 qpair failed and we were unable to recover it. 
00:27:16.063 [2024-11-20 07:23:20.364041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.063 [2024-11-20 07:23:20.364075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.063 qpair failed and we were unable to recover it. 00:27:16.063 [2024-11-20 07:23:20.364327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.063 [2024-11-20 07:23:20.364360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.063 qpair failed and we were unable to recover it. 00:27:16.063 [2024-11-20 07:23:20.364567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.063 [2024-11-20 07:23:20.364601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.063 qpair failed and we were unable to recover it. 00:27:16.063 [2024-11-20 07:23:20.364807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.063 [2024-11-20 07:23:20.364840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.063 qpair failed and we were unable to recover it. 00:27:16.063 [2024-11-20 07:23:20.365022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.063 [2024-11-20 07:23:20.365057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.063 qpair failed and we were unable to recover it. 
00:27:16.063 [2024-11-20 07:23:20.365257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.063 [2024-11-20 07:23:20.365291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.063 qpair failed and we were unable to recover it. 00:27:16.063 [2024-11-20 07:23:20.365446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.063 [2024-11-20 07:23:20.365479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.063 qpair failed and we were unable to recover it. 00:27:16.063 [2024-11-20 07:23:20.365691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.063 [2024-11-20 07:23:20.365725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.063 qpair failed and we were unable to recover it. 00:27:16.063 [2024-11-20 07:23:20.365973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.063 [2024-11-20 07:23:20.366008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.063 qpair failed and we were unable to recover it. 00:27:16.063 [2024-11-20 07:23:20.366205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.063 [2024-11-20 07:23:20.366238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.063 qpair failed and we were unable to recover it. 
00:27:16.063 [2024-11-20 07:23:20.366380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.063 [2024-11-20 07:23:20.366412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.063 qpair failed and we were unable to recover it. 00:27:16.063 [2024-11-20 07:23:20.366647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.063 [2024-11-20 07:23:20.366681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.063 qpair failed and we were unable to recover it. 00:27:16.063 [2024-11-20 07:23:20.366908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.063 [2024-11-20 07:23:20.366941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.063 qpair failed and we were unable to recover it. 00:27:16.063 [2024-11-20 07:23:20.367197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.063 [2024-11-20 07:23:20.367231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.063 qpair failed and we were unable to recover it. 00:27:16.063 [2024-11-20 07:23:20.367456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.063 [2024-11-20 07:23:20.367490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.063 qpair failed and we were unable to recover it. 
00:27:16.063 [2024-11-20 07:23:20.367815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.063 [2024-11-20 07:23:20.367858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.063 qpair failed and we were unable to recover it. 00:27:16.063 [2024-11-20 07:23:20.368055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.063 [2024-11-20 07:23:20.368091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.063 qpair failed and we were unable to recover it. 00:27:16.063 [2024-11-20 07:23:20.368297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.063 [2024-11-20 07:23:20.368331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.063 qpair failed and we were unable to recover it. 00:27:16.063 [2024-11-20 07:23:20.368623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.063 [2024-11-20 07:23:20.368656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.063 qpair failed and we were unable to recover it. 00:27:16.063 [2024-11-20 07:23:20.368964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.063 [2024-11-20 07:23:20.368998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.063 qpair failed and we were unable to recover it. 
00:27:16.063 [2024-11-20 07:23:20.369204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.063 [2024-11-20 07:23:20.369238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.063 qpair failed and we were unable to recover it. 00:27:16.063 [2024-11-20 07:23:20.369380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.063 [2024-11-20 07:23:20.369414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.063 qpair failed and we were unable to recover it. 00:27:16.063 [2024-11-20 07:23:20.369697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.063 [2024-11-20 07:23:20.369729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.063 qpair failed and we were unable to recover it. 00:27:16.063 [2024-11-20 07:23:20.370011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.063 [2024-11-20 07:23:20.370046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.063 qpair failed and we were unable to recover it. 00:27:16.063 [2024-11-20 07:23:20.370194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.063 [2024-11-20 07:23:20.370227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.063 qpair failed and we were unable to recover it. 
00:27:16.063 [2024-11-20 07:23:20.370429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.063 [2024-11-20 07:23:20.370462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.063 qpair failed and we were unable to recover it. 00:27:16.063 [2024-11-20 07:23:20.370674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.063 [2024-11-20 07:23:20.370707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.063 qpair failed and we were unable to recover it. 00:27:16.063 [2024-11-20 07:23:20.370915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.063 [2024-11-20 07:23:20.370959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.063 qpair failed and we were unable to recover it. 00:27:16.063 [2024-11-20 07:23:20.371162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.063 [2024-11-20 07:23:20.371196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.063 qpair failed and we were unable to recover it. 00:27:16.063 [2024-11-20 07:23:20.371332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.063 [2024-11-20 07:23:20.371366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.063 qpair failed and we were unable to recover it. 
00:27:16.063 [2024-11-20 07:23:20.371561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.063 [2024-11-20 07:23:20.371595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.063 qpair failed and we were unable to recover it. 00:27:16.064 [2024-11-20 07:23:20.371794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.064 [2024-11-20 07:23:20.371828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.064 qpair failed and we were unable to recover it. 00:27:16.064 [2024-11-20 07:23:20.372120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.064 [2024-11-20 07:23:20.372155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.064 qpair failed and we were unable to recover it. 00:27:16.064 [2024-11-20 07:23:20.372385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.064 [2024-11-20 07:23:20.372419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.064 qpair failed and we were unable to recover it. 00:27:16.064 [2024-11-20 07:23:20.372748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.064 [2024-11-20 07:23:20.372782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.064 qpair failed and we were unable to recover it. 
00:27:16.064 [2024-11-20 07:23:20.373074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.064 [2024-11-20 07:23:20.373109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.064 qpair failed and we were unable to recover it. 00:27:16.064 [2024-11-20 07:23:20.373259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.064 [2024-11-20 07:23:20.373293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.064 qpair failed and we were unable to recover it. 00:27:16.064 [2024-11-20 07:23:20.373596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.064 [2024-11-20 07:23:20.373630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.064 qpair failed and we were unable to recover it. 00:27:16.064 [2024-11-20 07:23:20.373900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.064 [2024-11-20 07:23:20.373933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.064 qpair failed and we were unable to recover it. 00:27:16.064 [2024-11-20 07:23:20.374098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.064 [2024-11-20 07:23:20.374133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.064 qpair failed and we were unable to recover it. 
00:27:16.064 [2024-11-20 07:23:20.374280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.064 [2024-11-20 07:23:20.374312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.064 qpair failed and we were unable to recover it. 00:27:16.064 [2024-11-20 07:23:20.374504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.064 [2024-11-20 07:23:20.374538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.064 qpair failed and we were unable to recover it. 00:27:16.064 [2024-11-20 07:23:20.374699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.064 [2024-11-20 07:23:20.374732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.064 qpair failed and we were unable to recover it. 00:27:16.064 [2024-11-20 07:23:20.375038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.064 [2024-11-20 07:23:20.375075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.064 qpair failed and we were unable to recover it. 00:27:16.064 [2024-11-20 07:23:20.375280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.064 [2024-11-20 07:23:20.375313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.064 qpair failed and we were unable to recover it. 
00:27:16.064 [2024-11-20 07:23:20.375466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.064 [2024-11-20 07:23:20.375498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.064 qpair failed and we were unable to recover it. 00:27:16.064 [2024-11-20 07:23:20.375656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.064 [2024-11-20 07:23:20.375690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.064 qpair failed and we were unable to recover it. 00:27:16.064 [2024-11-20 07:23:20.375825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.064 [2024-11-20 07:23:20.375861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.064 qpair failed and we were unable to recover it. 00:27:16.064 [2024-11-20 07:23:20.376011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.064 [2024-11-20 07:23:20.376046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.064 qpair failed and we were unable to recover it. 00:27:16.064 [2024-11-20 07:23:20.376274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.064 [2024-11-20 07:23:20.376306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.064 qpair failed and we were unable to recover it. 
00:27:16.064 [2024-11-20 07:23:20.376508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.064 [2024-11-20 07:23:20.376541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.064 qpair failed and we were unable to recover it. 00:27:16.064 [2024-11-20 07:23:20.376744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.064 [2024-11-20 07:23:20.376777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.064 qpair failed and we were unable to recover it. 00:27:16.064 [2024-11-20 07:23:20.376982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.064 [2024-11-20 07:23:20.377018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.064 qpair failed and we were unable to recover it. 00:27:16.064 [2024-11-20 07:23:20.377177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.064 [2024-11-20 07:23:20.377211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.064 qpair failed and we were unable to recover it. 00:27:16.064 [2024-11-20 07:23:20.377474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.064 [2024-11-20 07:23:20.377507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.064 qpair failed and we were unable to recover it. 
00:27:16.064 [2024-11-20 07:23:20.377763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.064 [2024-11-20 07:23:20.377809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.064 qpair failed and we were unable to recover it. 00:27:16.064 [2024-11-20 07:23:20.378087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.064 [2024-11-20 07:23:20.378123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.064 qpair failed and we were unable to recover it. 00:27:16.064 [2024-11-20 07:23:20.378327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.064 [2024-11-20 07:23:20.378360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.064 qpair failed and we were unable to recover it. 00:27:16.064 [2024-11-20 07:23:20.378545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.064 [2024-11-20 07:23:20.378581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.064 qpair failed and we were unable to recover it. 00:27:16.064 [2024-11-20 07:23:20.378740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.064 [2024-11-20 07:23:20.378773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.064 qpair failed and we were unable to recover it. 
00:27:16.064 [2024-11-20 07:23:20.378970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.064 [2024-11-20 07:23:20.379005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.064 qpair failed and we were unable to recover it. 00:27:16.064 [2024-11-20 07:23:20.379133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.064 [2024-11-20 07:23:20.379167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.064 qpair failed and we were unable to recover it. 00:27:16.064 [2024-11-20 07:23:20.379359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.065 [2024-11-20 07:23:20.379390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.065 qpair failed and we were unable to recover it. 00:27:16.065 [2024-11-20 07:23:20.379591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.065 [2024-11-20 07:23:20.379625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.065 qpair failed and we were unable to recover it. 00:27:16.065 [2024-11-20 07:23:20.379823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.065 [2024-11-20 07:23:20.379856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.065 qpair failed and we were unable to recover it. 
00:27:16.065 [2024-11-20 07:23:20.379998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.065 [2024-11-20 07:23:20.380035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.065 qpair failed and we were unable to recover it. 00:27:16.065 [2024-11-20 07:23:20.380228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.065 [2024-11-20 07:23:20.380260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.065 qpair failed and we were unable to recover it. 00:27:16.065 [2024-11-20 07:23:20.380395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.065 [2024-11-20 07:23:20.380431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.065 qpair failed and we were unable to recover it. 00:27:16.065 [2024-11-20 07:23:20.380576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.065 [2024-11-20 07:23:20.380611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.065 qpair failed and we were unable to recover it. 00:27:16.065 [2024-11-20 07:23:20.380880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.065 [2024-11-20 07:23:20.380912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.065 qpair failed and we were unable to recover it. 
00:27:16.065 [2024-11-20 07:23:20.381071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.065 [2024-11-20 07:23:20.381106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.065 qpair failed and we were unable to recover it. 00:27:16.065 [2024-11-20 07:23:20.381315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.065 [2024-11-20 07:23:20.381350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.065 qpair failed and we were unable to recover it. 00:27:16.065 [2024-11-20 07:23:20.381545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.065 [2024-11-20 07:23:20.381579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.065 qpair failed and we were unable to recover it. 00:27:16.065 [2024-11-20 07:23:20.381705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.065 [2024-11-20 07:23:20.381738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.065 qpair failed and we were unable to recover it. 00:27:16.065 [2024-11-20 07:23:20.381872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.065 [2024-11-20 07:23:20.381904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.065 qpair failed and we were unable to recover it. 
00:27:16.065 [2024-11-20 07:23:20.382041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.065 [2024-11-20 07:23:20.382076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.065 qpair failed and we were unable to recover it. 00:27:16.065 [2024-11-20 07:23:20.382267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.065 [2024-11-20 07:23:20.382300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.065 qpair failed and we were unable to recover it. 00:27:16.065 [2024-11-20 07:23:20.382427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.065 [2024-11-20 07:23:20.382459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.065 qpair failed and we were unable to recover it. 00:27:16.065 [2024-11-20 07:23:20.382676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.065 [2024-11-20 07:23:20.382708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.065 qpair failed and we were unable to recover it. 00:27:16.065 [2024-11-20 07:23:20.382844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.065 [2024-11-20 07:23:20.382877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.065 qpair failed and we were unable to recover it. 
00:27:16.065 [2024-11-20 07:23:20.383075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.065 [2024-11-20 07:23:20.383108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.065 qpair failed and we were unable to recover it. 00:27:16.065 [2024-11-20 07:23:20.383232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.065 [2024-11-20 07:23:20.383264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.065 qpair failed and we were unable to recover it. 00:27:16.065 [2024-11-20 07:23:20.383466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.065 [2024-11-20 07:23:20.383502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.065 qpair failed and we were unable to recover it. 00:27:16.065 [2024-11-20 07:23:20.383716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.065 [2024-11-20 07:23:20.383750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.065 qpair failed and we were unable to recover it. 00:27:16.065 [2024-11-20 07:23:20.383895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.065 [2024-11-20 07:23:20.383927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.065 qpair failed and we were unable to recover it. 
00:27:16.065 [2024-11-20 07:23:20.384180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.065 [2024-11-20 07:23:20.384213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.065 qpair failed and we were unable to recover it. 00:27:16.065 [2024-11-20 07:23:20.384336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.065 [2024-11-20 07:23:20.384366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.065 qpair failed and we were unable to recover it. 00:27:16.065 [2024-11-20 07:23:20.384491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.065 [2024-11-20 07:23:20.384523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.065 qpair failed and we were unable to recover it. 00:27:16.065 [2024-11-20 07:23:20.384798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.065 [2024-11-20 07:23:20.384829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.065 qpair failed and we were unable to recover it. 00:27:16.065 [2024-11-20 07:23:20.384984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.065 [2024-11-20 07:23:20.385019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.065 qpair failed and we were unable to recover it. 
00:27:16.065 [2024-11-20 07:23:20.385153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.065 [2024-11-20 07:23:20.385185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.065 qpair failed and we were unable to recover it. 00:27:16.065 [2024-11-20 07:23:20.385365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.065 [2024-11-20 07:23:20.385397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.065 qpair failed and we were unable to recover it. 00:27:16.065 [2024-11-20 07:23:20.385515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.065 [2024-11-20 07:23:20.385546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.065 qpair failed and we were unable to recover it. 00:27:16.065 [2024-11-20 07:23:20.385678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.065 [2024-11-20 07:23:20.385711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.066 qpair failed and we were unable to recover it. 00:27:16.066 [2024-11-20 07:23:20.385832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.066 [2024-11-20 07:23:20.385864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.066 qpair failed and we were unable to recover it. 
00:27:16.066 [2024-11-20 07:23:20.386054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.066 [2024-11-20 07:23:20.386105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.066 qpair failed and we were unable to recover it. 00:27:16.066 [2024-11-20 07:23:20.386219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.066 [2024-11-20 07:23:20.386250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.066 qpair failed and we were unable to recover it. 00:27:16.066 [2024-11-20 07:23:20.386371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.066 [2024-11-20 07:23:20.386404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.066 qpair failed and we were unable to recover it. 00:27:16.066 [2024-11-20 07:23:20.386541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.066 [2024-11-20 07:23:20.386574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.066 qpair failed and we were unable to recover it. 00:27:16.066 [2024-11-20 07:23:20.386774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.066 [2024-11-20 07:23:20.386808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.066 qpair failed and we were unable to recover it. 
00:27:16.066 [2024-11-20 07:23:20.387096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.066 [2024-11-20 07:23:20.387130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.066 qpair failed and we were unable to recover it. 00:27:16.066 [2024-11-20 07:23:20.387336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.066 [2024-11-20 07:23:20.387369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.066 qpair failed and we were unable to recover it. 00:27:16.066 [2024-11-20 07:23:20.387550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.066 [2024-11-20 07:23:20.387584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.066 qpair failed and we were unable to recover it. 00:27:16.066 [2024-11-20 07:23:20.387907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.066 [2024-11-20 07:23:20.387944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.066 qpair failed and we were unable to recover it. 00:27:16.066 [2024-11-20 07:23:20.388154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.066 [2024-11-20 07:23:20.388189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.066 qpair failed and we were unable to recover it. 
00:27:16.066 [2024-11-20 07:23:20.388338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.066 [2024-11-20 07:23:20.388372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.066 qpair failed and we were unable to recover it. 00:27:16.066 [2024-11-20 07:23:20.388584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.066 [2024-11-20 07:23:20.388627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.066 qpair failed and we were unable to recover it. 00:27:16.066 [2024-11-20 07:23:20.388834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.066 [2024-11-20 07:23:20.388867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.066 qpair failed and we were unable to recover it. 00:27:16.066 [2024-11-20 07:23:20.388983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.066 [2024-11-20 07:23:20.389016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.066 qpair failed and we were unable to recover it. 00:27:16.066 [2024-11-20 07:23:20.389162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.066 [2024-11-20 07:23:20.389193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.066 qpair failed and we were unable to recover it. 
00:27:16.066 [2024-11-20 07:23:20.389302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.066 [2024-11-20 07:23:20.389336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.066 qpair failed and we were unable to recover it. 00:27:16.066 [2024-11-20 07:23:20.389523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.066 [2024-11-20 07:23:20.389556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.066 qpair failed and we were unable to recover it. 00:27:16.066 [2024-11-20 07:23:20.389830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.066 [2024-11-20 07:23:20.389865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.066 qpair failed and we were unable to recover it. 00:27:16.066 [2024-11-20 07:23:20.390065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.066 [2024-11-20 07:23:20.390100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.066 qpair failed and we were unable to recover it. 00:27:16.066 [2024-11-20 07:23:20.390293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.066 [2024-11-20 07:23:20.390324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.066 qpair failed and we were unable to recover it. 
00:27:16.066 [2024-11-20 07:23:20.390446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.066 [2024-11-20 07:23:20.390482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.066 qpair failed and we were unable to recover it. 00:27:16.066 [2024-11-20 07:23:20.390620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.066 [2024-11-20 07:23:20.390653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.066 qpair failed and we were unable to recover it. 00:27:16.066 [2024-11-20 07:23:20.390842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.066 [2024-11-20 07:23:20.390876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.066 qpair failed and we were unable to recover it. 00:27:16.066 [2024-11-20 07:23:20.391079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.066 [2024-11-20 07:23:20.391114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.066 qpair failed and we were unable to recover it. 00:27:16.066 [2024-11-20 07:23:20.391373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.066 [2024-11-20 07:23:20.391409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.066 qpair failed and we were unable to recover it. 
00:27:16.066 [2024-11-20 07:23:20.391554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.067 [2024-11-20 07:23:20.391587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.067 qpair failed and we were unable to recover it. 00:27:16.067 [2024-11-20 07:23:20.391721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.067 [2024-11-20 07:23:20.391753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.067 qpair failed and we were unable to recover it. 00:27:16.067 [2024-11-20 07:23:20.391970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.067 [2024-11-20 07:23:20.392007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.067 qpair failed and we were unable to recover it. 00:27:16.067 [2024-11-20 07:23:20.392133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.067 [2024-11-20 07:23:20.392168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.067 qpair failed and we were unable to recover it. 00:27:16.067 [2024-11-20 07:23:20.392364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.067 [2024-11-20 07:23:20.392397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.067 qpair failed and we were unable to recover it. 
00:27:16.067 [2024-11-20 07:23:20.392514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.067 [2024-11-20 07:23:20.392547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.067 qpair failed and we were unable to recover it. 00:27:16.067 [2024-11-20 07:23:20.392735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.067 [2024-11-20 07:23:20.392769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.067 qpair failed and we were unable to recover it. 00:27:16.067 [2024-11-20 07:23:20.392966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.067 [2024-11-20 07:23:20.393001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.067 qpair failed and we were unable to recover it. 00:27:16.067 [2024-11-20 07:23:20.393187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.067 [2024-11-20 07:23:20.393221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.067 qpair failed and we were unable to recover it. 00:27:16.067 [2024-11-20 07:23:20.393347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.067 [2024-11-20 07:23:20.393380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.067 qpair failed and we were unable to recover it. 
00:27:16.067 [2024-11-20 07:23:20.393525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.067 [2024-11-20 07:23:20.393558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.067 qpair failed and we were unable to recover it. 00:27:16.067 [2024-11-20 07:23:20.393706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.067 [2024-11-20 07:23:20.393738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.067 qpair failed and we were unable to recover it. 00:27:16.067 [2024-11-20 07:23:20.393964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.067 [2024-11-20 07:23:20.394000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.067 qpair failed and we were unable to recover it. 00:27:16.067 [2024-11-20 07:23:20.394193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.067 [2024-11-20 07:23:20.394227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.067 qpair failed and we were unable to recover it. 00:27:16.067 [2024-11-20 07:23:20.394409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.067 [2024-11-20 07:23:20.394440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.067 qpair failed and we were unable to recover it. 
00:27:16.067 [2024-11-20 07:23:20.394570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.067 [2024-11-20 07:23:20.394603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.067 qpair failed and we were unable to recover it. 00:27:16.067 [2024-11-20 07:23:20.394735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.067 [2024-11-20 07:23:20.394766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.067 qpair failed and we were unable to recover it. 00:27:16.067 [2024-11-20 07:23:20.394892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.067 [2024-11-20 07:23:20.394925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.067 qpair failed and we were unable to recover it. 00:27:16.067 [2024-11-20 07:23:20.395068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.067 [2024-11-20 07:23:20.395100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.067 qpair failed and we were unable to recover it. 00:27:16.067 [2024-11-20 07:23:20.395303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.067 [2024-11-20 07:23:20.395335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.067 qpair failed and we were unable to recover it. 
00:27:16.067 [2024-11-20 07:23:20.395521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.067 [2024-11-20 07:23:20.395555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:16.067 qpair failed and we were unable to recover it.
[... same connect() errno = 111 / nvme_tcp_qpair_connect_sock / "qpair failed" record repeated for tqpair=0x7f0e44000b90 (addr=10.0.0.2, port=4420) at timestamps 07:23:20.395751 through 07:23:20.421288 ...]
00:27:16.071 [2024-11-20 07:23:20.421395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.071 [2024-11-20 07:23:20.421427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.071 qpair failed and we were unable to recover it. 00:27:16.071 [2024-11-20 07:23:20.421633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.071 [2024-11-20 07:23:20.421664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.071 qpair failed and we were unable to recover it. 00:27:16.071 [2024-11-20 07:23:20.421787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.071 [2024-11-20 07:23:20.421820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.071 qpair failed and we were unable to recover it. 00:27:16.071 [2024-11-20 07:23:20.422014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.071 [2024-11-20 07:23:20.422050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.071 qpair failed and we were unable to recover it. 00:27:16.071 [2024-11-20 07:23:20.422173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.071 [2024-11-20 07:23:20.422205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.071 qpair failed and we were unable to recover it. 
00:27:16.071 [2024-11-20 07:23:20.422323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.071 [2024-11-20 07:23:20.422356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.071 qpair failed and we were unable to recover it. 00:27:16.071 [2024-11-20 07:23:20.422474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.071 [2024-11-20 07:23:20.422506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.071 qpair failed and we were unable to recover it. 00:27:16.071 [2024-11-20 07:23:20.422647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.071 [2024-11-20 07:23:20.422680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.071 qpair failed and we were unable to recover it. 00:27:16.071 [2024-11-20 07:23:20.422933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.071 [2024-11-20 07:23:20.422982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.071 qpair failed and we were unable to recover it. 00:27:16.071 [2024-11-20 07:23:20.423108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.071 [2024-11-20 07:23:20.423143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.071 qpair failed and we were unable to recover it. 
00:27:16.071 [2024-11-20 07:23:20.423334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.071 [2024-11-20 07:23:20.423368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.071 qpair failed and we were unable to recover it. 00:27:16.071 [2024-11-20 07:23:20.423493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.071 [2024-11-20 07:23:20.423525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.071 qpair failed and we were unable to recover it. 00:27:16.071 [2024-11-20 07:23:20.423678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.071 [2024-11-20 07:23:20.423709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.071 qpair failed and we were unable to recover it. 00:27:16.071 [2024-11-20 07:23:20.423853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.071 [2024-11-20 07:23:20.423886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.071 qpair failed and we were unable to recover it. 00:27:16.071 [2024-11-20 07:23:20.424097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.071 [2024-11-20 07:23:20.424132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.071 qpair failed and we were unable to recover it. 
00:27:16.071 [2024-11-20 07:23:20.424339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.071 [2024-11-20 07:23:20.424373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.071 qpair failed and we were unable to recover it. 00:27:16.071 [2024-11-20 07:23:20.424503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.071 [2024-11-20 07:23:20.424534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.071 qpair failed and we were unable to recover it. 00:27:16.071 [2024-11-20 07:23:20.424755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.071 [2024-11-20 07:23:20.424788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.072 qpair failed and we were unable to recover it. 00:27:16.072 [2024-11-20 07:23:20.424992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.072 [2024-11-20 07:23:20.425026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.072 qpair failed and we were unable to recover it. 00:27:16.072 [2024-11-20 07:23:20.425170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.072 [2024-11-20 07:23:20.425203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.072 qpair failed and we were unable to recover it. 
00:27:16.072 [2024-11-20 07:23:20.425321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.072 [2024-11-20 07:23:20.425353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.072 qpair failed and we were unable to recover it. 00:27:16.072 [2024-11-20 07:23:20.425494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.072 [2024-11-20 07:23:20.425526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.072 qpair failed and we were unable to recover it. 00:27:16.072 [2024-11-20 07:23:20.425705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.072 [2024-11-20 07:23:20.425740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.072 qpair failed and we were unable to recover it. 00:27:16.072 [2024-11-20 07:23:20.425899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.072 [2024-11-20 07:23:20.425932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.072 qpair failed and we were unable to recover it. 00:27:16.072 [2024-11-20 07:23:20.426136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.072 [2024-11-20 07:23:20.426169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.072 qpair failed and we were unable to recover it. 
00:27:16.072 [2024-11-20 07:23:20.426347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.072 [2024-11-20 07:23:20.426379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.072 qpair failed and we were unable to recover it. 00:27:16.072 [2024-11-20 07:23:20.426587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.072 [2024-11-20 07:23:20.426619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.072 qpair failed and we were unable to recover it. 00:27:16.072 [2024-11-20 07:23:20.426875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.072 [2024-11-20 07:23:20.426914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.072 qpair failed and we were unable to recover it. 00:27:16.072 [2024-11-20 07:23:20.427131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.072 [2024-11-20 07:23:20.427165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.072 qpair failed and we were unable to recover it. 00:27:16.072 [2024-11-20 07:23:20.427442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.072 [2024-11-20 07:23:20.427476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.072 qpair failed and we were unable to recover it. 
00:27:16.072 [2024-11-20 07:23:20.427680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.072 [2024-11-20 07:23:20.427714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.072 qpair failed and we were unable to recover it. 00:27:16.072 [2024-11-20 07:23:20.427918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.072 [2024-11-20 07:23:20.427961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.072 qpair failed and we were unable to recover it. 00:27:16.072 [2024-11-20 07:23:20.428098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.072 [2024-11-20 07:23:20.428133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.072 qpair failed and we were unable to recover it. 00:27:16.072 [2024-11-20 07:23:20.428344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.072 [2024-11-20 07:23:20.428381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.072 qpair failed and we were unable to recover it. 00:27:16.072 [2024-11-20 07:23:20.428492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.072 [2024-11-20 07:23:20.428525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.072 qpair failed and we were unable to recover it. 
00:27:16.072 [2024-11-20 07:23:20.428655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.072 [2024-11-20 07:23:20.428689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.072 qpair failed and we were unable to recover it. 00:27:16.072 [2024-11-20 07:23:20.428818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.072 [2024-11-20 07:23:20.428851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.072 qpair failed and we were unable to recover it. 00:27:16.072 [2024-11-20 07:23:20.429001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.072 [2024-11-20 07:23:20.429036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.072 qpair failed and we were unable to recover it. 00:27:16.072 [2024-11-20 07:23:20.429156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.072 [2024-11-20 07:23:20.429188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.072 qpair failed and we were unable to recover it. 00:27:16.072 [2024-11-20 07:23:20.429372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.072 [2024-11-20 07:23:20.429405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.072 qpair failed and we were unable to recover it. 
00:27:16.072 [2024-11-20 07:23:20.429529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.072 [2024-11-20 07:23:20.429563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.072 qpair failed and we were unable to recover it. 00:27:16.072 [2024-11-20 07:23:20.429779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.072 [2024-11-20 07:23:20.429810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.072 qpair failed and we were unable to recover it. 00:27:16.072 [2024-11-20 07:23:20.430016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.072 [2024-11-20 07:23:20.430052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.072 qpair failed and we were unable to recover it. 00:27:16.072 [2024-11-20 07:23:20.430178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.072 [2024-11-20 07:23:20.430212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.072 qpair failed and we were unable to recover it. 00:27:16.072 [2024-11-20 07:23:20.430326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.072 [2024-11-20 07:23:20.430357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.072 qpair failed and we were unable to recover it. 
00:27:16.072 [2024-11-20 07:23:20.430544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.072 [2024-11-20 07:23:20.430576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.072 qpair failed and we were unable to recover it. 00:27:16.072 [2024-11-20 07:23:20.430683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.072 [2024-11-20 07:23:20.430716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.072 qpair failed and we were unable to recover it. 00:27:16.072 [2024-11-20 07:23:20.430845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.072 [2024-11-20 07:23:20.430877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.072 qpair failed and we were unable to recover it. 00:27:16.072 [2024-11-20 07:23:20.431010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.073 [2024-11-20 07:23:20.431048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.073 qpair failed and we were unable to recover it. 00:27:16.073 [2024-11-20 07:23:20.431261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.073 [2024-11-20 07:23:20.431296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.073 qpair failed and we were unable to recover it. 
00:27:16.073 [2024-11-20 07:23:20.431524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.073 [2024-11-20 07:23:20.431557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.073 qpair failed and we were unable to recover it. 00:27:16.073 [2024-11-20 07:23:20.431679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.073 [2024-11-20 07:23:20.431711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.073 qpair failed and we were unable to recover it. 00:27:16.073 [2024-11-20 07:23:20.431900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.073 [2024-11-20 07:23:20.431932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.073 qpair failed and we were unable to recover it. 00:27:16.073 [2024-11-20 07:23:20.432066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.073 [2024-11-20 07:23:20.432099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.073 qpair failed and we were unable to recover it. 00:27:16.073 [2024-11-20 07:23:20.432240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.073 [2024-11-20 07:23:20.432274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.073 qpair failed and we were unable to recover it. 
00:27:16.073 [2024-11-20 07:23:20.432478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.073 [2024-11-20 07:23:20.432510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.073 qpair failed and we were unable to recover it. 00:27:16.073 [2024-11-20 07:23:20.432638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.073 [2024-11-20 07:23:20.432671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.073 qpair failed and we were unable to recover it. 00:27:16.073 [2024-11-20 07:23:20.432838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.073 [2024-11-20 07:23:20.432869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.073 qpair failed and we were unable to recover it. 00:27:16.073 [2024-11-20 07:23:20.433017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.073 [2024-11-20 07:23:20.433053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.073 qpair failed and we were unable to recover it. 00:27:16.073 [2024-11-20 07:23:20.433319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.073 [2024-11-20 07:23:20.433352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.073 qpair failed and we were unable to recover it. 
00:27:16.073 [2024-11-20 07:23:20.433469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.073 [2024-11-20 07:23:20.433504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.073 qpair failed and we were unable to recover it. 00:27:16.073 [2024-11-20 07:23:20.433624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.073 [2024-11-20 07:23:20.433656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.073 qpair failed and we were unable to recover it. 00:27:16.073 [2024-11-20 07:23:20.433787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.073 [2024-11-20 07:23:20.433820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.073 qpair failed and we were unable to recover it. 00:27:16.073 [2024-11-20 07:23:20.433944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.073 [2024-11-20 07:23:20.433987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.073 qpair failed and we were unable to recover it. 00:27:16.073 [2024-11-20 07:23:20.434098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.073 [2024-11-20 07:23:20.434129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.073 qpair failed and we were unable to recover it. 
00:27:16.073 [2024-11-20 07:23:20.434253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.073 [2024-11-20 07:23:20.434285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.073 qpair failed and we were unable to recover it. 00:27:16.073 [2024-11-20 07:23:20.434412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.073 [2024-11-20 07:23:20.434442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.073 qpair failed and we were unable to recover it. 00:27:16.073 [2024-11-20 07:23:20.434623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.073 [2024-11-20 07:23:20.434661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.073 qpair failed and we were unable to recover it. 00:27:16.073 [2024-11-20 07:23:20.434851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.073 [2024-11-20 07:23:20.434883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.073 qpair failed and we were unable to recover it. 00:27:16.073 [2024-11-20 07:23:20.435073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.073 [2024-11-20 07:23:20.435106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.073 qpair failed and we were unable to recover it. 
00:27:16.073 [2024-11-20 07:23:20.435295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.073 [2024-11-20 07:23:20.435327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.073 qpair failed and we were unable to recover it. 00:27:16.073 [2024-11-20 07:23:20.435520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.073 [2024-11-20 07:23:20.435551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.073 qpair failed and we were unable to recover it. 00:27:16.073 [2024-11-20 07:23:20.435676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.073 [2024-11-20 07:23:20.435708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.073 qpair failed and we were unable to recover it. 00:27:16.073 [2024-11-20 07:23:20.435896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.073 [2024-11-20 07:23:20.435928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.073 qpair failed and we were unable to recover it. 00:27:16.073 [2024-11-20 07:23:20.436142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.073 [2024-11-20 07:23:20.436175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.073 qpair failed and we were unable to recover it. 
00:27:16.073 [2024-11-20 07:23:20.436284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.073 [2024-11-20 07:23:20.436314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.073 qpair failed and we were unable to recover it. 00:27:16.073 [2024-11-20 07:23:20.436637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.073 [2024-11-20 07:23:20.436668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.073 qpair failed and we were unable to recover it. 00:27:16.073 [2024-11-20 07:23:20.436845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.073 [2024-11-20 07:23:20.436877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.073 qpair failed and we were unable to recover it. 00:27:16.073 [2024-11-20 07:23:20.437080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.073 [2024-11-20 07:23:20.437114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.073 qpair failed and we were unable to recover it. 00:27:16.073 [2024-11-20 07:23:20.437316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.073 [2024-11-20 07:23:20.437348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.073 qpair failed and we were unable to recover it. 
00:27:16.074 [2024-11-20 07:23:20.437502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.074 [2024-11-20 07:23:20.437535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.074 qpair failed and we were unable to recover it. 00:27:16.074 [2024-11-20 07:23:20.437746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.074 [2024-11-20 07:23:20.437778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.074 qpair failed and we were unable to recover it. 00:27:16.074 [2024-11-20 07:23:20.437988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.074 [2024-11-20 07:23:20.438022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.074 qpair failed and we were unable to recover it. 00:27:16.074 [2024-11-20 07:23:20.438169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.074 [2024-11-20 07:23:20.438200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.074 qpair failed and we were unable to recover it. 00:27:16.074 [2024-11-20 07:23:20.438335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.074 [2024-11-20 07:23:20.438368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.074 qpair failed and we were unable to recover it. 
00:27:16.074 [2024-11-20 07:23:20.438498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.074 [2024-11-20 07:23:20.438530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:16.074 qpair failed and we were unable to recover it.
00:27:16.074 [2024-11-20 07:23:20.438702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.074 [2024-11-20 07:23:20.438734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:16.074 qpair failed and we were unable to recover it.
00:27:16.074 [2024-11-20 07:23:20.438888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.074 [2024-11-20 07:23:20.438919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:16.074 qpair failed and we were unable to recover it.
00:27:16.074 [2024-11-20 07:23:20.439171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.074 [2024-11-20 07:23:20.439259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420
00:27:16.074 qpair failed and we were unable to recover it.
00:27:16.074 [2024-11-20 07:23:20.439534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.074 [2024-11-20 07:23:20.439610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:16.074 qpair failed and we were unable to recover it.
00:27:16.074 [2024-11-20 07:23:20.439862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.074 [2024-11-20 07:23:20.439899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:16.074 qpair failed and we were unable to recover it.
00:27:16.074 [2024-11-20 07:23:20.440161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.074 [2024-11-20 07:23:20.440197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:16.074 qpair failed and we were unable to recover it.
00:27:16.074 [2024-11-20 07:23:20.440356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.074 [2024-11-20 07:23:20.440388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:16.074 qpair failed and we were unable to recover it.
00:27:16.074 [2024-11-20 07:23:20.440519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.074 [2024-11-20 07:23:20.440551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:16.074 qpair failed and we were unable to recover it.
00:27:16.074 [2024-11-20 07:23:20.440725] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1029af0 is same with the state(6) to be set
00:27:16.074 [2024-11-20 07:23:20.441099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.074 [2024-11-20 07:23:20.441175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:16.074 qpair failed and we were unable to recover it.
00:27:16.074 [2024-11-20 07:23:20.441419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.074 [2024-11-20 07:23:20.441458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:16.074 qpair failed and we were unable to recover it.
00:27:16.074 [2024-11-20 07:23:20.441663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.074 [2024-11-20 07:23:20.441697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:16.074 qpair failed and we were unable to recover it.
00:27:16.074 [2024-11-20 07:23:20.441897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.074 [2024-11-20 07:23:20.441930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:16.074 qpair failed and we were unable to recover it.
00:27:16.074 [2024-11-20 07:23:20.442107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.074 [2024-11-20 07:23:20.442142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:16.074 qpair failed and we were unable to recover it.
00:27:16.074 [2024-11-20 07:23:20.442298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.074 [2024-11-20 07:23:20.442331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:16.074 qpair failed and we were unable to recover it.
00:27:16.074 [2024-11-20 07:23:20.442586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.074 [2024-11-20 07:23:20.442621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:16.074 qpair failed and we were unable to recover it.
00:27:16.074 [2024-11-20 07:23:20.442903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.074 [2024-11-20 07:23:20.442936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:16.074 qpair failed and we were unable to recover it.
00:27:16.074 [2024-11-20 07:23:20.443101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.074 [2024-11-20 07:23:20.443134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:16.074 qpair failed and we were unable to recover it.
00:27:16.074 [2024-11-20 07:23:20.443271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.074 [2024-11-20 07:23:20.443305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:16.074 qpair failed and we were unable to recover it.
00:27:16.074 [2024-11-20 07:23:20.443444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.074 [2024-11-20 07:23:20.443476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:16.074 qpair failed and we were unable to recover it.
00:27:16.074 [2024-11-20 07:23:20.443760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.074 [2024-11-20 07:23:20.443795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:16.074 qpair failed and we were unable to recover it.
00:27:16.074 [2024-11-20 07:23:20.443917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.074 [2024-11-20 07:23:20.443960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:16.074 qpair failed and we were unable to recover it.
00:27:16.074 [2024-11-20 07:23:20.444103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.074 [2024-11-20 07:23:20.444135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:16.074 qpair failed and we were unable to recover it.
00:27:16.074 [2024-11-20 07:23:20.444268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.074 [2024-11-20 07:23:20.444300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:16.074 qpair failed and we were unable to recover it.
00:27:16.074 [2024-11-20 07:23:20.444447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.074 [2024-11-20 07:23:20.444481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:16.074 qpair failed and we were unable to recover it.
00:27:16.074 [2024-11-20 07:23:20.444767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.074 [2024-11-20 07:23:20.444800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:16.075 qpair failed and we were unable to recover it.
00:27:16.075 [2024-11-20 07:23:20.445004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.075 [2024-11-20 07:23:20.445039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:16.075 qpair failed and we were unable to recover it.
00:27:16.075 [2024-11-20 07:23:20.445169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.075 [2024-11-20 07:23:20.445201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:16.075 qpair failed and we were unable to recover it.
00:27:16.075 [2024-11-20 07:23:20.445386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.075 [2024-11-20 07:23:20.445420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:16.075 qpair failed and we were unable to recover it.
00:27:16.075 [2024-11-20 07:23:20.445578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.075 [2024-11-20 07:23:20.445609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:16.075 qpair failed and we were unable to recover it.
00:27:16.075 [2024-11-20 07:23:20.445868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.075 [2024-11-20 07:23:20.445902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:16.075 qpair failed and we were unable to recover it.
00:27:16.075 [2024-11-20 07:23:20.446098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.075 [2024-11-20 07:23:20.446133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:16.075 qpair failed and we were unable to recover it.
00:27:16.075 [2024-11-20 07:23:20.446256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.075 [2024-11-20 07:23:20.446290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:16.075 qpair failed and we were unable to recover it.
00:27:16.075 [2024-11-20 07:23:20.446501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.075 [2024-11-20 07:23:20.446532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:16.075 qpair failed and we were unable to recover it.
00:27:16.075 [2024-11-20 07:23:20.446756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.075 [2024-11-20 07:23:20.446789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:16.075 qpair failed and we were unable to recover it.
00:27:16.075 [2024-11-20 07:23:20.447114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.075 [2024-11-20 07:23:20.447153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:16.075 qpair failed and we were unable to recover it.
00:27:16.075 [2024-11-20 07:23:20.447402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.075 [2024-11-20 07:23:20.447434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:16.075 qpair failed and we were unable to recover it.
00:27:16.075 [2024-11-20 07:23:20.447633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.075 [2024-11-20 07:23:20.447665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:16.075 qpair failed and we were unable to recover it.
00:27:16.075 [2024-11-20 07:23:20.447937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.075 [2024-11-20 07:23:20.447981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:16.075 qpair failed and we were unable to recover it.
00:27:16.075 [2024-11-20 07:23:20.448223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.075 [2024-11-20 07:23:20.448255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:16.075 qpair failed and we were unable to recover it.
00:27:16.075 [2024-11-20 07:23:20.448398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.075 [2024-11-20 07:23:20.448431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:16.075 qpair failed and we were unable to recover it.
00:27:16.075 [2024-11-20 07:23:20.448638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.075 [2024-11-20 07:23:20.448670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:16.075 qpair failed and we were unable to recover it.
00:27:16.075 [2024-11-20 07:23:20.448809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.075 [2024-11-20 07:23:20.448842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:16.075 qpair failed and we were unable to recover it.
00:27:16.075 [2024-11-20 07:23:20.449089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.075 [2024-11-20 07:23:20.449130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:16.075 qpair failed and we were unable to recover it.
00:27:16.075 [2024-11-20 07:23:20.449282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.075 [2024-11-20 07:23:20.449314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:16.075 qpair failed and we were unable to recover it.
00:27:16.075 [2024-11-20 07:23:20.449585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.075 [2024-11-20 07:23:20.449618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:16.075 qpair failed and we were unable to recover it.
00:27:16.075 [2024-11-20 07:23:20.449864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.075 [2024-11-20 07:23:20.449897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:16.075 qpair failed and we were unable to recover it.
00:27:16.075 [2024-11-20 07:23:20.450148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.075 [2024-11-20 07:23:20.450181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:16.075 qpair failed and we were unable to recover it.
00:27:16.075 [2024-11-20 07:23:20.450399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.075 [2024-11-20 07:23:20.450433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:16.075 qpair failed and we were unable to recover it.
00:27:16.075 [2024-11-20 07:23:20.450683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.075 [2024-11-20 07:23:20.450717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:16.075 qpair failed and we were unable to recover it.
00:27:16.075 [2024-11-20 07:23:20.450970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.075 [2024-11-20 07:23:20.451004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:16.075 qpair failed and we were unable to recover it.
00:27:16.075 [2024-11-20 07:23:20.451144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.075 [2024-11-20 07:23:20.451175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:16.075 qpair failed and we were unable to recover it.
00:27:16.075 [2024-11-20 07:23:20.451365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.075 [2024-11-20 07:23:20.451398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:16.075 qpair failed and we were unable to recover it.
00:27:16.075 [2024-11-20 07:23:20.451654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.075 [2024-11-20 07:23:20.451686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:16.075 qpair failed and we were unable to recover it.
00:27:16.075 [2024-11-20 07:23:20.451936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.075 [2024-11-20 07:23:20.451982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:16.075 qpair failed and we were unable to recover it.
00:27:16.075 [2024-11-20 07:23:20.452235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.075 [2024-11-20 07:23:20.452269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:16.075 qpair failed and we were unable to recover it.
00:27:16.075 [2024-11-20 07:23:20.452464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.075 [2024-11-20 07:23:20.452498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:16.075 qpair failed and we were unable to recover it.
00:27:16.075 [2024-11-20 07:23:20.452636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.076 [2024-11-20 07:23:20.452672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:16.076 qpair failed and we were unable to recover it.
00:27:16.076 [2024-11-20 07:23:20.452943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.076 [2024-11-20 07:23:20.452993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:16.076 qpair failed and we were unable to recover it.
00:27:16.076 [2024-11-20 07:23:20.453139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.076 [2024-11-20 07:23:20.453172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:16.076 qpair failed and we were unable to recover it.
00:27:16.076 [2024-11-20 07:23:20.453359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.076 [2024-11-20 07:23:20.453393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:16.076 qpair failed and we were unable to recover it.
00:27:16.076 [2024-11-20 07:23:20.453541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.076 [2024-11-20 07:23:20.453575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:16.076 qpair failed and we were unable to recover it.
00:27:16.076 [2024-11-20 07:23:20.453784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.076 [2024-11-20 07:23:20.453817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:16.076 qpair failed and we were unable to recover it.
00:27:16.076 [2024-11-20 07:23:20.454111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.076 [2024-11-20 07:23:20.454147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:16.076 qpair failed and we were unable to recover it.
00:27:16.076 [2024-11-20 07:23:20.454345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.076 [2024-11-20 07:23:20.454378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:16.076 qpair failed and we were unable to recover it.
00:27:16.076 [2024-11-20 07:23:20.454569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.076 [2024-11-20 07:23:20.454603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:16.076 qpair failed and we were unable to recover it.
00:27:16.076 [2024-11-20 07:23:20.454900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.076 [2024-11-20 07:23:20.454933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:16.076 qpair failed and we were unable to recover it.
00:27:16.076 [2024-11-20 07:23:20.455221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.076 [2024-11-20 07:23:20.455254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:16.076 qpair failed and we were unable to recover it.
00:27:16.076 [2024-11-20 07:23:20.455460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.076 [2024-11-20 07:23:20.455495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:16.076 qpair failed and we were unable to recover it.
00:27:16.076 [2024-11-20 07:23:20.455759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.076 [2024-11-20 07:23:20.455794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:16.076 qpair failed and we were unable to recover it.
00:27:16.076 [2024-11-20 07:23:20.455997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.076 [2024-11-20 07:23:20.456033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:16.076 qpair failed and we were unable to recover it.
00:27:16.076 [2024-11-20 07:23:20.456195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.076 [2024-11-20 07:23:20.456228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:16.076 qpair failed and we were unable to recover it.
00:27:16.076 [2024-11-20 07:23:20.456363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.076 [2024-11-20 07:23:20.456396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:16.076 qpair failed and we were unable to recover it.
00:27:16.076 [2024-11-20 07:23:20.456543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.076 [2024-11-20 07:23:20.456576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:16.076 qpair failed and we were unable to recover it.
00:27:16.076 [2024-11-20 07:23:20.456834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.076 [2024-11-20 07:23:20.456866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:16.076 qpair failed and we were unable to recover it.
00:27:16.076 [2024-11-20 07:23:20.457067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.076 [2024-11-20 07:23:20.457108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:16.076 qpair failed and we were unable to recover it.
00:27:16.076 [2024-11-20 07:23:20.457291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.076 [2024-11-20 07:23:20.457325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:16.076 qpair failed and we were unable to recover it.
00:27:16.076 [2024-11-20 07:23:20.457608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.076 [2024-11-20 07:23:20.457643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:16.076 qpair failed and we were unable to recover it.
00:27:16.076 [2024-11-20 07:23:20.457834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.076 [2024-11-20 07:23:20.457868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:16.076 qpair failed and we were unable to recover it.
00:27:16.076 [2024-11-20 07:23:20.458054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.076 [2024-11-20 07:23:20.458089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:16.076 qpair failed and we were unable to recover it.
00:27:16.076 [2024-11-20 07:23:20.458237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.076 [2024-11-20 07:23:20.458269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:16.076 qpair failed and we were unable to recover it.
00:27:16.076 [2024-11-20 07:23:20.458476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.076 [2024-11-20 07:23:20.458509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:16.076 qpair failed and we were unable to recover it.
00:27:16.076 [2024-11-20 07:23:20.458708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.076 [2024-11-20 07:23:20.458740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:16.076 qpair failed and we were unable to recover it.
00:27:16.076 [2024-11-20 07:23:20.459036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.076 [2024-11-20 07:23:20.459071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:16.076 qpair failed and we were unable to recover it.
00:27:16.076 [2024-11-20 07:23:20.459217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.076 [2024-11-20 07:23:20.459249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:16.076 qpair failed and we were unable to recover it.
00:27:16.076 [2024-11-20 07:23:20.459449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.076 [2024-11-20 07:23:20.459482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:16.076 qpair failed and we were unable to recover it.
00:27:16.076 [2024-11-20 07:23:20.459725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.076 [2024-11-20 07:23:20.459759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:16.076 qpair failed and we were unable to recover it.
00:27:16.076 [2024-11-20 07:23:20.459879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.076 [2024-11-20 07:23:20.459913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:16.076 qpair failed and we were unable to recover it.
00:27:16.076 [2024-11-20 07:23:20.460092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.077 [2024-11-20 07:23:20.460128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:16.077 qpair failed and we were unable to recover it.
00:27:16.077 [2024-11-20 07:23:20.460337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.077 [2024-11-20 07:23:20.460371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:16.077 qpair failed and we were unable to recover it.
00:27:16.077 [2024-11-20 07:23:20.460582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.077 [2024-11-20 07:23:20.460615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:16.077 qpair failed and we were unable to recover it.
00:27:16.077 [2024-11-20 07:23:20.460891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.077 [2024-11-20 07:23:20.460923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:16.077 qpair failed and we were unable to recover it.
00:27:16.077 [2024-11-20 07:23:20.461063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.077 [2024-11-20 07:23:20.461097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:16.077 qpair failed and we were unable to recover it.
00:27:16.077 [2024-11-20 07:23:20.461241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.077 [2024-11-20 07:23:20.461273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:16.077 qpair failed and we were unable to recover it.
00:27:16.077 [2024-11-20 07:23:20.461417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.077 [2024-11-20 07:23:20.461449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:16.077 qpair failed and we were unable to recover it.
00:27:16.077 [2024-11-20 07:23:20.461681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.077 [2024-11-20 07:23:20.461714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.077 qpair failed and we were unable to recover it. 00:27:16.077 [2024-11-20 07:23:20.462011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.077 [2024-11-20 07:23:20.462046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.077 qpair failed and we were unable to recover it. 00:27:16.077 [2024-11-20 07:23:20.462252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.077 [2024-11-20 07:23:20.462285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.077 qpair failed and we were unable to recover it. 00:27:16.077 [2024-11-20 07:23:20.462416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.077 [2024-11-20 07:23:20.462448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.077 qpair failed and we were unable to recover it. 00:27:16.077 [2024-11-20 07:23:20.462795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.077 [2024-11-20 07:23:20.462832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.077 qpair failed and we were unable to recover it. 
00:27:16.077 [2024-11-20 07:23:20.462964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.077 [2024-11-20 07:23:20.462998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.077 qpair failed and we were unable to recover it. 00:27:16.077 [2024-11-20 07:23:20.463223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.077 [2024-11-20 07:23:20.463256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.077 qpair failed and we were unable to recover it. 00:27:16.077 [2024-11-20 07:23:20.463521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.077 [2024-11-20 07:23:20.463554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.077 qpair failed and we were unable to recover it. 00:27:16.077 [2024-11-20 07:23:20.463821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.077 [2024-11-20 07:23:20.463856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.077 qpair failed and we were unable to recover it. 00:27:16.077 [2024-11-20 07:23:20.464083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.077 [2024-11-20 07:23:20.464118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.077 qpair failed and we were unable to recover it. 
00:27:16.077 [2024-11-20 07:23:20.464309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.077 [2024-11-20 07:23:20.464341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.077 qpair failed and we were unable to recover it. 00:27:16.077 [2024-11-20 07:23:20.464559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.077 [2024-11-20 07:23:20.464592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.077 qpair failed and we were unable to recover it. 00:27:16.077 [2024-11-20 07:23:20.464791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.077 [2024-11-20 07:23:20.464824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.077 qpair failed and we were unable to recover it. 00:27:16.077 [2024-11-20 07:23:20.465085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.077 [2024-11-20 07:23:20.465118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.077 qpair failed and we were unable to recover it. 00:27:16.077 [2024-11-20 07:23:20.465309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.077 [2024-11-20 07:23:20.465342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.077 qpair failed and we were unable to recover it. 
00:27:16.077 [2024-11-20 07:23:20.465475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.077 [2024-11-20 07:23:20.465508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.077 qpair failed and we were unable to recover it. 00:27:16.077 [2024-11-20 07:23:20.465781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.077 [2024-11-20 07:23:20.465813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.077 qpair failed and we were unable to recover it. 00:27:16.077 [2024-11-20 07:23:20.466018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.077 [2024-11-20 07:23:20.466053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.077 qpair failed and we were unable to recover it. 00:27:16.077 [2024-11-20 07:23:20.466177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.077 [2024-11-20 07:23:20.466210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.077 qpair failed and we were unable to recover it. 00:27:16.077 [2024-11-20 07:23:20.466485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.077 [2024-11-20 07:23:20.466518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.077 qpair failed and we were unable to recover it. 
00:27:16.077 [2024-11-20 07:23:20.466733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.078 [2024-11-20 07:23:20.466772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.078 qpair failed and we were unable to recover it. 00:27:16.078 [2024-11-20 07:23:20.466977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.078 [2024-11-20 07:23:20.467012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.078 qpair failed and we were unable to recover it. 00:27:16.078 [2024-11-20 07:23:20.467217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.078 [2024-11-20 07:23:20.467251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.078 qpair failed and we were unable to recover it. 00:27:16.078 [2024-11-20 07:23:20.467445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.078 [2024-11-20 07:23:20.467478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.078 qpair failed and we were unable to recover it. 00:27:16.078 [2024-11-20 07:23:20.467672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.078 [2024-11-20 07:23:20.467704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.078 qpair failed and we were unable to recover it. 
00:27:16.078 [2024-11-20 07:23:20.467971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.078 [2024-11-20 07:23:20.468004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.078 qpair failed and we were unable to recover it. 00:27:16.078 [2024-11-20 07:23:20.468142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.078 [2024-11-20 07:23:20.468174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.078 qpair failed and we were unable to recover it. 00:27:16.078 [2024-11-20 07:23:20.468380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.078 [2024-11-20 07:23:20.468411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.078 qpair failed and we were unable to recover it. 00:27:16.078 [2024-11-20 07:23:20.468614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.078 [2024-11-20 07:23:20.468646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.078 qpair failed and we were unable to recover it. 00:27:16.078 [2024-11-20 07:23:20.468851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.078 [2024-11-20 07:23:20.468885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.078 qpair failed and we were unable to recover it. 
00:27:16.078 [2024-11-20 07:23:20.469180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.078 [2024-11-20 07:23:20.469216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.078 qpair failed and we were unable to recover it. 00:27:16.078 [2024-11-20 07:23:20.469490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.078 [2024-11-20 07:23:20.469523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.078 qpair failed and we were unable to recover it. 00:27:16.078 [2024-11-20 07:23:20.469752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.078 [2024-11-20 07:23:20.469785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.078 qpair failed and we were unable to recover it. 00:27:16.078 [2024-11-20 07:23:20.469940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.078 [2024-11-20 07:23:20.469988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.078 qpair failed and we were unable to recover it. 00:27:16.078 [2024-11-20 07:23:20.470267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.078 [2024-11-20 07:23:20.470300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.078 qpair failed and we were unable to recover it. 
00:27:16.078 [2024-11-20 07:23:20.470610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.078 [2024-11-20 07:23:20.470642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.078 qpair failed and we were unable to recover it. 00:27:16.078 [2024-11-20 07:23:20.470920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.078 [2024-11-20 07:23:20.470968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.078 qpair failed and we were unable to recover it. 00:27:16.078 [2024-11-20 07:23:20.471179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.078 [2024-11-20 07:23:20.471212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.078 qpair failed and we were unable to recover it. 00:27:16.078 [2024-11-20 07:23:20.471350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.078 [2024-11-20 07:23:20.471383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.078 qpair failed and we were unable to recover it. 00:27:16.078 [2024-11-20 07:23:20.471640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.078 [2024-11-20 07:23:20.471674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.078 qpair failed and we were unable to recover it. 
00:27:16.078 [2024-11-20 07:23:20.471939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.078 [2024-11-20 07:23:20.471986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.078 qpair failed and we were unable to recover it. 00:27:16.078 [2024-11-20 07:23:20.472138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.078 [2024-11-20 07:23:20.472171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.078 qpair failed and we were unable to recover it. 00:27:16.078 [2024-11-20 07:23:20.472284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.078 [2024-11-20 07:23:20.472315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.078 qpair failed and we were unable to recover it. 00:27:16.078 [2024-11-20 07:23:20.472570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.078 [2024-11-20 07:23:20.472603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.078 qpair failed and we were unable to recover it. 00:27:16.078 [2024-11-20 07:23:20.472781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.078 [2024-11-20 07:23:20.472817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.078 qpair failed and we were unable to recover it. 
00:27:16.078 [2024-11-20 07:23:20.473084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.078 [2024-11-20 07:23:20.473120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.078 qpair failed and we were unable to recover it. 00:27:16.078 [2024-11-20 07:23:20.473376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.078 [2024-11-20 07:23:20.473408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.078 qpair failed and we were unable to recover it. 00:27:16.078 [2024-11-20 07:23:20.473723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.078 [2024-11-20 07:23:20.473756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.078 qpair failed and we were unable to recover it. 00:27:16.078 [2024-11-20 07:23:20.473944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.078 [2024-11-20 07:23:20.474006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.078 qpair failed and we were unable to recover it. 00:27:16.078 [2024-11-20 07:23:20.474152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.078 [2024-11-20 07:23:20.474185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.078 qpair failed and we were unable to recover it. 
00:27:16.078 [2024-11-20 07:23:20.474382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.078 [2024-11-20 07:23:20.474415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.078 qpair failed and we were unable to recover it. 00:27:16.078 [2024-11-20 07:23:20.474705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.079 [2024-11-20 07:23:20.474737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.079 qpair failed and we were unable to recover it. 00:27:16.079 [2024-11-20 07:23:20.474928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.079 [2024-11-20 07:23:20.474972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.079 qpair failed and we were unable to recover it. 00:27:16.079 [2024-11-20 07:23:20.475275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.079 [2024-11-20 07:23:20.475309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.079 qpair failed and we were unable to recover it. 00:27:16.079 [2024-11-20 07:23:20.475534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.079 [2024-11-20 07:23:20.475566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.079 qpair failed and we were unable to recover it. 
00:27:16.079 [2024-11-20 07:23:20.475827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.079 [2024-11-20 07:23:20.475860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.079 qpair failed and we were unable to recover it. 00:27:16.079 [2024-11-20 07:23:20.476164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.079 [2024-11-20 07:23:20.476198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.079 qpair failed and we were unable to recover it. 00:27:16.079 [2024-11-20 07:23:20.476403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.079 [2024-11-20 07:23:20.476436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.079 qpair failed and we were unable to recover it. 00:27:16.079 [2024-11-20 07:23:20.476655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.079 [2024-11-20 07:23:20.476688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.079 qpair failed and we were unable to recover it. 00:27:16.079 [2024-11-20 07:23:20.476976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.079 [2024-11-20 07:23:20.477011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.079 qpair failed and we were unable to recover it. 
00:27:16.079 [2024-11-20 07:23:20.477302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.079 [2024-11-20 07:23:20.477341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.079 qpair failed and we were unable to recover it. 00:27:16.079 [2024-11-20 07:23:20.477568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.079 [2024-11-20 07:23:20.477601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.079 qpair failed and we were unable to recover it. 00:27:16.079 [2024-11-20 07:23:20.477904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.079 [2024-11-20 07:23:20.477938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.079 qpair failed and we were unable to recover it. 00:27:16.079 [2024-11-20 07:23:20.478157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.079 [2024-11-20 07:23:20.478191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.079 qpair failed and we were unable to recover it. 00:27:16.079 [2024-11-20 07:23:20.478392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.079 [2024-11-20 07:23:20.478426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.079 qpair failed and we were unable to recover it. 
00:27:16.079 [2024-11-20 07:23:20.478702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.079 [2024-11-20 07:23:20.478736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.079 qpair failed and we were unable to recover it. 00:27:16.079 [2024-11-20 07:23:20.478992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.079 [2024-11-20 07:23:20.479028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.079 qpair failed and we were unable to recover it. 00:27:16.079 [2024-11-20 07:23:20.479232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.079 [2024-11-20 07:23:20.479265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.079 qpair failed and we were unable to recover it. 00:27:16.079 [2024-11-20 07:23:20.479416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.079 [2024-11-20 07:23:20.479450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.079 qpair failed and we were unable to recover it. 00:27:16.079 [2024-11-20 07:23:20.479669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.079 [2024-11-20 07:23:20.479701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.079 qpair failed and we were unable to recover it. 
00:27:16.079 [2024-11-20 07:23:20.479980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.079 [2024-11-20 07:23:20.480014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.079 qpair failed and we were unable to recover it. 00:27:16.079 [2024-11-20 07:23:20.480226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.079 [2024-11-20 07:23:20.480259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.079 qpair failed and we were unable to recover it. 00:27:16.079 [2024-11-20 07:23:20.480514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.079 [2024-11-20 07:23:20.480548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.079 qpair failed and we were unable to recover it. 00:27:16.079 [2024-11-20 07:23:20.480825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.079 [2024-11-20 07:23:20.480858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.079 qpair failed and we were unable to recover it. 00:27:16.079 [2024-11-20 07:23:20.481120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.079 [2024-11-20 07:23:20.481155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.079 qpair failed and we were unable to recover it. 
00:27:16.079 [2024-11-20 07:23:20.481454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.079 [2024-11-20 07:23:20.481488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.079 qpair failed and we were unable to recover it. 00:27:16.079 [2024-11-20 07:23:20.481705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.079 [2024-11-20 07:23:20.481738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.079 qpair failed and we were unable to recover it. 00:27:16.079 [2024-11-20 07:23:20.482063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.079 [2024-11-20 07:23:20.482099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.079 qpair failed and we were unable to recover it. 00:27:16.079 [2024-11-20 07:23:20.482264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.079 [2024-11-20 07:23:20.482297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.079 qpair failed and we were unable to recover it. 00:27:16.079 [2024-11-20 07:23:20.482582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.079 [2024-11-20 07:23:20.482614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.079 qpair failed and we were unable to recover it. 
00:27:16.079 [2024-11-20 07:23:20.482897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.079 [2024-11-20 07:23:20.482934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.079 qpair failed and we were unable to recover it. 00:27:16.079 [2024-11-20 07:23:20.483216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.079 [2024-11-20 07:23:20.483250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.079 qpair failed and we were unable to recover it. 00:27:16.079 [2024-11-20 07:23:20.483526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.079 [2024-11-20 07:23:20.483558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.080 qpair failed and we were unable to recover it. 00:27:16.080 [2024-11-20 07:23:20.483761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.080 [2024-11-20 07:23:20.483795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.080 qpair failed and we were unable to recover it. 00:27:16.080 [2024-11-20 07:23:20.484055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.080 [2024-11-20 07:23:20.484090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.080 qpair failed and we were unable to recover it. 
00:27:16.080 [2024-11-20 07:23:20.484242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.080 [2024-11-20 07:23:20.484276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.080 qpair failed and we were unable to recover it. 
[log condensed: the same three-line error sequence repeated continuously from 07:23:20.484458 through 07:23:20.513523 — connect() failed with errno = 111 (ECONNREFUSED), followed by a sock connection error for tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420, and "qpair failed and we were unable to recover it." on every attempt]
00:27:16.084 [2024-11-20 07:23:20.513718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.084 [2024-11-20 07:23:20.513752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.084 qpair failed and we were unable to recover it. 00:27:16.084 [2024-11-20 07:23:20.513888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.084 [2024-11-20 07:23:20.513921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.084 qpair failed and we were unable to recover it. 00:27:16.084 [2024-11-20 07:23:20.514135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.084 [2024-11-20 07:23:20.514167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.084 qpair failed and we were unable to recover it. 00:27:16.084 [2024-11-20 07:23:20.514373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.084 [2024-11-20 07:23:20.514404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.084 qpair failed and we were unable to recover it. 00:27:16.084 [2024-11-20 07:23:20.514553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.084 [2024-11-20 07:23:20.514584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.084 qpair failed and we were unable to recover it. 
00:27:16.084 [2024-11-20 07:23:20.514789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.084 [2024-11-20 07:23:20.514822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.084 qpair failed and we were unable to recover it. 00:27:16.084 [2024-11-20 07:23:20.515020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.084 [2024-11-20 07:23:20.515054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.084 qpair failed and we were unable to recover it. 00:27:16.084 [2024-11-20 07:23:20.515190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.084 [2024-11-20 07:23:20.515223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.084 qpair failed and we were unable to recover it. 00:27:16.084 [2024-11-20 07:23:20.515352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.084 [2024-11-20 07:23:20.515382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.084 qpair failed and we were unable to recover it. 00:27:16.084 [2024-11-20 07:23:20.515504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.084 [2024-11-20 07:23:20.515537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.084 qpair failed and we were unable to recover it. 
00:27:16.084 [2024-11-20 07:23:20.515671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.084 [2024-11-20 07:23:20.515703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.084 qpair failed and we were unable to recover it. 00:27:16.084 [2024-11-20 07:23:20.515984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.084 [2024-11-20 07:23:20.516020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.084 qpair failed and we were unable to recover it. 00:27:16.084 [2024-11-20 07:23:20.516211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.084 [2024-11-20 07:23:20.516244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.084 qpair failed and we were unable to recover it. 00:27:16.084 [2024-11-20 07:23:20.516376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.084 [2024-11-20 07:23:20.516408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.084 qpair failed and we were unable to recover it. 00:27:16.084 [2024-11-20 07:23:20.516686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.084 [2024-11-20 07:23:20.516719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.084 qpair failed and we were unable to recover it. 
00:27:16.084 [2024-11-20 07:23:20.516919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.084 [2024-11-20 07:23:20.516966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.084 qpair failed and we were unable to recover it. 00:27:16.084 [2024-11-20 07:23:20.517110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.084 [2024-11-20 07:23:20.517143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.084 qpair failed and we were unable to recover it. 00:27:16.084 [2024-11-20 07:23:20.517355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.084 [2024-11-20 07:23:20.517388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.084 qpair failed and we were unable to recover it. 00:27:16.084 [2024-11-20 07:23:20.517712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.084 [2024-11-20 07:23:20.517744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.084 qpair failed and we were unable to recover it. 00:27:16.084 [2024-11-20 07:23:20.518023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.084 [2024-11-20 07:23:20.518058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.084 qpair failed and we were unable to recover it. 
00:27:16.084 [2024-11-20 07:23:20.518312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.084 [2024-11-20 07:23:20.518351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.084 qpair failed and we were unable to recover it. 00:27:16.084 [2024-11-20 07:23:20.518702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.084 [2024-11-20 07:23:20.518734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.084 qpair failed and we were unable to recover it. 00:27:16.084 [2024-11-20 07:23:20.518995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.084 [2024-11-20 07:23:20.519029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.084 qpair failed and we were unable to recover it. 00:27:16.084 [2024-11-20 07:23:20.519231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.084 [2024-11-20 07:23:20.519264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.084 qpair failed and we were unable to recover it. 00:27:16.084 [2024-11-20 07:23:20.519455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.084 [2024-11-20 07:23:20.519486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.084 qpair failed and we were unable to recover it. 
00:27:16.084 [2024-11-20 07:23:20.519714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.084 [2024-11-20 07:23:20.519747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.084 qpair failed and we were unable to recover it. 00:27:16.084 [2024-11-20 07:23:20.519982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.084 [2024-11-20 07:23:20.520017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.084 qpair failed and we were unable to recover it. 00:27:16.084 [2024-11-20 07:23:20.520237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.084 [2024-11-20 07:23:20.520269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.084 qpair failed and we were unable to recover it. 00:27:16.084 [2024-11-20 07:23:20.520455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.084 [2024-11-20 07:23:20.520487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.084 qpair failed and we were unable to recover it. 00:27:16.084 [2024-11-20 07:23:20.520745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.084 [2024-11-20 07:23:20.520778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.084 qpair failed and we were unable to recover it. 
00:27:16.084 [2024-11-20 07:23:20.520903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.085 [2024-11-20 07:23:20.520937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.085 qpair failed and we were unable to recover it. 00:27:16.085 [2024-11-20 07:23:20.521165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.085 [2024-11-20 07:23:20.521198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.085 qpair failed and we were unable to recover it. 00:27:16.085 [2024-11-20 07:23:20.521336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.085 [2024-11-20 07:23:20.521369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.085 qpair failed and we were unable to recover it. 00:27:16.085 [2024-11-20 07:23:20.521584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.085 [2024-11-20 07:23:20.521616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.085 qpair failed and we were unable to recover it. 00:27:16.085 [2024-11-20 07:23:20.521824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.085 [2024-11-20 07:23:20.521855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.085 qpair failed and we were unable to recover it. 
00:27:16.085 [2024-11-20 07:23:20.522117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.085 [2024-11-20 07:23:20.522151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.085 qpair failed and we were unable to recover it. 00:27:16.085 [2024-11-20 07:23:20.522380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.085 [2024-11-20 07:23:20.522413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.085 qpair failed and we were unable to recover it. 00:27:16.085 [2024-11-20 07:23:20.522609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.085 [2024-11-20 07:23:20.522642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.085 qpair failed and we were unable to recover it. 00:27:16.085 [2024-11-20 07:23:20.522893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.085 [2024-11-20 07:23:20.522928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.085 qpair failed and we were unable to recover it. 00:27:16.085 [2024-11-20 07:23:20.523140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.085 [2024-11-20 07:23:20.523174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.085 qpair failed and we were unable to recover it. 
00:27:16.085 [2024-11-20 07:23:20.523377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.085 [2024-11-20 07:23:20.523410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.085 qpair failed and we were unable to recover it. 00:27:16.085 [2024-11-20 07:23:20.523788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.085 [2024-11-20 07:23:20.523821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.085 qpair failed and we were unable to recover it. 00:27:16.085 [2024-11-20 07:23:20.524038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.085 [2024-11-20 07:23:20.524073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.085 qpair failed and we were unable to recover it. 00:27:16.085 [2024-11-20 07:23:20.524285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.085 [2024-11-20 07:23:20.524318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.085 qpair failed and we were unable to recover it. 00:27:16.085 [2024-11-20 07:23:20.524613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.085 [2024-11-20 07:23:20.524648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.085 qpair failed and we were unable to recover it. 
00:27:16.085 [2024-11-20 07:23:20.524897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.085 [2024-11-20 07:23:20.524931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.085 qpair failed and we were unable to recover it. 00:27:16.085 [2024-11-20 07:23:20.525154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.085 [2024-11-20 07:23:20.525187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.085 qpair failed and we were unable to recover it. 00:27:16.085 [2024-11-20 07:23:20.525390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.085 [2024-11-20 07:23:20.525424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.085 qpair failed and we were unable to recover it. 00:27:16.085 [2024-11-20 07:23:20.525674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.085 [2024-11-20 07:23:20.525707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.085 qpair failed and we were unable to recover it. 00:27:16.085 [2024-11-20 07:23:20.525928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.085 [2024-11-20 07:23:20.525975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.085 qpair failed and we were unable to recover it. 
00:27:16.085 [2024-11-20 07:23:20.526232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.085 [2024-11-20 07:23:20.526266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.085 qpair failed and we were unable to recover it. 00:27:16.085 [2024-11-20 07:23:20.526408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.085 [2024-11-20 07:23:20.526442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.085 qpair failed and we were unable to recover it. 00:27:16.085 [2024-11-20 07:23:20.526720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.085 [2024-11-20 07:23:20.526754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.085 qpair failed and we were unable to recover it. 00:27:16.085 [2024-11-20 07:23:20.526969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.085 [2024-11-20 07:23:20.527004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.085 qpair failed and we were unable to recover it. 00:27:16.085 [2024-11-20 07:23:20.527194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.085 [2024-11-20 07:23:20.527229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.085 qpair failed and we were unable to recover it. 
00:27:16.085 [2024-11-20 07:23:20.527487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.085 [2024-11-20 07:23:20.527521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.085 qpair failed and we were unable to recover it. 00:27:16.085 [2024-11-20 07:23:20.527846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.085 [2024-11-20 07:23:20.527879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.085 qpair failed and we were unable to recover it. 00:27:16.085 [2024-11-20 07:23:20.528016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.085 [2024-11-20 07:23:20.528051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.085 qpair failed and we were unable to recover it. 00:27:16.085 [2024-11-20 07:23:20.528234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.085 [2024-11-20 07:23:20.528268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.085 qpair failed and we were unable to recover it. 00:27:16.085 [2024-11-20 07:23:20.528549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.086 [2024-11-20 07:23:20.528581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.086 qpair failed and we were unable to recover it. 
00:27:16.086 [2024-11-20 07:23:20.528788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.086 [2024-11-20 07:23:20.528827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.086 qpair failed and we were unable to recover it. 00:27:16.086 [2024-11-20 07:23:20.529097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.086 [2024-11-20 07:23:20.529131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.086 qpair failed and we were unable to recover it. 00:27:16.086 [2024-11-20 07:23:20.529353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.086 [2024-11-20 07:23:20.529386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.086 qpair failed and we were unable to recover it. 00:27:16.086 [2024-11-20 07:23:20.529659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.086 [2024-11-20 07:23:20.529692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.086 qpair failed and we were unable to recover it. 00:27:16.086 [2024-11-20 07:23:20.529985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.086 [2024-11-20 07:23:20.530019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.086 qpair failed and we were unable to recover it. 
00:27:16.086 [2024-11-20 07:23:20.530212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.086 [2024-11-20 07:23:20.530246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.086 qpair failed and we were unable to recover it. 00:27:16.086 [2024-11-20 07:23:20.530527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.086 [2024-11-20 07:23:20.530561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.086 qpair failed and we were unable to recover it. 00:27:16.086 [2024-11-20 07:23:20.530818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.086 [2024-11-20 07:23:20.530852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.086 qpair failed and we were unable to recover it. 00:27:16.086 [2024-11-20 07:23:20.531147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.086 [2024-11-20 07:23:20.531183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.086 qpair failed and we were unable to recover it. 00:27:16.086 [2024-11-20 07:23:20.531338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.086 [2024-11-20 07:23:20.531372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.086 qpair failed and we were unable to recover it. 
00:27:16.086 [2024-11-20 07:23:20.531599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.086 [2024-11-20 07:23:20.531632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.086 qpair failed and we were unable to recover it. 00:27:16.086 [2024-11-20 07:23:20.531893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.086 [2024-11-20 07:23:20.531925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.086 qpair failed and we were unable to recover it. 00:27:16.086 [2024-11-20 07:23:20.532209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.086 [2024-11-20 07:23:20.532242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.086 qpair failed and we were unable to recover it. 00:27:16.086 [2024-11-20 07:23:20.532617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.086 [2024-11-20 07:23:20.532649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.086 qpair failed and we were unable to recover it. 00:27:16.086 [2024-11-20 07:23:20.532856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.086 [2024-11-20 07:23:20.532888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.086 qpair failed and we were unable to recover it. 
00:27:16.086 [2024-11-20 07:23:20.533074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.086 [2024-11-20 07:23:20.533108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.086 qpair failed and we were unable to recover it. 00:27:16.086 [2024-11-20 07:23:20.533334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.086 [2024-11-20 07:23:20.533367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.086 qpair failed and we were unable to recover it. 00:27:16.086 [2024-11-20 07:23:20.533595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.086 [2024-11-20 07:23:20.533628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.086 qpair failed and we were unable to recover it. 00:27:16.086 [2024-11-20 07:23:20.533883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.086 [2024-11-20 07:23:20.533916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.086 qpair failed and we were unable to recover it. 00:27:16.086 [2024-11-20 07:23:20.534183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.086 [2024-11-20 07:23:20.534216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.086 qpair failed and we were unable to recover it. 
00:27:16.086 [2024-11-20 07:23:20.534499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.086 [2024-11-20 07:23:20.534532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.086 qpair failed and we were unable to recover it. 00:27:16.086 [2024-11-20 07:23:20.534808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.086 [2024-11-20 07:23:20.534842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.086 qpair failed and we were unable to recover it. 00:27:16.086 [2024-11-20 07:23:20.535060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.086 [2024-11-20 07:23:20.535095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.086 qpair failed and we were unable to recover it. 00:27:16.086 [2024-11-20 07:23:20.535229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.086 [2024-11-20 07:23:20.535263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.086 qpair failed and we were unable to recover it. 00:27:16.086 [2024-11-20 07:23:20.535530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.086 [2024-11-20 07:23:20.535563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.086 qpair failed and we were unable to recover it. 
00:27:16.086 [2024-11-20 07:23:20.535813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.086 [2024-11-20 07:23:20.535846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.086 qpair failed and we were unable to recover it. 00:27:16.086 [2024-11-20 07:23:20.536108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.086 [2024-11-20 07:23:20.536144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.086 qpair failed and we were unable to recover it. 00:27:16.086 [2024-11-20 07:23:20.536319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.086 [2024-11-20 07:23:20.536354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.086 qpair failed and we were unable to recover it. 00:27:16.086 [2024-11-20 07:23:20.536579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.086 [2024-11-20 07:23:20.536613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.086 qpair failed and we were unable to recover it. 00:27:16.086 [2024-11-20 07:23:20.536869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.086 [2024-11-20 07:23:20.536902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.086 qpair failed and we were unable to recover it. 
00:27:16.086 [2024-11-20 07:23:20.537199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.086 [2024-11-20 07:23:20.537233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.087 qpair failed and we were unable to recover it. 00:27:16.087 [2024-11-20 07:23:20.537429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.087 [2024-11-20 07:23:20.537461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.087 qpair failed and we were unable to recover it. 00:27:16.087 [2024-11-20 07:23:20.537683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.087 [2024-11-20 07:23:20.537717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.087 qpair failed and we were unable to recover it. 00:27:16.087 [2024-11-20 07:23:20.538000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.087 [2024-11-20 07:23:20.538034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.087 qpair failed and we were unable to recover it. 00:27:16.087 [2024-11-20 07:23:20.538195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.087 [2024-11-20 07:23:20.538229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.087 qpair failed and we were unable to recover it. 
00:27:16.087 [2024-11-20 07:23:20.538441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.087 [2024-11-20 07:23:20.538475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.087 qpair failed and we were unable to recover it. 00:27:16.087 [2024-11-20 07:23:20.538635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.087 [2024-11-20 07:23:20.538670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.087 qpair failed and we were unable to recover it. 00:27:16.087 [2024-11-20 07:23:20.538973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.087 [2024-11-20 07:23:20.539009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.087 qpair failed and we were unable to recover it. 00:27:16.087 [2024-11-20 07:23:20.539211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.087 [2024-11-20 07:23:20.539242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.087 qpair failed and we were unable to recover it. 00:27:16.087 [2024-11-20 07:23:20.539388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.087 [2024-11-20 07:23:20.539421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.087 qpair failed and we were unable to recover it. 
00:27:16.087 [2024-11-20 07:23:20.539697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.087 [2024-11-20 07:23:20.539736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.087 qpair failed and we were unable to recover it. 00:27:16.087 [2024-11-20 07:23:20.539915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.087 [2024-11-20 07:23:20.539970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.087 qpair failed and we were unable to recover it. 00:27:16.087 [2024-11-20 07:23:20.540233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.087 [2024-11-20 07:23:20.540269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.087 qpair failed and we were unable to recover it. 00:27:16.087 [2024-11-20 07:23:20.540396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.087 [2024-11-20 07:23:20.540430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.087 qpair failed and we were unable to recover it. 00:27:16.087 [2024-11-20 07:23:20.540613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.087 [2024-11-20 07:23:20.540645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.087 qpair failed and we were unable to recover it. 
00:27:16.087 [2024-11-20 07:23:20.540898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.087 [2024-11-20 07:23:20.540930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.087 qpair failed and we were unable to recover it. 00:27:16.087 [2024-11-20 07:23:20.541270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.087 [2024-11-20 07:23:20.541304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.087 qpair failed and we were unable to recover it. 00:27:16.087 [2024-11-20 07:23:20.541461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.087 [2024-11-20 07:23:20.541494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.087 qpair failed and we were unable to recover it. 00:27:16.087 [2024-11-20 07:23:20.541704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.087 [2024-11-20 07:23:20.541738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.087 qpair failed and we were unable to recover it. 00:27:16.087 [2024-11-20 07:23:20.542014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.087 [2024-11-20 07:23:20.542049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.087 qpair failed and we were unable to recover it. 
00:27:16.087 [2024-11-20 07:23:20.542255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.087 [2024-11-20 07:23:20.542290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.087 qpair failed and we were unable to recover it. 00:27:16.087 [2024-11-20 07:23:20.542482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.087 [2024-11-20 07:23:20.542515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.087 qpair failed and we were unable to recover it. 00:27:16.087 [2024-11-20 07:23:20.542718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.087 [2024-11-20 07:23:20.542752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.087 qpair failed and we were unable to recover it. 00:27:16.087 [2024-11-20 07:23:20.542976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.087 [2024-11-20 07:23:20.543012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.087 qpair failed and we were unable to recover it. 00:27:16.087 [2024-11-20 07:23:20.543251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.087 [2024-11-20 07:23:20.543284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.087 qpair failed and we were unable to recover it. 
00:27:16.087 [2024-11-20 07:23:20.543590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.087 [2024-11-20 07:23:20.543623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.087 qpair failed and we were unable to recover it. 00:27:16.087 [2024-11-20 07:23:20.543913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.087 [2024-11-20 07:23:20.543945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.087 qpair failed and we were unable to recover it. 00:27:16.087 [2024-11-20 07:23:20.544119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.087 [2024-11-20 07:23:20.544151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.087 qpair failed and we were unable to recover it. 00:27:16.087 [2024-11-20 07:23:20.544360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.087 [2024-11-20 07:23:20.544394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.087 qpair failed and we were unable to recover it. 00:27:16.087 [2024-11-20 07:23:20.544613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.088 [2024-11-20 07:23:20.544645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.088 qpair failed and we were unable to recover it. 
00:27:16.088 [2024-11-20 07:23:20.544907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.088 [2024-11-20 07:23:20.544942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.088 qpair failed and we were unable to recover it. 00:27:16.088 [2024-11-20 07:23:20.545089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.088 [2024-11-20 07:23:20.545121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.088 qpair failed and we were unable to recover it. 00:27:16.088 [2024-11-20 07:23:20.545277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.088 [2024-11-20 07:23:20.545310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.088 qpair failed and we were unable to recover it. 00:27:16.088 [2024-11-20 07:23:20.545452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.088 [2024-11-20 07:23:20.545485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.088 qpair failed and we were unable to recover it. 00:27:16.088 [2024-11-20 07:23:20.545603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.088 [2024-11-20 07:23:20.545636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.088 qpair failed and we were unable to recover it. 
00:27:16.088 [2024-11-20 07:23:20.545840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.088 [2024-11-20 07:23:20.545874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.088 qpair failed and we were unable to recover it. 00:27:16.088 [2024-11-20 07:23:20.546023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.088 [2024-11-20 07:23:20.546059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.088 qpair failed and we were unable to recover it. 00:27:16.088 [2024-11-20 07:23:20.546224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.088 [2024-11-20 07:23:20.546257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.088 qpair failed and we were unable to recover it. 00:27:16.088 [2024-11-20 07:23:20.546392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.088 [2024-11-20 07:23:20.546427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.088 qpair failed and we were unable to recover it. 00:27:16.088 [2024-11-20 07:23:20.546572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.088 [2024-11-20 07:23:20.546608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.088 qpair failed and we were unable to recover it. 
00:27:16.088 [2024-11-20 07:23:20.546804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.088 [2024-11-20 07:23:20.546838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.088 qpair failed and we were unable to recover it. 00:27:16.088 [2024-11-20 07:23:20.547068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.088 [2024-11-20 07:23:20.547102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.088 qpair failed and we were unable to recover it. 00:27:16.088 [2024-11-20 07:23:20.547211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.088 [2024-11-20 07:23:20.547243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.088 qpair failed and we were unable to recover it. 00:27:16.088 [2024-11-20 07:23:20.547387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.088 [2024-11-20 07:23:20.547419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.088 qpair failed and we were unable to recover it. 00:27:16.088 [2024-11-20 07:23:20.547535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.088 [2024-11-20 07:23:20.547569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.088 qpair failed and we were unable to recover it. 
00:27:16.088 [2024-11-20 07:23:20.547699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.088 [2024-11-20 07:23:20.547731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.088 qpair failed and we were unable to recover it. 00:27:16.088 [2024-11-20 07:23:20.548015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.088 [2024-11-20 07:23:20.548051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.088 qpair failed and we were unable to recover it. 00:27:16.088 [2024-11-20 07:23:20.548328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.088 [2024-11-20 07:23:20.548360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.088 qpair failed and we were unable to recover it. 00:27:16.088 [2024-11-20 07:23:20.548632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.088 [2024-11-20 07:23:20.548665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.088 qpair failed and we were unable to recover it. 00:27:16.088 [2024-11-20 07:23:20.548933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.088 [2024-11-20 07:23:20.548978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.088 qpair failed and we were unable to recover it. 
00:27:16.088 [2024-11-20 07:23:20.549190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.088 [2024-11-20 07:23:20.549223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.088 qpair failed and we were unable to recover it. 00:27:16.088 [2024-11-20 07:23:20.549417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.088 [2024-11-20 07:23:20.549450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.088 qpair failed and we were unable to recover it. 00:27:16.088 [2024-11-20 07:23:20.549613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.088 [2024-11-20 07:23:20.549647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.088 qpair failed and we were unable to recover it. 00:27:16.088 [2024-11-20 07:23:20.549897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.088 [2024-11-20 07:23:20.549929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.088 qpair failed and we were unable to recover it. 00:27:16.088 [2024-11-20 07:23:20.550069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.088 [2024-11-20 07:23:20.550103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.088 qpair failed and we were unable to recover it. 
00:27:16.088 [2024-11-20 07:23:20.550249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.088 [2024-11-20 07:23:20.550280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.088 qpair failed and we were unable to recover it. 00:27:16.089 [2024-11-20 07:23:20.550417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.089 [2024-11-20 07:23:20.550449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.089 qpair failed and we were unable to recover it. 00:27:16.089 [2024-11-20 07:23:20.550771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.089 [2024-11-20 07:23:20.550804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.089 qpair failed and we were unable to recover it. 00:27:16.089 [2024-11-20 07:23:20.550943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.089 [2024-11-20 07:23:20.550990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.089 qpair failed and we were unable to recover it. 00:27:16.089 [2024-11-20 07:23:20.551196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.089 [2024-11-20 07:23:20.551228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.089 qpair failed and we were unable to recover it. 
00:27:16.089 [2024-11-20 07:23:20.551496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.089 [2024-11-20 07:23:20.551528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.089 qpair failed and we were unable to recover it. 00:27:16.089 [2024-11-20 07:23:20.551655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.089 [2024-11-20 07:23:20.551688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.089 qpair failed and we were unable to recover it. 00:27:16.089 [2024-11-20 07:23:20.551897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.089 [2024-11-20 07:23:20.551930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.089 qpair failed and we were unable to recover it. 00:27:16.089 [2024-11-20 07:23:20.552109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.089 [2024-11-20 07:23:20.552143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.089 qpair failed and we were unable to recover it. 00:27:16.089 [2024-11-20 07:23:20.552279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.089 [2024-11-20 07:23:20.552310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.089 qpair failed and we were unable to recover it. 
00:27:16.089 [2024-11-20 07:23:20.552442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.089 [2024-11-20 07:23:20.552474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.089 qpair failed and we were unable to recover it. 00:27:16.089 [2024-11-20 07:23:20.552585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.089 [2024-11-20 07:23:20.552618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.089 qpair failed and we were unable to recover it. 00:27:16.089 [2024-11-20 07:23:20.552823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.089 [2024-11-20 07:23:20.552855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.089 qpair failed and we were unable to recover it. 00:27:16.089 [2024-11-20 07:23:20.552995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.089 [2024-11-20 07:23:20.553030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.089 qpair failed and we were unable to recover it. 00:27:16.089 [2024-11-20 07:23:20.553249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.089 [2024-11-20 07:23:20.553281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.089 qpair failed and we were unable to recover it. 
00:27:16.089 [2024-11-20 07:23:20.553415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.089 [2024-11-20 07:23:20.553447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.089 qpair failed and we were unable to recover it. 00:27:16.089 [2024-11-20 07:23:20.553587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.089 [2024-11-20 07:23:20.553618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.089 qpair failed and we were unable to recover it. 00:27:16.089 [2024-11-20 07:23:20.553815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.089 [2024-11-20 07:23:20.553847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.089 qpair failed and we were unable to recover it. 00:27:16.089 [2024-11-20 07:23:20.554079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.089 [2024-11-20 07:23:20.554111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.089 qpair failed and we were unable to recover it. 00:27:16.089 [2024-11-20 07:23:20.554247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.089 [2024-11-20 07:23:20.554279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.089 qpair failed and we were unable to recover it. 
00:27:16.089 [2024-11-20 07:23:20.554468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.089 [2024-11-20 07:23:20.554500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.089 qpair failed and we were unable to recover it. 00:27:16.089 [2024-11-20 07:23:20.554629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.089 [2024-11-20 07:23:20.554662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.089 qpair failed and we were unable to recover it. 00:27:16.089 [2024-11-20 07:23:20.554797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.089 [2024-11-20 07:23:20.554835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.089 qpair failed and we were unable to recover it. 00:27:16.089 [2024-11-20 07:23:20.555079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.089 [2024-11-20 07:23:20.555112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.089 qpair failed and we were unable to recover it. 00:27:16.089 [2024-11-20 07:23:20.555321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.089 [2024-11-20 07:23:20.555354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.089 qpair failed and we were unable to recover it. 
00:27:16.089 [2024-11-20 07:23:20.555501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.089 [2024-11-20 07:23:20.555534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.089 qpair failed and we were unable to recover it. 00:27:16.089 [2024-11-20 07:23:20.555829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.089 [2024-11-20 07:23:20.555861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.089 qpair failed and we were unable to recover it. 00:27:16.089 [2024-11-20 07:23:20.556071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.089 [2024-11-20 07:23:20.556105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.089 qpair failed and we were unable to recover it. 00:27:16.089 [2024-11-20 07:23:20.556367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.089 [2024-11-20 07:23:20.556400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.089 qpair failed and we were unable to recover it. 00:27:16.089 [2024-11-20 07:23:20.556648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.089 [2024-11-20 07:23:20.556681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.089 qpair failed and we were unable to recover it. 
00:27:16.092 [2024-11-20 07:23:20.580184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.092 [2024-11-20 07:23:20.580260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420
00:27:16.092 qpair failed and we were unable to recover it.
00:27:16.092 [2024-11-20 07:23:20.581224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.092 [2024-11-20 07:23:20.581255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.092 qpair failed and we were unable to recover it. 00:27:16.092 [2024-11-20 07:23:20.581459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.093 [2024-11-20 07:23:20.581488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.093 qpair failed and we were unable to recover it. 00:27:16.093 [2024-11-20 07:23:20.581801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.093 [2024-11-20 07:23:20.581832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.093 qpair failed and we were unable to recover it. 00:27:16.093 [2024-11-20 07:23:20.582047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.093 [2024-11-20 07:23:20.582081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.093 qpair failed and we were unable to recover it. 00:27:16.093 [2024-11-20 07:23:20.582220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.093 [2024-11-20 07:23:20.582253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.093 qpair failed and we were unable to recover it. 
00:27:16.093 [2024-11-20 07:23:20.582470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.093 [2024-11-20 07:23:20.582501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.093 qpair failed and we were unable to recover it. 00:27:16.093 [2024-11-20 07:23:20.582738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.093 [2024-11-20 07:23:20.582769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.093 qpair failed and we were unable to recover it. 00:27:16.093 [2024-11-20 07:23:20.583052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.093 [2024-11-20 07:23:20.583085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.093 qpair failed and we were unable to recover it. 00:27:16.093 [2024-11-20 07:23:20.583280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.093 [2024-11-20 07:23:20.583312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.093 qpair failed and we were unable to recover it. 00:27:16.093 [2024-11-20 07:23:20.583500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.093 [2024-11-20 07:23:20.583531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.093 qpair failed and we were unable to recover it. 
00:27:16.093 [2024-11-20 07:23:20.583746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.093 [2024-11-20 07:23:20.583778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.093 qpair failed and we were unable to recover it. 00:27:16.093 [2024-11-20 07:23:20.583992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.093 [2024-11-20 07:23:20.584028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.093 qpair failed and we were unable to recover it. 00:27:16.093 [2024-11-20 07:23:20.584241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.093 [2024-11-20 07:23:20.584273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.093 qpair failed and we were unable to recover it. 00:27:16.093 [2024-11-20 07:23:20.584584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.093 [2024-11-20 07:23:20.584618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.093 qpair failed and we were unable to recover it. 00:27:16.093 [2024-11-20 07:23:20.584901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.093 [2024-11-20 07:23:20.584933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.093 qpair failed and we were unable to recover it. 
00:27:16.093 [2024-11-20 07:23:20.585096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.093 [2024-11-20 07:23:20.585128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.093 qpair failed and we were unable to recover it. 00:27:16.093 [2024-11-20 07:23:20.585324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.093 [2024-11-20 07:23:20.585355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.093 qpair failed and we were unable to recover it. 00:27:16.093 [2024-11-20 07:23:20.585561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.093 [2024-11-20 07:23:20.585603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.093 qpair failed and we were unable to recover it. 00:27:16.093 [2024-11-20 07:23:20.585816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.093 [2024-11-20 07:23:20.585849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.093 qpair failed and we were unable to recover it. 00:27:16.093 [2024-11-20 07:23:20.586046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.093 [2024-11-20 07:23:20.586081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.093 qpair failed and we were unable to recover it. 
00:27:16.093 [2024-11-20 07:23:20.586291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.093 [2024-11-20 07:23:20.586323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.093 qpair failed and we were unable to recover it. 00:27:16.093 [2024-11-20 07:23:20.586526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.093 [2024-11-20 07:23:20.586558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.093 qpair failed and we were unable to recover it. 00:27:16.093 [2024-11-20 07:23:20.586712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.093 [2024-11-20 07:23:20.586746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.093 qpair failed and we were unable to recover it. 00:27:16.093 [2024-11-20 07:23:20.586972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.093 [2024-11-20 07:23:20.587007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.374 qpair failed and we were unable to recover it. 00:27:16.374 [2024-11-20 07:23:20.587238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.374 [2024-11-20 07:23:20.587271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.374 qpair failed and we were unable to recover it. 
00:27:16.374 [2024-11-20 07:23:20.587475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.374 [2024-11-20 07:23:20.587511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.374 qpair failed and we were unable to recover it. 00:27:16.374 [2024-11-20 07:23:20.587713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.374 [2024-11-20 07:23:20.587746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.374 qpair failed and we were unable to recover it. 00:27:16.374 [2024-11-20 07:23:20.588044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.374 [2024-11-20 07:23:20.588081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.374 qpair failed and we were unable to recover it. 00:27:16.374 [2024-11-20 07:23:20.588351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.375 [2024-11-20 07:23:20.588385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.375 qpair failed and we were unable to recover it. 00:27:16.375 [2024-11-20 07:23:20.588527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.375 [2024-11-20 07:23:20.588560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.375 qpair failed and we were unable to recover it. 
00:27:16.375 [2024-11-20 07:23:20.588778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.375 [2024-11-20 07:23:20.588811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.375 qpair failed and we were unable to recover it. 00:27:16.375 [2024-11-20 07:23:20.589086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.375 [2024-11-20 07:23:20.589122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.375 qpair failed and we were unable to recover it. 00:27:16.375 [2024-11-20 07:23:20.589373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.375 [2024-11-20 07:23:20.589405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.375 qpair failed and we were unable to recover it. 00:27:16.375 [2024-11-20 07:23:20.589682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.375 [2024-11-20 07:23:20.589716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.375 qpair failed and we were unable to recover it. 00:27:16.375 [2024-11-20 07:23:20.589922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.375 [2024-11-20 07:23:20.589969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.375 qpair failed and we were unable to recover it. 
00:27:16.375 [2024-11-20 07:23:20.590239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.375 [2024-11-20 07:23:20.590273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.375 qpair failed and we were unable to recover it. 00:27:16.375 [2024-11-20 07:23:20.590483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.375 [2024-11-20 07:23:20.590518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.375 qpair failed and we were unable to recover it. 00:27:16.375 [2024-11-20 07:23:20.590780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.375 [2024-11-20 07:23:20.590813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.375 qpair failed and we were unable to recover it. 00:27:16.375 [2024-11-20 07:23:20.591067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.375 [2024-11-20 07:23:20.591102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.375 qpair failed and we were unable to recover it. 00:27:16.375 [2024-11-20 07:23:20.591356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.375 [2024-11-20 07:23:20.591389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.375 qpair failed and we were unable to recover it. 
00:27:16.375 [2024-11-20 07:23:20.591623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.375 [2024-11-20 07:23:20.591656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.375 qpair failed and we were unable to recover it. 00:27:16.375 [2024-11-20 07:23:20.591912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.375 [2024-11-20 07:23:20.591959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.375 qpair failed and we were unable to recover it. 00:27:16.375 [2024-11-20 07:23:20.592262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.375 [2024-11-20 07:23:20.592295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.375 qpair failed and we were unable to recover it. 00:27:16.375 [2024-11-20 07:23:20.592450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.375 [2024-11-20 07:23:20.592483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.375 qpair failed and we were unable to recover it. 00:27:16.375 [2024-11-20 07:23:20.592701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.375 [2024-11-20 07:23:20.592735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.375 qpair failed and we were unable to recover it. 
00:27:16.375 [2024-11-20 07:23:20.593075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.375 [2024-11-20 07:23:20.593109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.375 qpair failed and we were unable to recover it. 00:27:16.375 [2024-11-20 07:23:20.593311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.375 [2024-11-20 07:23:20.593345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.375 qpair failed and we were unable to recover it. 00:27:16.375 [2024-11-20 07:23:20.593555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.375 [2024-11-20 07:23:20.593588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.375 qpair failed and we were unable to recover it. 00:27:16.375 [2024-11-20 07:23:20.593843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.375 [2024-11-20 07:23:20.593876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.375 qpair failed and we were unable to recover it. 00:27:16.375 [2024-11-20 07:23:20.594189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.375 [2024-11-20 07:23:20.594224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.375 qpair failed and we were unable to recover it. 
00:27:16.375 [2024-11-20 07:23:20.594432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.375 [2024-11-20 07:23:20.594465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.375 qpair failed and we were unable to recover it. 00:27:16.375 [2024-11-20 07:23:20.594683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.375 [2024-11-20 07:23:20.594717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.375 qpair failed and we were unable to recover it. 00:27:16.375 [2024-11-20 07:23:20.594912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.375 [2024-11-20 07:23:20.594946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.375 qpair failed and we were unable to recover it. 00:27:16.375 [2024-11-20 07:23:20.595106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.375 [2024-11-20 07:23:20.595138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.375 qpair failed and we were unable to recover it. 00:27:16.375 [2024-11-20 07:23:20.595348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.375 [2024-11-20 07:23:20.595384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.375 qpair failed and we were unable to recover it. 
00:27:16.375 [2024-11-20 07:23:20.595590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.375 [2024-11-20 07:23:20.595623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.375 qpair failed and we were unable to recover it. 00:27:16.375 [2024-11-20 07:23:20.595821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.375 [2024-11-20 07:23:20.595856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.375 qpair failed and we were unable to recover it. 00:27:16.375 [2024-11-20 07:23:20.596109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.375 [2024-11-20 07:23:20.596150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.375 qpair failed and we were unable to recover it. 00:27:16.375 [2024-11-20 07:23:20.596294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.375 [2024-11-20 07:23:20.596328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.375 qpair failed and we were unable to recover it. 00:27:16.375 [2024-11-20 07:23:20.596482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.376 [2024-11-20 07:23:20.596517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.376 qpair failed and we were unable to recover it. 
00:27:16.376 [2024-11-20 07:23:20.596835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.376 [2024-11-20 07:23:20.596871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.376 qpair failed and we were unable to recover it. 00:27:16.376 [2024-11-20 07:23:20.597140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.376 [2024-11-20 07:23:20.597176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.376 qpair failed and we were unable to recover it. 00:27:16.376 [2024-11-20 07:23:20.597388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.376 [2024-11-20 07:23:20.597422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.376 qpair failed and we were unable to recover it. 00:27:16.376 [2024-11-20 07:23:20.597626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.376 [2024-11-20 07:23:20.597661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.376 qpair failed and we were unable to recover it. 00:27:16.376 [2024-11-20 07:23:20.597914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.376 [2024-11-20 07:23:20.597959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.376 qpair failed and we were unable to recover it. 
00:27:16.376 [2024-11-20 07:23:20.598123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.376 [2024-11-20 07:23:20.598156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.376 qpair failed and we were unable to recover it. 00:27:16.376 [2024-11-20 07:23:20.598416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.376 [2024-11-20 07:23:20.598449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.376 qpair failed and we were unable to recover it. 00:27:16.376 [2024-11-20 07:23:20.598586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.376 [2024-11-20 07:23:20.598620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.376 qpair failed and we were unable to recover it. 00:27:16.376 [2024-11-20 07:23:20.598832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.376 [2024-11-20 07:23:20.598863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.376 qpair failed and we were unable to recover it. 00:27:16.376 [2024-11-20 07:23:20.599112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.376 [2024-11-20 07:23:20.599146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.376 qpair failed and we were unable to recover it. 
00:27:16.376 [2024-11-20 07:23:20.599353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.376 [2024-11-20 07:23:20.599386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.376 qpair failed and we were unable to recover it. 00:27:16.376 [2024-11-20 07:23:20.599541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.376 [2024-11-20 07:23:20.599576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.376 qpair failed and we were unable to recover it. 00:27:16.376 [2024-11-20 07:23:20.599763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.376 [2024-11-20 07:23:20.599797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.376 qpair failed and we were unable to recover it. 00:27:16.376 [2024-11-20 07:23:20.600015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.376 [2024-11-20 07:23:20.600051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.376 qpair failed and we were unable to recover it. 00:27:16.376 [2024-11-20 07:23:20.600258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.376 [2024-11-20 07:23:20.600293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.376 qpair failed and we were unable to recover it. 
00:27:16.376 [2024-11-20 07:23:20.600601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.376 [2024-11-20 07:23:20.600632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.376 qpair failed and we were unable to recover it. 00:27:16.376 [2024-11-20 07:23:20.600828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.376 [2024-11-20 07:23:20.600862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.376 qpair failed and we were unable to recover it. 00:27:16.376 [2024-11-20 07:23:20.601143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.376 [2024-11-20 07:23:20.601180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.376 qpair failed and we were unable to recover it. 00:27:16.376 [2024-11-20 07:23:20.601405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.376 [2024-11-20 07:23:20.601438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.376 qpair failed and we were unable to recover it. 00:27:16.376 [2024-11-20 07:23:20.601662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.376 [2024-11-20 07:23:20.601696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.376 qpair failed and we were unable to recover it. 
00:27:16.376 [2024-11-20 07:23:20.601993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.376 [2024-11-20 07:23:20.602028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.376 qpair failed and we were unable to recover it. 00:27:16.376 [2024-11-20 07:23:20.602252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.376 [2024-11-20 07:23:20.602286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.376 qpair failed and we were unable to recover it. 00:27:16.376 [2024-11-20 07:23:20.602481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.376 [2024-11-20 07:23:20.602514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.376 qpair failed and we were unable to recover it. 00:27:16.376 [2024-11-20 07:23:20.602838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.376 [2024-11-20 07:23:20.602871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.376 qpair failed and we were unable to recover it. 00:27:16.376 [2024-11-20 07:23:20.603160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.376 [2024-11-20 07:23:20.603194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.376 qpair failed and we were unable to recover it. 
00:27:16.376 [2024-11-20 07:23:20.603403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.376 [2024-11-20 07:23:20.603436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.376 qpair failed and we were unable to recover it. 00:27:16.376 [2024-11-20 07:23:20.603579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.376 [2024-11-20 07:23:20.603612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.376 qpair failed and we were unable to recover it. 00:27:16.376 [2024-11-20 07:23:20.603741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.376 [2024-11-20 07:23:20.603777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.376 qpair failed and we were unable to recover it. 00:27:16.376 [2024-11-20 07:23:20.603927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.376 [2024-11-20 07:23:20.603970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.376 qpair failed and we were unable to recover it. 00:27:16.376 [2024-11-20 07:23:20.604157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.376 [2024-11-20 07:23:20.604189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.376 qpair failed and we were unable to recover it. 
00:27:16.376 [2024-11-20 07:23:20.604450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.376 [2024-11-20 07:23:20.604484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.376 qpair failed and we were unable to recover it. 00:27:16.377 [2024-11-20 07:23:20.604711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.377 [2024-11-20 07:23:20.604743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.377 qpair failed and we were unable to recover it. 00:27:16.377 [2024-11-20 07:23:20.604931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.377 [2024-11-20 07:23:20.604976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.377 qpair failed and we were unable to recover it. 00:27:16.377 [2024-11-20 07:23:20.605206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.377 [2024-11-20 07:23:20.605240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.377 qpair failed and we were unable to recover it. 00:27:16.377 [2024-11-20 07:23:20.605443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.377 [2024-11-20 07:23:20.605477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.377 qpair failed and we were unable to recover it. 
00:27:16.380 [2024-11-20 07:23:20.633921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.380 [2024-11-20 07:23:20.633960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.380 qpair failed and we were unable to recover it. 00:27:16.380 [2024-11-20 07:23:20.634107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.380 [2024-11-20 07:23:20.634139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.380 qpair failed and we were unable to recover it. 00:27:16.380 [2024-11-20 07:23:20.634373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.380 [2024-11-20 07:23:20.634407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.380 qpair failed and we were unable to recover it. 00:27:16.380 [2024-11-20 07:23:20.634633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.380 [2024-11-20 07:23:20.634666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.380 qpair failed and we were unable to recover it. 00:27:16.380 [2024-11-20 07:23:20.634934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.380 [2024-11-20 07:23:20.634975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.380 qpair failed and we were unable to recover it. 
00:27:16.380 [2024-11-20 07:23:20.635120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.380 [2024-11-20 07:23:20.635152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.380 qpair failed and we were unable to recover it. 00:27:16.381 [2024-11-20 07:23:20.635404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.381 [2024-11-20 07:23:20.635436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.381 qpair failed and we were unable to recover it. 00:27:16.381 [2024-11-20 07:23:20.635711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.381 [2024-11-20 07:23:20.635745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.381 qpair failed and we were unable to recover it. 00:27:16.381 [2024-11-20 07:23:20.635989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.381 [2024-11-20 07:23:20.636022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.381 qpair failed and we were unable to recover it. 00:27:16.381 [2024-11-20 07:23:20.636231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.381 [2024-11-20 07:23:20.636264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.381 qpair failed and we were unable to recover it. 
00:27:16.381 [2024-11-20 07:23:20.636468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.381 [2024-11-20 07:23:20.636501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.381 qpair failed and we were unable to recover it. 00:27:16.381 [2024-11-20 07:23:20.636789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.381 [2024-11-20 07:23:20.636821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.381 qpair failed and we were unable to recover it. 00:27:16.381 [2024-11-20 07:23:20.637008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.381 [2024-11-20 07:23:20.637049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.381 qpair failed and we were unable to recover it. 00:27:16.381 [2024-11-20 07:23:20.637248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.381 [2024-11-20 07:23:20.637281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.381 qpair failed and we were unable to recover it. 00:27:16.381 [2024-11-20 07:23:20.637536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.381 [2024-11-20 07:23:20.637569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.381 qpair failed and we were unable to recover it. 
00:27:16.381 [2024-11-20 07:23:20.637771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.381 [2024-11-20 07:23:20.637803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.381 qpair failed and we were unable to recover it. 00:27:16.381 [2024-11-20 07:23:20.638001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.381 [2024-11-20 07:23:20.638035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.381 qpair failed and we were unable to recover it. 00:27:16.381 [2024-11-20 07:23:20.638255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.381 [2024-11-20 07:23:20.638289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.381 qpair failed and we were unable to recover it. 00:27:16.381 [2024-11-20 07:23:20.638482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.381 [2024-11-20 07:23:20.638515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.381 qpair failed and we were unable to recover it. 00:27:16.381 [2024-11-20 07:23:20.638795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.381 [2024-11-20 07:23:20.638828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.381 qpair failed and we were unable to recover it. 
00:27:16.381 [2024-11-20 07:23:20.638989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.381 [2024-11-20 07:23:20.639023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.381 qpair failed and we were unable to recover it. 00:27:16.381 [2024-11-20 07:23:20.639174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.381 [2024-11-20 07:23:20.639207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.381 qpair failed and we were unable to recover it. 00:27:16.381 [2024-11-20 07:23:20.639410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.381 [2024-11-20 07:23:20.639443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.381 qpair failed and we were unable to recover it. 00:27:16.381 [2024-11-20 07:23:20.639664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.381 [2024-11-20 07:23:20.639696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.381 qpair failed and we were unable to recover it. 00:27:16.381 [2024-11-20 07:23:20.640025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.381 [2024-11-20 07:23:20.640058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.381 qpair failed and we were unable to recover it. 
00:27:16.381 [2024-11-20 07:23:20.640316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.381 [2024-11-20 07:23:20.640351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.381 qpair failed and we were unable to recover it. 00:27:16.381 [2024-11-20 07:23:20.640581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.381 [2024-11-20 07:23:20.640621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.381 qpair failed and we were unable to recover it. 00:27:16.381 [2024-11-20 07:23:20.640933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.381 [2024-11-20 07:23:20.640975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.381 qpair failed and we were unable to recover it. 00:27:16.381 [2024-11-20 07:23:20.641271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.381 [2024-11-20 07:23:20.641306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.381 qpair failed and we were unable to recover it. 00:27:16.381 [2024-11-20 07:23:20.641521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.381 [2024-11-20 07:23:20.641554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.381 qpair failed and we were unable to recover it. 
00:27:16.381 [2024-11-20 07:23:20.641849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.381 [2024-11-20 07:23:20.641881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.381 qpair failed and we were unable to recover it. 00:27:16.381 [2024-11-20 07:23:20.642152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.381 [2024-11-20 07:23:20.642188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.381 qpair failed and we were unable to recover it. 00:27:16.381 [2024-11-20 07:23:20.642389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.381 [2024-11-20 07:23:20.642422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.381 qpair failed and we were unable to recover it. 00:27:16.381 [2024-11-20 07:23:20.642692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.381 [2024-11-20 07:23:20.642724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.381 qpair failed and we were unable to recover it. 00:27:16.381 [2024-11-20 07:23:20.642862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.381 [2024-11-20 07:23:20.642896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.381 qpair failed and we were unable to recover it. 
00:27:16.381 [2024-11-20 07:23:20.643192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.381 [2024-11-20 07:23:20.643226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.381 qpair failed and we were unable to recover it. 00:27:16.381 [2024-11-20 07:23:20.643504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.381 [2024-11-20 07:23:20.643537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.381 qpair failed and we were unable to recover it. 00:27:16.381 [2024-11-20 07:23:20.643758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.382 [2024-11-20 07:23:20.643790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.382 qpair failed and we were unable to recover it. 00:27:16.382 [2024-11-20 07:23:20.643997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.382 [2024-11-20 07:23:20.644032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.382 qpair failed and we were unable to recover it. 00:27:16.382 [2024-11-20 07:23:20.644246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.382 [2024-11-20 07:23:20.644282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.382 qpair failed and we were unable to recover it. 
00:27:16.382 [2024-11-20 07:23:20.644488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.382 [2024-11-20 07:23:20.644521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.382 qpair failed and we were unable to recover it. 00:27:16.382 [2024-11-20 07:23:20.644773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.382 [2024-11-20 07:23:20.644806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.382 qpair failed and we were unable to recover it. 00:27:16.382 [2024-11-20 07:23:20.645006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.382 [2024-11-20 07:23:20.645042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.382 qpair failed and we were unable to recover it. 00:27:16.382 [2024-11-20 07:23:20.645298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.382 [2024-11-20 07:23:20.645330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.382 qpair failed and we were unable to recover it. 00:27:16.382 [2024-11-20 07:23:20.645600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.382 [2024-11-20 07:23:20.645634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.382 qpair failed and we were unable to recover it. 
00:27:16.382 [2024-11-20 07:23:20.645864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.382 [2024-11-20 07:23:20.645897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.382 qpair failed and we were unable to recover it. 00:27:16.382 [2024-11-20 07:23:20.646119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.382 [2024-11-20 07:23:20.646152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.382 qpair failed and we were unable to recover it. 00:27:16.382 [2024-11-20 07:23:20.646304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.382 [2024-11-20 07:23:20.646337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.382 qpair failed and we were unable to recover it. 00:27:16.382 [2024-11-20 07:23:20.646479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.382 [2024-11-20 07:23:20.646511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.382 qpair failed and we were unable to recover it. 00:27:16.382 [2024-11-20 07:23:20.646803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.382 [2024-11-20 07:23:20.646835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.382 qpair failed and we were unable to recover it. 
00:27:16.382 [2024-11-20 07:23:20.647038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.382 [2024-11-20 07:23:20.647072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.382 qpair failed and we were unable to recover it. 00:27:16.382 [2024-11-20 07:23:20.647292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.382 [2024-11-20 07:23:20.647325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.382 qpair failed and we were unable to recover it. 00:27:16.382 [2024-11-20 07:23:20.647532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.382 [2024-11-20 07:23:20.647571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.382 qpair failed and we were unable to recover it. 00:27:16.382 [2024-11-20 07:23:20.647718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.382 [2024-11-20 07:23:20.647751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.382 qpair failed and we were unable to recover it. 00:27:16.382 [2024-11-20 07:23:20.647872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.382 [2024-11-20 07:23:20.647904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.382 qpair failed and we were unable to recover it. 
00:27:16.382 [2024-11-20 07:23:20.648134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.382 [2024-11-20 07:23:20.648169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.382 qpair failed and we were unable to recover it. 00:27:16.382 [2024-11-20 07:23:20.648368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.382 [2024-11-20 07:23:20.648403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.382 qpair failed and we were unable to recover it. 00:27:16.382 [2024-11-20 07:23:20.648656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.382 [2024-11-20 07:23:20.648689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.382 qpair failed and we were unable to recover it. 00:27:16.382 [2024-11-20 07:23:20.648896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.382 [2024-11-20 07:23:20.648928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.382 qpair failed and we were unable to recover it. 00:27:16.382 [2024-11-20 07:23:20.649071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.382 [2024-11-20 07:23:20.649106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.382 qpair failed and we were unable to recover it. 
00:27:16.382 [2024-11-20 07:23:20.649312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.382 [2024-11-20 07:23:20.649346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.382 qpair failed and we were unable to recover it. 00:27:16.382 [2024-11-20 07:23:20.649545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.382 [2024-11-20 07:23:20.649580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.382 qpair failed and we were unable to recover it. 00:27:16.382 [2024-11-20 07:23:20.649805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.382 [2024-11-20 07:23:20.649839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.382 qpair failed and we were unable to recover it. 00:27:16.382 [2024-11-20 07:23:20.650095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.382 [2024-11-20 07:23:20.650129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.382 qpair failed and we were unable to recover it. 00:27:16.382 [2024-11-20 07:23:20.650351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.382 [2024-11-20 07:23:20.650383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.382 qpair failed and we were unable to recover it. 
00:27:16.382 [2024-11-20 07:23:20.650599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.382 [2024-11-20 07:23:20.650635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.382 qpair failed and we were unable to recover it. 00:27:16.382 [2024-11-20 07:23:20.650833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.382 [2024-11-20 07:23:20.650866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.382 qpair failed and we were unable to recover it. 00:27:16.382 [2024-11-20 07:23:20.651111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.382 [2024-11-20 07:23:20.651147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.382 qpair failed and we were unable to recover it. 00:27:16.382 [2024-11-20 07:23:20.651357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.382 [2024-11-20 07:23:20.651390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.382 qpair failed and we were unable to recover it. 00:27:16.383 [2024-11-20 07:23:20.651593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.383 [2024-11-20 07:23:20.651628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.383 qpair failed and we were unable to recover it. 
00:27:16.383 [2024-11-20 07:23:20.651810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.383 [2024-11-20 07:23:20.651842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.383 qpair failed and we were unable to recover it. 00:27:16.383 [2024-11-20 07:23:20.652099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.383 [2024-11-20 07:23:20.652134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.383 qpair failed and we were unable to recover it. 00:27:16.383 [2024-11-20 07:23:20.652288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.383 [2024-11-20 07:23:20.652321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.383 qpair failed and we were unable to recover it. 00:27:16.383 [2024-11-20 07:23:20.652465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.383 [2024-11-20 07:23:20.652499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.383 qpair failed and we were unable to recover it. 00:27:16.383 [2024-11-20 07:23:20.652777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.383 [2024-11-20 07:23:20.652810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.383 qpair failed and we were unable to recover it. 
00:27:16.383 [2024-11-20 07:23:20.653016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.383 [2024-11-20 07:23:20.653051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420
00:27:16.383 qpair failed and we were unable to recover it.
00:27:16.383 [... the same three-line sequence (posix.c:1054:posix_sock_create connect() failed, errno = 111; nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats continuously from 2024-11-20 07:23:20.653307 through 07:23:20.682528 ...]
00:27:16.387 [2024-11-20 07:23:20.682769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.387 [2024-11-20 07:23:20.682801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.387 qpair failed and we were unable to recover it. 00:27:16.387 [2024-11-20 07:23:20.682983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.387 [2024-11-20 07:23:20.683017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.387 qpair failed and we were unable to recover it. 00:27:16.387 [2024-11-20 07:23:20.683226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.387 [2024-11-20 07:23:20.683259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.387 qpair failed and we were unable to recover it. 00:27:16.387 [2024-11-20 07:23:20.683487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.387 [2024-11-20 07:23:20.683519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.387 qpair failed and we were unable to recover it. 00:27:16.387 [2024-11-20 07:23:20.683807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.387 [2024-11-20 07:23:20.683841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.387 qpair failed and we were unable to recover it. 
00:27:16.387 [2024-11-20 07:23:20.684046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.387 [2024-11-20 07:23:20.684081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.387 qpair failed and we were unable to recover it. 00:27:16.387 [2024-11-20 07:23:20.684232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.387 [2024-11-20 07:23:20.684265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.387 qpair failed and we were unable to recover it. 00:27:16.387 [2024-11-20 07:23:20.684493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.387 [2024-11-20 07:23:20.684527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.387 qpair failed and we were unable to recover it. 00:27:16.387 [2024-11-20 07:23:20.684732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.387 [2024-11-20 07:23:20.684765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.387 qpair failed and we were unable to recover it. 00:27:16.387 [2024-11-20 07:23:20.684980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.387 [2024-11-20 07:23:20.685020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.387 qpair failed and we were unable to recover it. 
00:27:16.387 [2024-11-20 07:23:20.685215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.387 [2024-11-20 07:23:20.685248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.387 qpair failed and we were unable to recover it. 00:27:16.387 [2024-11-20 07:23:20.685381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.387 [2024-11-20 07:23:20.685413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.387 qpair failed and we were unable to recover it. 00:27:16.387 [2024-11-20 07:23:20.685557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.387 [2024-11-20 07:23:20.685590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.387 qpair failed and we were unable to recover it. 00:27:16.387 [2024-11-20 07:23:20.685884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.387 [2024-11-20 07:23:20.685916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.387 qpair failed and we were unable to recover it. 00:27:16.387 [2024-11-20 07:23:20.686129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.387 [2024-11-20 07:23:20.686164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.387 qpair failed and we were unable to recover it. 
00:27:16.387 [2024-11-20 07:23:20.686445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.387 [2024-11-20 07:23:20.686479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.387 qpair failed and we were unable to recover it. 00:27:16.387 [2024-11-20 07:23:20.686696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.387 [2024-11-20 07:23:20.686729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.387 qpair failed and we were unable to recover it. 00:27:16.387 [2024-11-20 07:23:20.686983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.387 [2024-11-20 07:23:20.687018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.387 qpair failed and we were unable to recover it. 00:27:16.387 [2024-11-20 07:23:20.687237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.387 [2024-11-20 07:23:20.687271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.387 qpair failed and we were unable to recover it. 00:27:16.387 [2024-11-20 07:23:20.687526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.387 [2024-11-20 07:23:20.687558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.387 qpair failed and we were unable to recover it. 
00:27:16.387 [2024-11-20 07:23:20.687698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.387 [2024-11-20 07:23:20.687731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.387 qpair failed and we were unable to recover it. 00:27:16.387 [2024-11-20 07:23:20.688018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.387 [2024-11-20 07:23:20.688052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.387 qpair failed and we were unable to recover it. 00:27:16.387 [2024-11-20 07:23:20.688257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.388 [2024-11-20 07:23:20.688291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.388 qpair failed and we were unable to recover it. 00:27:16.388 [2024-11-20 07:23:20.688442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.388 [2024-11-20 07:23:20.688475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.388 qpair failed and we were unable to recover it. 00:27:16.388 [2024-11-20 07:23:20.688770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.388 [2024-11-20 07:23:20.688803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.388 qpair failed and we were unable to recover it. 
00:27:16.388 [2024-11-20 07:23:20.689052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.388 [2024-11-20 07:23:20.689087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.388 qpair failed and we were unable to recover it. 00:27:16.388 [2024-11-20 07:23:20.689306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.388 [2024-11-20 07:23:20.689340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.388 qpair failed and we were unable to recover it. 00:27:16.388 [2024-11-20 07:23:20.689471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.388 [2024-11-20 07:23:20.689504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.388 qpair failed and we were unable to recover it. 00:27:16.388 [2024-11-20 07:23:20.689700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.388 [2024-11-20 07:23:20.689732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.388 qpair failed and we were unable to recover it. 00:27:16.388 [2024-11-20 07:23:20.689964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.388 [2024-11-20 07:23:20.689999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.388 qpair failed and we were unable to recover it. 
00:27:16.388 [2024-11-20 07:23:20.690279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.388 [2024-11-20 07:23:20.690311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.388 qpair failed and we were unable to recover it. 00:27:16.388 [2024-11-20 07:23:20.690535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.388 [2024-11-20 07:23:20.690569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.388 qpair failed and we were unable to recover it. 00:27:16.388 [2024-11-20 07:23:20.690843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.388 [2024-11-20 07:23:20.690876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.388 qpair failed and we were unable to recover it. 00:27:16.388 [2024-11-20 07:23:20.691256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.388 [2024-11-20 07:23:20.691290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.388 qpair failed and we were unable to recover it. 00:27:16.388 [2024-11-20 07:23:20.691447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.388 [2024-11-20 07:23:20.691481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.388 qpair failed and we were unable to recover it. 
00:27:16.388 [2024-11-20 07:23:20.691733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.388 [2024-11-20 07:23:20.691766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.388 qpair failed and we were unable to recover it. 00:27:16.388 [2024-11-20 07:23:20.691904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.388 [2024-11-20 07:23:20.691935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.388 qpair failed and we were unable to recover it. 00:27:16.388 [2024-11-20 07:23:20.692203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.388 [2024-11-20 07:23:20.692237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.388 qpair failed and we were unable to recover it. 00:27:16.388 [2024-11-20 07:23:20.692468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.388 [2024-11-20 07:23:20.692502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.388 qpair failed and we were unable to recover it. 00:27:16.388 [2024-11-20 07:23:20.692769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.388 [2024-11-20 07:23:20.692801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.388 qpair failed and we were unable to recover it. 
00:27:16.388 [2024-11-20 07:23:20.692998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.388 [2024-11-20 07:23:20.693033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.388 qpair failed and we were unable to recover it. 00:27:16.388 [2024-11-20 07:23:20.693234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.388 [2024-11-20 07:23:20.693268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.388 qpair failed and we were unable to recover it. 00:27:16.388 [2024-11-20 07:23:20.693469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.388 [2024-11-20 07:23:20.693502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.388 qpair failed and we were unable to recover it. 00:27:16.388 [2024-11-20 07:23:20.693786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.388 [2024-11-20 07:23:20.693819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.388 qpair failed and we were unable to recover it. 00:27:16.388 [2024-11-20 07:23:20.694095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.388 [2024-11-20 07:23:20.694130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.388 qpair failed and we were unable to recover it. 
00:27:16.388 [2024-11-20 07:23:20.694335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.388 [2024-11-20 07:23:20.694367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.388 qpair failed and we were unable to recover it. 00:27:16.388 [2024-11-20 07:23:20.694497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.388 [2024-11-20 07:23:20.694530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.388 qpair failed and we were unable to recover it. 00:27:16.388 [2024-11-20 07:23:20.694769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.388 [2024-11-20 07:23:20.694803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.388 qpair failed and we were unable to recover it. 00:27:16.388 [2024-11-20 07:23:20.695074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.388 [2024-11-20 07:23:20.695109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.388 qpair failed and we were unable to recover it. 00:27:16.389 [2024-11-20 07:23:20.695331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.389 [2024-11-20 07:23:20.695370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.389 qpair failed and we were unable to recover it. 
00:27:16.389 [2024-11-20 07:23:20.695575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.389 [2024-11-20 07:23:20.695607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.389 qpair failed and we were unable to recover it. 00:27:16.389 [2024-11-20 07:23:20.695885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.389 [2024-11-20 07:23:20.695918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.389 qpair failed and we were unable to recover it. 00:27:16.389 [2024-11-20 07:23:20.696205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.389 [2024-11-20 07:23:20.696238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.389 qpair failed and we were unable to recover it. 00:27:16.389 [2024-11-20 07:23:20.696435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.389 [2024-11-20 07:23:20.696467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.389 qpair failed and we were unable to recover it. 00:27:16.389 [2024-11-20 07:23:20.696781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.389 [2024-11-20 07:23:20.696814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.389 qpair failed and we were unable to recover it. 
00:27:16.389 [2024-11-20 07:23:20.697125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.389 [2024-11-20 07:23:20.697159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.389 qpair failed and we were unable to recover it. 00:27:16.389 [2024-11-20 07:23:20.697410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.389 [2024-11-20 07:23:20.697442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.389 qpair failed and we were unable to recover it. 00:27:16.389 [2024-11-20 07:23:20.697787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.389 [2024-11-20 07:23:20.697821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.389 qpair failed and we were unable to recover it. 00:27:16.389 [2024-11-20 07:23:20.697982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.389 [2024-11-20 07:23:20.698015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.389 qpair failed and we were unable to recover it. 00:27:16.389 [2024-11-20 07:23:20.698168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.389 [2024-11-20 07:23:20.698202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.389 qpair failed and we were unable to recover it. 
00:27:16.389 [2024-11-20 07:23:20.698481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.389 [2024-11-20 07:23:20.698514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.389 qpair failed and we were unable to recover it. 00:27:16.389 [2024-11-20 07:23:20.698646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.389 [2024-11-20 07:23:20.698679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.389 qpair failed and we were unable to recover it. 00:27:16.389 [2024-11-20 07:23:20.698932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.389 [2024-11-20 07:23:20.698974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.389 qpair failed and we were unable to recover it. 00:27:16.389 [2024-11-20 07:23:20.699262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.389 [2024-11-20 07:23:20.699296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.389 qpair failed and we were unable to recover it. 00:27:16.389 [2024-11-20 07:23:20.699567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.389 [2024-11-20 07:23:20.699600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.389 qpair failed and we were unable to recover it. 
00:27:16.389 [2024-11-20 07:23:20.699891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.389 [2024-11-20 07:23:20.699923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.389 qpair failed and we were unable to recover it. 00:27:16.389 [2024-11-20 07:23:20.700081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.389 [2024-11-20 07:23:20.700115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.389 qpair failed and we were unable to recover it. 00:27:16.389 [2024-11-20 07:23:20.700326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.389 [2024-11-20 07:23:20.700359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.389 qpair failed and we were unable to recover it. 00:27:16.389 [2024-11-20 07:23:20.700660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.389 [2024-11-20 07:23:20.700693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.389 qpair failed and we were unable to recover it. 00:27:16.389 [2024-11-20 07:23:20.700895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.389 [2024-11-20 07:23:20.700927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.389 qpair failed and we were unable to recover it. 
00:27:16.389 [2024-11-20 07:23:20.701137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.389 [2024-11-20 07:23:20.701171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.389 qpair failed and we were unable to recover it. 00:27:16.389 [2024-11-20 07:23:20.701327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.389 [2024-11-20 07:23:20.701360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.389 qpair failed and we were unable to recover it. 00:27:16.389 [2024-11-20 07:23:20.701563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.389 [2024-11-20 07:23:20.701596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.389 qpair failed and we were unable to recover it. 00:27:16.389 [2024-11-20 07:23:20.701872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.389 [2024-11-20 07:23:20.701906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.389 qpair failed and we were unable to recover it. 00:27:16.389 [2024-11-20 07:23:20.702219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.389 [2024-11-20 07:23:20.702253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.389 qpair failed and we were unable to recover it. 
00:27:16.389 [2024-11-20 07:23:20.702534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.389 [2024-11-20 07:23:20.702566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.389 qpair failed and we were unable to recover it. 
[identical connect() failures (errno = 111) for tqpair=0x7f0e3c000b90 addr=10.0.0.2 port=4420 repeat from 07:23:20.702712 through 07:23:20.731763; duplicate log lines elided]
00:27:16.393 [2024-11-20 07:23:20.731961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.393 [2024-11-20 07:23:20.731996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.393 qpair failed and we were unable to recover it. 00:27:16.393 [2024-11-20 07:23:20.732188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.393 [2024-11-20 07:23:20.732221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.393 qpair failed and we were unable to recover it. 00:27:16.393 [2024-11-20 07:23:20.732464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.393 [2024-11-20 07:23:20.732497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.393 qpair failed and we were unable to recover it. 00:27:16.393 [2024-11-20 07:23:20.732718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.393 [2024-11-20 07:23:20.732751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.393 qpair failed and we were unable to recover it. 00:27:16.393 [2024-11-20 07:23:20.733041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.393 [2024-11-20 07:23:20.733076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.393 qpair failed and we were unable to recover it. 
00:27:16.393 [2024-11-20 07:23:20.733289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.393 [2024-11-20 07:23:20.733322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.393 qpair failed and we were unable to recover it. 00:27:16.393 [2024-11-20 07:23:20.733532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.393 [2024-11-20 07:23:20.733564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.393 qpair failed and we were unable to recover it. 00:27:16.393 [2024-11-20 07:23:20.733825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.393 [2024-11-20 07:23:20.733858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.393 qpair failed and we were unable to recover it. 00:27:16.393 [2024-11-20 07:23:20.734081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.393 [2024-11-20 07:23:20.734115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.393 qpair failed and we were unable to recover it. 00:27:16.393 [2024-11-20 07:23:20.734368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.393 [2024-11-20 07:23:20.734402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.393 qpair failed and we were unable to recover it. 
00:27:16.393 [2024-11-20 07:23:20.734719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.393 [2024-11-20 07:23:20.734753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.393 qpair failed and we were unable to recover it. 00:27:16.393 [2024-11-20 07:23:20.734968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.393 [2024-11-20 07:23:20.735001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.394 qpair failed and we were unable to recover it. 00:27:16.394 [2024-11-20 07:23:20.735206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.394 [2024-11-20 07:23:20.735239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.394 qpair failed and we were unable to recover it. 00:27:16.394 [2024-11-20 07:23:20.735388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.394 [2024-11-20 07:23:20.735422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.394 qpair failed and we were unable to recover it. 00:27:16.394 [2024-11-20 07:23:20.735653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.394 [2024-11-20 07:23:20.735684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.394 qpair failed and we were unable to recover it. 
00:27:16.394 [2024-11-20 07:23:20.735938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.394 [2024-11-20 07:23:20.735980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.394 qpair failed and we were unable to recover it. 00:27:16.394 [2024-11-20 07:23:20.736165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.394 [2024-11-20 07:23:20.736198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.394 qpair failed and we were unable to recover it. 00:27:16.394 [2024-11-20 07:23:20.736420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.394 [2024-11-20 07:23:20.736453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.394 qpair failed and we were unable to recover it. 00:27:16.394 [2024-11-20 07:23:20.736715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.394 [2024-11-20 07:23:20.736749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.394 qpair failed and we were unable to recover it. 00:27:16.394 [2024-11-20 07:23:20.736931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.394 [2024-11-20 07:23:20.736980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.394 qpair failed and we were unable to recover it. 
00:27:16.394 [2024-11-20 07:23:20.737184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.394 [2024-11-20 07:23:20.737217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.394 qpair failed and we were unable to recover it. 00:27:16.394 [2024-11-20 07:23:20.737414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.394 [2024-11-20 07:23:20.737447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.394 qpair failed and we were unable to recover it. 00:27:16.394 [2024-11-20 07:23:20.737724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.394 [2024-11-20 07:23:20.737758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.394 qpair failed and we were unable to recover it. 00:27:16.394 [2024-11-20 07:23:20.738018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.394 [2024-11-20 07:23:20.738053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.394 qpair failed and we were unable to recover it. 00:27:16.394 [2024-11-20 07:23:20.738210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.394 [2024-11-20 07:23:20.738243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.394 qpair failed and we were unable to recover it. 
00:27:16.394 [2024-11-20 07:23:20.738424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.394 [2024-11-20 07:23:20.738456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.394 qpair failed and we were unable to recover it. 00:27:16.394 [2024-11-20 07:23:20.738676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.394 [2024-11-20 07:23:20.738709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.394 qpair failed and we were unable to recover it. 00:27:16.394 [2024-11-20 07:23:20.738986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.394 [2024-11-20 07:23:20.739021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.394 qpair failed and we were unable to recover it. 00:27:16.394 [2024-11-20 07:23:20.739267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.394 [2024-11-20 07:23:20.739299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.394 qpair failed and we were unable to recover it. 00:27:16.394 [2024-11-20 07:23:20.739554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.394 [2024-11-20 07:23:20.739588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.394 qpair failed and we were unable to recover it. 
00:27:16.394 [2024-11-20 07:23:20.739903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.394 [2024-11-20 07:23:20.739935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.394 qpair failed and we were unable to recover it. 00:27:16.394 [2024-11-20 07:23:20.740148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.394 [2024-11-20 07:23:20.740180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.394 qpair failed and we were unable to recover it. 00:27:16.394 [2024-11-20 07:23:20.740410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.394 [2024-11-20 07:23:20.740444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.394 qpair failed and we were unable to recover it. 00:27:16.394 [2024-11-20 07:23:20.740731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.394 [2024-11-20 07:23:20.740765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.394 qpair failed and we were unable to recover it. 00:27:16.394 [2024-11-20 07:23:20.741048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.394 [2024-11-20 07:23:20.741082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.394 qpair failed and we were unable to recover it. 
00:27:16.394 [2024-11-20 07:23:20.741236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.394 [2024-11-20 07:23:20.741269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.394 qpair failed and we were unable to recover it. 00:27:16.394 [2024-11-20 07:23:20.741419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.394 [2024-11-20 07:23:20.741452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.394 qpair failed and we were unable to recover it. 00:27:16.394 [2024-11-20 07:23:20.741713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.394 [2024-11-20 07:23:20.741746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.394 qpair failed and we were unable to recover it. 00:27:16.394 [2024-11-20 07:23:20.741974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.394 [2024-11-20 07:23:20.742009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.394 qpair failed and we were unable to recover it. 00:27:16.394 [2024-11-20 07:23:20.742207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.394 [2024-11-20 07:23:20.742238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.394 qpair failed and we were unable to recover it. 
00:27:16.394 [2024-11-20 07:23:20.742512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.394 [2024-11-20 07:23:20.742545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.394 qpair failed and we were unable to recover it. 00:27:16.394 [2024-11-20 07:23:20.742830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.394 [2024-11-20 07:23:20.742864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.394 qpair failed and we were unable to recover it. 00:27:16.394 [2024-11-20 07:23:20.743143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.394 [2024-11-20 07:23:20.743177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.394 qpair failed and we were unable to recover it. 00:27:16.394 [2024-11-20 07:23:20.743339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.395 [2024-11-20 07:23:20.743373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.395 qpair failed and we were unable to recover it. 00:27:16.395 [2024-11-20 07:23:20.743491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.395 [2024-11-20 07:23:20.743523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.395 qpair failed and we were unable to recover it. 
00:27:16.395 [2024-11-20 07:23:20.743802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.395 [2024-11-20 07:23:20.743834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.395 qpair failed and we were unable to recover it. 00:27:16.395 [2024-11-20 07:23:20.744111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.395 [2024-11-20 07:23:20.744144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.395 qpair failed and we were unable to recover it. 00:27:16.395 [2024-11-20 07:23:20.744425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.395 [2024-11-20 07:23:20.744458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.395 qpair failed and we were unable to recover it. 00:27:16.395 [2024-11-20 07:23:20.744763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.395 [2024-11-20 07:23:20.744794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.395 qpair failed and we were unable to recover it. 00:27:16.395 [2024-11-20 07:23:20.745022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.395 [2024-11-20 07:23:20.745057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.395 qpair failed and we were unable to recover it. 
00:27:16.395 [2024-11-20 07:23:20.745337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.395 [2024-11-20 07:23:20.745370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.395 qpair failed and we were unable to recover it. 00:27:16.395 [2024-11-20 07:23:20.745629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.395 [2024-11-20 07:23:20.745662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.395 qpair failed and we were unable to recover it. 00:27:16.395 [2024-11-20 07:23:20.745983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.395 [2024-11-20 07:23:20.746018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.395 qpair failed and we were unable to recover it. 00:27:16.395 [2024-11-20 07:23:20.746282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.395 [2024-11-20 07:23:20.746315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.395 qpair failed and we were unable to recover it. 00:27:16.395 [2024-11-20 07:23:20.746594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.395 [2024-11-20 07:23:20.746626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.395 qpair failed and we were unable to recover it. 
00:27:16.395 [2024-11-20 07:23:20.746853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.395 [2024-11-20 07:23:20.746886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.395 qpair failed and we were unable to recover it. 00:27:16.395 [2024-11-20 07:23:20.747185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.395 [2024-11-20 07:23:20.747219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.395 qpair failed and we were unable to recover it. 00:27:16.395 [2024-11-20 07:23:20.747413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.395 [2024-11-20 07:23:20.747447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.395 qpair failed and we were unable to recover it. 00:27:16.395 [2024-11-20 07:23:20.747705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.395 [2024-11-20 07:23:20.747738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.395 qpair failed and we were unable to recover it. 00:27:16.395 [2024-11-20 07:23:20.748051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.395 [2024-11-20 07:23:20.748093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.395 qpair failed and we were unable to recover it. 
00:27:16.395 [2024-11-20 07:23:20.748226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.395 [2024-11-20 07:23:20.748257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.395 qpair failed and we were unable to recover it. 00:27:16.395 [2024-11-20 07:23:20.748531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.395 [2024-11-20 07:23:20.748564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.395 qpair failed and we were unable to recover it. 00:27:16.395 [2024-11-20 07:23:20.748717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.395 [2024-11-20 07:23:20.748749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.395 qpair failed and we were unable to recover it. 00:27:16.395 [2024-11-20 07:23:20.749050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.395 [2024-11-20 07:23:20.749084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.395 qpair failed and we were unable to recover it. 00:27:16.395 [2024-11-20 07:23:20.749367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.395 [2024-11-20 07:23:20.749402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.395 qpair failed and we were unable to recover it. 
00:27:16.395 [2024-11-20 07:23:20.749639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.395 [2024-11-20 07:23:20.749672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.395 qpair failed and we were unable to recover it. 00:27:16.395 [2024-11-20 07:23:20.749867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.395 [2024-11-20 07:23:20.749900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.395 qpair failed and we were unable to recover it. 00:27:16.395 [2024-11-20 07:23:20.750242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.395 [2024-11-20 07:23:20.750275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.395 qpair failed and we were unable to recover it. 00:27:16.395 [2024-11-20 07:23:20.750471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.395 [2024-11-20 07:23:20.750504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.395 qpair failed and we were unable to recover it. 00:27:16.395 [2024-11-20 07:23:20.750788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.395 [2024-11-20 07:23:20.750820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.395 qpair failed and we were unable to recover it. 
00:27:16.395 [2024-11-20 07:23:20.751104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.395 [2024-11-20 07:23:20.751140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.395 qpair failed and we were unable to recover it. 00:27:16.395 [2024-11-20 07:23:20.751422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.395 [2024-11-20 07:23:20.751455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.395 qpair failed and we were unable to recover it. 00:27:16.395 [2024-11-20 07:23:20.751657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.395 [2024-11-20 07:23:20.751691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.395 qpair failed and we were unable to recover it. 00:27:16.395 [2024-11-20 07:23:20.751903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.395 [2024-11-20 07:23:20.751937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.395 qpair failed and we were unable to recover it. 00:27:16.395 [2024-11-20 07:23:20.752091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.395 [2024-11-20 07:23:20.752124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.396 qpair failed and we were unable to recover it. 
00:27:16.396 [2024-11-20 07:23:20.752377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.396 [2024-11-20 07:23:20.752410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.396 qpair failed and we were unable to recover it.
[identical connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it" messages repeated for tqpair=0x7f0e3c000b90, addr=10.0.0.2, port=4420 through 2024-11-20 07:23:20.780591]
00:27:16.399 [2024-11-20 07:23:20.780787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.399 [2024-11-20 07:23:20.780820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.399 qpair failed and we were unable to recover it. 00:27:16.399 [2024-11-20 07:23:20.781124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.399 [2024-11-20 07:23:20.781158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.399 qpair failed and we were unable to recover it. 00:27:16.399 [2024-11-20 07:23:20.781412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.399 [2024-11-20 07:23:20.781446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.399 qpair failed and we were unable to recover it. 00:27:16.399 [2024-11-20 07:23:20.781668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.399 [2024-11-20 07:23:20.781700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.399 qpair failed and we were unable to recover it. 00:27:16.399 [2024-11-20 07:23:20.781998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.399 [2024-11-20 07:23:20.782034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.399 qpair failed and we were unable to recover it. 
00:27:16.399 [2024-11-20 07:23:20.782176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.399 [2024-11-20 07:23:20.782210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.399 qpair failed and we were unable to recover it. 00:27:16.399 [2024-11-20 07:23:20.782411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.400 [2024-11-20 07:23:20.782445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.400 qpair failed and we were unable to recover it. 00:27:16.400 [2024-11-20 07:23:20.782745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.400 [2024-11-20 07:23:20.782778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.400 qpair failed and we were unable to recover it. 00:27:16.400 [2024-11-20 07:23:20.782924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.400 [2024-11-20 07:23:20.782980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.400 qpair failed and we were unable to recover it. 00:27:16.400 [2024-11-20 07:23:20.783263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.400 [2024-11-20 07:23:20.783296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.400 qpair failed and we were unable to recover it. 
00:27:16.400 [2024-11-20 07:23:20.783547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.400 [2024-11-20 07:23:20.783578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.400 qpair failed and we were unable to recover it. 00:27:16.400 [2024-11-20 07:23:20.783852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.400 [2024-11-20 07:23:20.783885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.400 qpair failed and we were unable to recover it. 00:27:16.400 [2024-11-20 07:23:20.784088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.400 [2024-11-20 07:23:20.784121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.400 qpair failed and we were unable to recover it. 00:27:16.400 [2024-11-20 07:23:20.784317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.400 [2024-11-20 07:23:20.784350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.400 qpair failed and we were unable to recover it. 00:27:16.400 [2024-11-20 07:23:20.784543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.400 [2024-11-20 07:23:20.784575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.400 qpair failed and we were unable to recover it. 
00:27:16.400 [2024-11-20 07:23:20.784808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.400 [2024-11-20 07:23:20.784841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.400 qpair failed and we were unable to recover it. 00:27:16.400 [2024-11-20 07:23:20.785133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.400 [2024-11-20 07:23:20.785167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.400 qpair failed and we were unable to recover it. 00:27:16.400 [2024-11-20 07:23:20.785355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.400 [2024-11-20 07:23:20.785389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.400 qpair failed and we were unable to recover it. 00:27:16.400 [2024-11-20 07:23:20.785597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.400 [2024-11-20 07:23:20.785629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.400 qpair failed and we were unable to recover it. 00:27:16.400 [2024-11-20 07:23:20.785896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.400 [2024-11-20 07:23:20.785929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.400 qpair failed and we were unable to recover it. 
00:27:16.400 [2024-11-20 07:23:20.786194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.400 [2024-11-20 07:23:20.786228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.400 qpair failed and we were unable to recover it. 00:27:16.400 [2024-11-20 07:23:20.786370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.400 [2024-11-20 07:23:20.786403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.400 qpair failed and we were unable to recover it. 00:27:16.400 [2024-11-20 07:23:20.786735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.400 [2024-11-20 07:23:20.786768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.400 qpair failed and we were unable to recover it. 00:27:16.400 [2024-11-20 07:23:20.787032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.400 [2024-11-20 07:23:20.787067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.400 qpair failed and we were unable to recover it. 00:27:16.400 [2024-11-20 07:23:20.787263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.400 [2024-11-20 07:23:20.787295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.400 qpair failed and we were unable to recover it. 
00:27:16.400 [2024-11-20 07:23:20.787494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.400 [2024-11-20 07:23:20.787527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.400 qpair failed and we were unable to recover it. 00:27:16.400 [2024-11-20 07:23:20.787803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.400 [2024-11-20 07:23:20.787836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.400 qpair failed and we were unable to recover it. 00:27:16.400 [2024-11-20 07:23:20.788118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.400 [2024-11-20 07:23:20.788153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.400 qpair failed and we were unable to recover it. 00:27:16.400 [2024-11-20 07:23:20.788281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.400 [2024-11-20 07:23:20.788314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.400 qpair failed and we were unable to recover it. 00:27:16.400 [2024-11-20 07:23:20.788506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.400 [2024-11-20 07:23:20.788546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.400 qpair failed and we were unable to recover it. 
00:27:16.400 [2024-11-20 07:23:20.788803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.400 [2024-11-20 07:23:20.788835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.400 qpair failed and we were unable to recover it. 00:27:16.400 [2024-11-20 07:23:20.789037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.400 [2024-11-20 07:23:20.789072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.400 qpair failed and we were unable to recover it. 00:27:16.400 [2024-11-20 07:23:20.789266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.400 [2024-11-20 07:23:20.789299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.400 qpair failed and we were unable to recover it. 00:27:16.400 [2024-11-20 07:23:20.789500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.400 [2024-11-20 07:23:20.789534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.400 qpair failed and we were unable to recover it. 00:27:16.400 [2024-11-20 07:23:20.789839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.400 [2024-11-20 07:23:20.789873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.400 qpair failed and we were unable to recover it. 
00:27:16.400 [2024-11-20 07:23:20.790002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.400 [2024-11-20 07:23:20.790037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.400 qpair failed and we were unable to recover it. 00:27:16.400 [2024-11-20 07:23:20.790315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.400 [2024-11-20 07:23:20.790348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.400 qpair failed and we were unable to recover it. 00:27:16.400 [2024-11-20 07:23:20.790645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.400 [2024-11-20 07:23:20.790679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.400 qpair failed and we were unable to recover it. 00:27:16.401 [2024-11-20 07:23:20.790880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.401 [2024-11-20 07:23:20.790912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.401 qpair failed and we were unable to recover it. 00:27:16.401 [2024-11-20 07:23:20.791133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.401 [2024-11-20 07:23:20.791167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.401 qpair failed and we were unable to recover it. 
00:27:16.401 [2024-11-20 07:23:20.791394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.401 [2024-11-20 07:23:20.791427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.401 qpair failed and we were unable to recover it. 00:27:16.401 [2024-11-20 07:23:20.791730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.401 [2024-11-20 07:23:20.791762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.401 qpair failed and we were unable to recover it. 00:27:16.401 [2024-11-20 07:23:20.792026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.401 [2024-11-20 07:23:20.792061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.401 qpair failed and we were unable to recover it. 00:27:16.401 [2024-11-20 07:23:20.792313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.401 [2024-11-20 07:23:20.792347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.401 qpair failed and we were unable to recover it. 00:27:16.401 [2024-11-20 07:23:20.792547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.401 [2024-11-20 07:23:20.792580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.401 qpair failed and we were unable to recover it. 
00:27:16.401 [2024-11-20 07:23:20.792832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.401 [2024-11-20 07:23:20.792864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.401 qpair failed and we were unable to recover it. 00:27:16.401 [2024-11-20 07:23:20.793141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.401 [2024-11-20 07:23:20.793176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.401 qpair failed and we were unable to recover it. 00:27:16.401 [2024-11-20 07:23:20.793385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.401 [2024-11-20 07:23:20.793417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.401 qpair failed and we were unable to recover it. 00:27:16.401 [2024-11-20 07:23:20.793637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.401 [2024-11-20 07:23:20.793670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.401 qpair failed and we were unable to recover it. 00:27:16.401 [2024-11-20 07:23:20.793968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.401 [2024-11-20 07:23:20.794002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.401 qpair failed and we were unable to recover it. 
00:27:16.401 [2024-11-20 07:23:20.794224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.401 [2024-11-20 07:23:20.794257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.401 qpair failed and we were unable to recover it. 00:27:16.401 [2024-11-20 07:23:20.794403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.401 [2024-11-20 07:23:20.794436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.401 qpair failed and we were unable to recover it. 00:27:16.401 [2024-11-20 07:23:20.794691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.401 [2024-11-20 07:23:20.794724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.401 qpair failed and we were unable to recover it. 00:27:16.401 [2024-11-20 07:23:20.794920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.401 [2024-11-20 07:23:20.794979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.401 qpair failed and we were unable to recover it. 00:27:16.401 [2024-11-20 07:23:20.795211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.401 [2024-11-20 07:23:20.795244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.401 qpair failed and we were unable to recover it. 
00:27:16.401 [2024-11-20 07:23:20.795445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.401 [2024-11-20 07:23:20.795478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.401 qpair failed and we were unable to recover it. 00:27:16.401 [2024-11-20 07:23:20.795782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.401 [2024-11-20 07:23:20.795815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.401 qpair failed and we were unable to recover it. 00:27:16.401 [2024-11-20 07:23:20.795974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.401 [2024-11-20 07:23:20.796009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.401 qpair failed and we were unable to recover it. 00:27:16.401 [2024-11-20 07:23:20.796304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.401 [2024-11-20 07:23:20.796337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.401 qpair failed and we were unable to recover it. 00:27:16.401 [2024-11-20 07:23:20.796487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.401 [2024-11-20 07:23:20.796518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.401 qpair failed and we were unable to recover it. 
00:27:16.401 [2024-11-20 07:23:20.796723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.401 [2024-11-20 07:23:20.796755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.401 qpair failed and we were unable to recover it. 00:27:16.401 [2024-11-20 07:23:20.797036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.401 [2024-11-20 07:23:20.797069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.401 qpair failed and we were unable to recover it. 00:27:16.401 [2024-11-20 07:23:20.797275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.401 [2024-11-20 07:23:20.797307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.401 qpair failed and we were unable to recover it. 00:27:16.401 [2024-11-20 07:23:20.797590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.401 [2024-11-20 07:23:20.797623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.401 qpair failed and we were unable to recover it. 00:27:16.401 [2024-11-20 07:23:20.797823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.401 [2024-11-20 07:23:20.797854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.401 qpair failed and we were unable to recover it. 
00:27:16.401 [2024-11-20 07:23:20.798103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.401 [2024-11-20 07:23:20.798138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.401 qpair failed and we were unable to recover it. 00:27:16.401 [2024-11-20 07:23:20.798297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.401 [2024-11-20 07:23:20.798329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.401 qpair failed and we were unable to recover it. 00:27:16.401 [2024-11-20 07:23:20.798480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.401 [2024-11-20 07:23:20.798511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.401 qpair failed and we were unable to recover it. 00:27:16.401 [2024-11-20 07:23:20.798788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.401 [2024-11-20 07:23:20.798821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.401 qpair failed and we were unable to recover it. 00:27:16.401 [2024-11-20 07:23:20.799092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.402 [2024-11-20 07:23:20.799132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.402 qpair failed and we were unable to recover it. 
00:27:16.402 [2024-11-20 07:23:20.799290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.402 [2024-11-20 07:23:20.799323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.402 qpair failed and we were unable to recover it. 00:27:16.402 [2024-11-20 07:23:20.799626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.402 [2024-11-20 07:23:20.799659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.402 qpair failed and we were unable to recover it. 00:27:16.402 [2024-11-20 07:23:20.799801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.402 [2024-11-20 07:23:20.799834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.402 qpair failed and we were unable to recover it. 00:27:16.402 [2024-11-20 07:23:20.800028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.402 [2024-11-20 07:23:20.800062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.402 qpair failed and we were unable to recover it. 00:27:16.402 [2024-11-20 07:23:20.800261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.402 [2024-11-20 07:23:20.800295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.402 qpair failed and we were unable to recover it. 
00:27:16.402 [2024-11-20 07:23:20.800444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.402 [2024-11-20 07:23:20.800477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420
00:27:16.402 qpair failed and we were unable to recover it.
[... the same three-line sequence — posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111, followed by nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error with addr=10.0.0.2, port=4420, followed by "qpair failed and we were unable to recover it." — repeats continuously from 07:23:20.800444 through 07:23:20.829830: first for tqpair=0x7f0e3c000b90 (through 07:23:20.805337), then for tqpair=0x7f0e38000b90 (from 07:23:20.805648 onward). Repeated identical records elided. ...]
00:27:16.406 [2024-11-20 07:23:20.830057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.406 [2024-11-20 07:23:20.830091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:16.406 qpair failed and we were unable to recover it. 00:27:16.406 [2024-11-20 07:23:20.830214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.406 [2024-11-20 07:23:20.830246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:16.406 qpair failed and we were unable to recover it. 00:27:16.406 [2024-11-20 07:23:20.830447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.406 [2024-11-20 07:23:20.830479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:16.406 qpair failed and we were unable to recover it. 00:27:16.406 [2024-11-20 07:23:20.830804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.406 [2024-11-20 07:23:20.830836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:16.406 qpair failed and we were unable to recover it. 00:27:16.406 [2024-11-20 07:23:20.831097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.406 [2024-11-20 07:23:20.831130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:16.406 qpair failed and we were unable to recover it. 
00:27:16.406 [2024-11-20 07:23:20.831382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.406 [2024-11-20 07:23:20.831414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:16.406 qpair failed and we were unable to recover it. 00:27:16.406 [2024-11-20 07:23:20.831685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.406 [2024-11-20 07:23:20.831717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:16.406 qpair failed and we were unable to recover it. 00:27:16.406 [2024-11-20 07:23:20.831929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.406 [2024-11-20 07:23:20.831971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:16.406 qpair failed and we were unable to recover it. 00:27:16.406 [2024-11-20 07:23:20.832177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.406 [2024-11-20 07:23:20.832208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:16.406 qpair failed and we were unable to recover it. 00:27:16.406 [2024-11-20 07:23:20.832407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.406 [2024-11-20 07:23:20.832439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:16.406 qpair failed and we were unable to recover it. 
00:27:16.406 [2024-11-20 07:23:20.832727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.406 [2024-11-20 07:23:20.832758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:16.406 qpair failed and we were unable to recover it. 00:27:16.406 [2024-11-20 07:23:20.833034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.406 [2024-11-20 07:23:20.833068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:16.406 qpair failed and we were unable to recover it. 00:27:16.406 [2024-11-20 07:23:20.833269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.406 [2024-11-20 07:23:20.833301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:16.406 qpair failed and we were unable to recover it. 00:27:16.406 [2024-11-20 07:23:20.833453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.406 [2024-11-20 07:23:20.833485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:16.406 qpair failed and we were unable to recover it. 00:27:16.406 [2024-11-20 07:23:20.833776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.406 [2024-11-20 07:23:20.833809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:16.406 qpair failed and we were unable to recover it. 
00:27:16.406 [2024-11-20 07:23:20.834025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.406 [2024-11-20 07:23:20.834059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:16.406 qpair failed and we were unable to recover it. 00:27:16.406 [2024-11-20 07:23:20.834360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.406 [2024-11-20 07:23:20.834392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:16.406 qpair failed and we were unable to recover it. 00:27:16.406 [2024-11-20 07:23:20.834647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.406 [2024-11-20 07:23:20.834680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:16.406 qpair failed and we were unable to recover it. 00:27:16.406 [2024-11-20 07:23:20.834998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.406 [2024-11-20 07:23:20.835031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:16.406 qpair failed and we were unable to recover it. 00:27:16.406 [2024-11-20 07:23:20.835239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.406 [2024-11-20 07:23:20.835271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:16.406 qpair failed and we were unable to recover it. 
00:27:16.406 [2024-11-20 07:23:20.835479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.406 [2024-11-20 07:23:20.835511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:16.407 qpair failed and we were unable to recover it. 00:27:16.407 [2024-11-20 07:23:20.835818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.407 [2024-11-20 07:23:20.835849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:16.407 qpair failed and we were unable to recover it. 00:27:16.407 [2024-11-20 07:23:20.835977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.407 [2024-11-20 07:23:20.836011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:16.407 qpair failed and we were unable to recover it. 00:27:16.407 [2024-11-20 07:23:20.836285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.407 [2024-11-20 07:23:20.836317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:16.407 qpair failed and we were unable to recover it. 00:27:16.407 [2024-11-20 07:23:20.836610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.407 [2024-11-20 07:23:20.836642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:16.407 qpair failed and we were unable to recover it. 
00:27:16.407 [2024-11-20 07:23:20.836861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.407 [2024-11-20 07:23:20.836893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:16.407 qpair failed and we were unable to recover it. 00:27:16.407 [2024-11-20 07:23:20.837117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.407 [2024-11-20 07:23:20.837156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:16.407 qpair failed and we were unable to recover it. 00:27:16.407 [2024-11-20 07:23:20.837294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.407 [2024-11-20 07:23:20.837327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:16.407 qpair failed and we were unable to recover it. 00:27:16.407 [2024-11-20 07:23:20.837632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.407 [2024-11-20 07:23:20.837664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:16.407 qpair failed and we were unable to recover it. 00:27:16.407 [2024-11-20 07:23:20.837968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.407 [2024-11-20 07:23:20.838003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:16.407 qpair failed and we were unable to recover it. 
00:27:16.407 [2024-11-20 07:23:20.838114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.407 [2024-11-20 07:23:20.838146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:16.407 qpair failed and we were unable to recover it. 00:27:16.407 [2024-11-20 07:23:20.838349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.407 [2024-11-20 07:23:20.838381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:16.407 qpair failed and we were unable to recover it. 00:27:16.407 [2024-11-20 07:23:20.838574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.407 [2024-11-20 07:23:20.838606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:16.407 qpair failed and we were unable to recover it. 00:27:16.407 [2024-11-20 07:23:20.838804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.407 [2024-11-20 07:23:20.838836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:16.407 qpair failed and we were unable to recover it. 00:27:16.407 [2024-11-20 07:23:20.839121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.407 [2024-11-20 07:23:20.839156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:16.407 qpair failed and we were unable to recover it. 
00:27:16.407 [2024-11-20 07:23:20.839384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.407 [2024-11-20 07:23:20.839416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:16.407 qpair failed and we were unable to recover it. 00:27:16.407 [2024-11-20 07:23:20.839695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.407 [2024-11-20 07:23:20.839728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:16.407 qpair failed and we were unable to recover it. 00:27:16.407 [2024-11-20 07:23:20.840007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.407 [2024-11-20 07:23:20.840041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:16.407 qpair failed and we were unable to recover it. 00:27:16.407 [2024-11-20 07:23:20.840223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.407 [2024-11-20 07:23:20.840254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:16.407 qpair failed and we were unable to recover it. 00:27:16.407 [2024-11-20 07:23:20.840474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.407 [2024-11-20 07:23:20.840506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:16.407 qpair failed and we were unable to recover it. 
00:27:16.407 [2024-11-20 07:23:20.840765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.407 [2024-11-20 07:23:20.840797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:16.407 qpair failed and we were unable to recover it. 00:27:16.407 [2024-11-20 07:23:20.840983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.407 [2024-11-20 07:23:20.841017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:16.407 qpair failed and we were unable to recover it. 00:27:16.407 [2024-11-20 07:23:20.841297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.407 [2024-11-20 07:23:20.841330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:16.407 qpair failed and we were unable to recover it. 00:27:16.407 [2024-11-20 07:23:20.841615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.407 [2024-11-20 07:23:20.841648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:16.407 qpair failed and we were unable to recover it. 00:27:16.407 [2024-11-20 07:23:20.841839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.407 [2024-11-20 07:23:20.841871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:16.407 qpair failed and we were unable to recover it. 
00:27:16.407 [2024-11-20 07:23:20.842077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.407 [2024-11-20 07:23:20.842111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:16.407 qpair failed and we were unable to recover it. 00:27:16.407 [2024-11-20 07:23:20.842325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.407 [2024-11-20 07:23:20.842357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:16.407 qpair failed and we were unable to recover it. 00:27:16.407 [2024-11-20 07:23:20.842588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.407 [2024-11-20 07:23:20.842620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:16.407 qpair failed and we were unable to recover it. 00:27:16.407 [2024-11-20 07:23:20.842900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.407 [2024-11-20 07:23:20.842933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:16.407 qpair failed and we were unable to recover it. 00:27:16.407 [2024-11-20 07:23:20.843221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.407 [2024-11-20 07:23:20.843253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:16.407 qpair failed and we were unable to recover it. 
00:27:16.407 [2024-11-20 07:23:20.843461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.407 [2024-11-20 07:23:20.843493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:16.407 qpair failed and we were unable to recover it. 00:27:16.407 [2024-11-20 07:23:20.843774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.408 [2024-11-20 07:23:20.843806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:16.408 qpair failed and we were unable to recover it. 00:27:16.408 [2024-11-20 07:23:20.843989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.408 [2024-11-20 07:23:20.844023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:16.408 qpair failed and we were unable to recover it. 00:27:16.408 [2024-11-20 07:23:20.844292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.408 [2024-11-20 07:23:20.844325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:16.408 qpair failed and we were unable to recover it. 00:27:16.408 [2024-11-20 07:23:20.844518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.408 [2024-11-20 07:23:20.844549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:16.408 qpair failed and we were unable to recover it. 
00:27:16.408 [2024-11-20 07:23:20.844825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.408 [2024-11-20 07:23:20.844857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:16.408 qpair failed and we were unable to recover it. 00:27:16.408 [2024-11-20 07:23:20.845148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.408 [2024-11-20 07:23:20.845182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:16.408 qpair failed and we were unable to recover it. 00:27:16.408 [2024-11-20 07:23:20.845405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.408 [2024-11-20 07:23:20.845436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:16.408 qpair failed and we were unable to recover it. 00:27:16.408 [2024-11-20 07:23:20.845735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.408 [2024-11-20 07:23:20.845767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:16.408 qpair failed and we were unable to recover it. 00:27:16.408 [2024-11-20 07:23:20.845968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.408 [2024-11-20 07:23:20.846002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:16.408 qpair failed and we were unable to recover it. 
00:27:16.408 [2024-11-20 07:23:20.846201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.408 [2024-11-20 07:23:20.846233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:16.408 qpair failed and we were unable to recover it. 00:27:16.408 [2024-11-20 07:23:20.846426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.408 [2024-11-20 07:23:20.846457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:16.408 qpair failed and we were unable to recover it. 00:27:16.408 [2024-11-20 07:23:20.846662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.408 [2024-11-20 07:23:20.846695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:16.408 qpair failed and we were unable to recover it. 00:27:16.408 [2024-11-20 07:23:20.846985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.408 [2024-11-20 07:23:20.847018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:16.408 qpair failed and we were unable to recover it. 00:27:16.408 [2024-11-20 07:23:20.847295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.408 [2024-11-20 07:23:20.847327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:16.408 qpair failed and we were unable to recover it. 
00:27:16.408 [2024-11-20 07:23:20.847621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.408 [2024-11-20 07:23:20.847653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:16.408 qpair failed and we were unable to recover it. 00:27:16.408 [2024-11-20 07:23:20.847876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.408 [2024-11-20 07:23:20.847915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:16.408 qpair failed and we were unable to recover it. 00:27:16.408 [2024-11-20 07:23:20.848138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.408 [2024-11-20 07:23:20.848172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:16.408 qpair failed and we were unable to recover it. 00:27:16.408 [2024-11-20 07:23:20.848452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.408 [2024-11-20 07:23:20.848485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:16.408 qpair failed and we were unable to recover it. 00:27:16.408 [2024-11-20 07:23:20.848772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.408 [2024-11-20 07:23:20.848803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:16.408 qpair failed and we were unable to recover it. 
00:27:16.408 [2024-11-20 07:23:20.849079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.408 [2024-11-20 07:23:20.849112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:16.408 qpair failed and we were unable to recover it. 00:27:16.408 [2024-11-20 07:23:20.849329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.408 [2024-11-20 07:23:20.849362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:16.408 qpair failed and we were unable to recover it. 00:27:16.408 [2024-11-20 07:23:20.849620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.408 [2024-11-20 07:23:20.849652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:16.408 qpair failed and we were unable to recover it. 00:27:16.408 [2024-11-20 07:23:20.849905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.408 [2024-11-20 07:23:20.849937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:16.408 qpair failed and we were unable to recover it. 00:27:16.408 [2024-11-20 07:23:20.850095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.408 [2024-11-20 07:23:20.850126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:16.408 qpair failed and we were unable to recover it. 
00:27:16.409 [2024-11-20 07:23:20.850375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.409 [2024-11-20 07:23:20.850407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:16.409 qpair failed and we were unable to recover it.
[... the same three-line failure repeats continuously — posix.c:1054:posix_sock_create: connect() failed, errno = 111; nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it." — at sub-millisecond intervals from 2024-11-20 07:23:20.850375 through 07:23:20.881323 (log timestamps 00:27:16.409–00:27:16.412) ...]
00:27:16.412 [2024-11-20 07:23:20.881621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.412 [2024-11-20 07:23:20.881654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:16.412 qpair failed and we were unable to recover it. 00:27:16.412 [2024-11-20 07:23:20.881956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.412 [2024-11-20 07:23:20.881989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:16.412 qpair failed and we were unable to recover it. 00:27:16.412 [2024-11-20 07:23:20.882116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.412 [2024-11-20 07:23:20.882148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:16.412 qpair failed and we were unable to recover it. 00:27:16.412 [2024-11-20 07:23:20.882450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.412 [2024-11-20 07:23:20.882482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:16.412 qpair failed and we were unable to recover it. 00:27:16.412 [2024-11-20 07:23:20.882732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.412 [2024-11-20 07:23:20.882764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:16.412 qpair failed and we were unable to recover it. 
00:27:16.412 [2024-11-20 07:23:20.883074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.412 [2024-11-20 07:23:20.883108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:16.412 qpair failed and we were unable to recover it. 00:27:16.412 [2024-11-20 07:23:20.883368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.412 [2024-11-20 07:23:20.883400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:16.412 qpair failed and we were unable to recover it. 00:27:16.412 [2024-11-20 07:23:20.883619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.412 [2024-11-20 07:23:20.883652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:16.412 qpair failed and we were unable to recover it. 00:27:16.412 [2024-11-20 07:23:20.883913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.412 [2024-11-20 07:23:20.883945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:16.412 qpair failed and we were unable to recover it. 00:27:16.412 [2024-11-20 07:23:20.884126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.412 [2024-11-20 07:23:20.884159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:16.412 qpair failed and we were unable to recover it. 
00:27:16.413 [2024-11-20 07:23:20.884447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.413 [2024-11-20 07:23:20.884480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:16.413 qpair failed and we were unable to recover it. 00:27:16.413 [2024-11-20 07:23:20.884779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.413 [2024-11-20 07:23:20.884810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:16.413 qpair failed and we were unable to recover it. 00:27:16.413 [2024-11-20 07:23:20.885077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.413 [2024-11-20 07:23:20.885112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:16.413 qpair failed and we were unable to recover it. 00:27:16.413 [2024-11-20 07:23:20.885409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.413 [2024-11-20 07:23:20.885441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:16.413 qpair failed and we were unable to recover it. 00:27:16.413 [2024-11-20 07:23:20.885706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.413 [2024-11-20 07:23:20.885738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:16.413 qpair failed and we were unable to recover it. 
00:27:16.413 [2024-11-20 07:23:20.886013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.413 [2024-11-20 07:23:20.886047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:16.413 qpair failed and we were unable to recover it. 00:27:16.413 [2024-11-20 07:23:20.886253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.413 [2024-11-20 07:23:20.886285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:16.413 qpair failed and we were unable to recover it. 00:27:16.413 [2024-11-20 07:23:20.886540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.413 [2024-11-20 07:23:20.886573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:16.413 qpair failed and we were unable to recover it. 00:27:16.413 [2024-11-20 07:23:20.886825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.413 [2024-11-20 07:23:20.886860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:16.413 qpair failed and we were unable to recover it. 00:27:16.413 [2024-11-20 07:23:20.887010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.413 [2024-11-20 07:23:20.887044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:16.413 qpair failed and we were unable to recover it. 
00:27:16.413 [2024-11-20 07:23:20.887247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.413 [2024-11-20 07:23:20.887282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:16.413 qpair failed and we were unable to recover it. 00:27:16.413 [2024-11-20 07:23:20.887562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.413 [2024-11-20 07:23:20.887593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:16.413 qpair failed and we were unable to recover it. 00:27:16.413 [2024-11-20 07:23:20.887793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.413 [2024-11-20 07:23:20.887826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:16.413 qpair failed and we were unable to recover it. 00:27:16.413 [2024-11-20 07:23:20.888039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.413 [2024-11-20 07:23:20.888073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:16.413 qpair failed and we were unable to recover it. 00:27:16.413 [2024-11-20 07:23:20.888204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.413 [2024-11-20 07:23:20.888236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:16.413 qpair failed and we were unable to recover it. 
00:27:16.413 [2024-11-20 07:23:20.888535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.413 [2024-11-20 07:23:20.888567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:16.413 qpair failed and we were unable to recover it. 00:27:16.413 [2024-11-20 07:23:20.888861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.413 [2024-11-20 07:23:20.888894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:16.413 qpair failed and we were unable to recover it. 00:27:16.413 [2024-11-20 07:23:20.889045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.413 [2024-11-20 07:23:20.889078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:16.413 qpair failed and we were unable to recover it. 00:27:16.413 [2024-11-20 07:23:20.889330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.413 [2024-11-20 07:23:20.889362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:16.413 qpair failed and we were unable to recover it. 00:27:16.413 [2024-11-20 07:23:20.889515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.413 [2024-11-20 07:23:20.889548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:16.413 qpair failed and we were unable to recover it. 
00:27:16.413 [2024-11-20 07:23:20.889672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.413 [2024-11-20 07:23:20.889705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:16.413 qpair failed and we were unable to recover it. 00:27:16.413 [2024-11-20 07:23:20.889983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.413 [2024-11-20 07:23:20.890019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:16.413 qpair failed and we were unable to recover it. 00:27:16.413 [2024-11-20 07:23:20.890285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.413 [2024-11-20 07:23:20.890317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:16.413 qpair failed and we were unable to recover it. 00:27:16.413 [2024-11-20 07:23:20.890609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.413 [2024-11-20 07:23:20.890641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:16.413 qpair failed and we were unable to recover it. 00:27:16.413 [2024-11-20 07:23:20.890918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.413 [2024-11-20 07:23:20.890962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:16.413 qpair failed and we were unable to recover it. 
00:27:16.413 [2024-11-20 07:23:20.891113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.413 [2024-11-20 07:23:20.891146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:16.413 qpair failed and we were unable to recover it. 00:27:16.413 [2024-11-20 07:23:20.891418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.413 [2024-11-20 07:23:20.891455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:16.413 qpair failed and we were unable to recover it. 00:27:16.413 [2024-11-20 07:23:20.891664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.413 [2024-11-20 07:23:20.891697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:16.413 qpair failed and we were unable to recover it. 00:27:16.413 [2024-11-20 07:23:20.891964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.413 [2024-11-20 07:23:20.892000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:16.413 qpair failed and we were unable to recover it. 00:27:16.413 [2024-11-20 07:23:20.892186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.413 [2024-11-20 07:23:20.892219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:16.413 qpair failed and we were unable to recover it. 
00:27:16.413 [2024-11-20 07:23:20.892442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.413 [2024-11-20 07:23:20.892475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:16.413 qpair failed and we were unable to recover it. 00:27:16.413 [2024-11-20 07:23:20.892692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.414 [2024-11-20 07:23:20.892725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:16.414 qpair failed and we were unable to recover it. 00:27:16.414 [2024-11-20 07:23:20.893008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.414 [2024-11-20 07:23:20.893044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:16.414 qpair failed and we were unable to recover it. 00:27:16.414 [2024-11-20 07:23:20.893248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.414 [2024-11-20 07:23:20.893279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:16.414 qpair failed and we were unable to recover it. 00:27:16.414 [2024-11-20 07:23:20.893483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.414 [2024-11-20 07:23:20.893516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:16.414 qpair failed and we were unable to recover it. 
00:27:16.414 [2024-11-20 07:23:20.893792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.414 [2024-11-20 07:23:20.893823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:16.414 qpair failed and we were unable to recover it. 00:27:16.414 [2024-11-20 07:23:20.894029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.414 [2024-11-20 07:23:20.894065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:16.414 qpair failed and we were unable to recover it. 00:27:16.414 [2024-11-20 07:23:20.894341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.414 [2024-11-20 07:23:20.894373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:16.414 qpair failed and we were unable to recover it. 00:27:16.414 [2024-11-20 07:23:20.894570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.414 [2024-11-20 07:23:20.894603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:16.414 qpair failed and we were unable to recover it. 00:27:16.414 [2024-11-20 07:23:20.894865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.414 [2024-11-20 07:23:20.894899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:16.414 qpair failed and we were unable to recover it. 
00:27:16.414 [2024-11-20 07:23:20.895097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.414 [2024-11-20 07:23:20.895131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:16.414 qpair failed and we were unable to recover it. 00:27:16.414 [2024-11-20 07:23:20.895311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.414 [2024-11-20 07:23:20.895345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:16.414 qpair failed and we were unable to recover it. 00:27:16.414 [2024-11-20 07:23:20.895599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.414 [2024-11-20 07:23:20.895631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:16.414 qpair failed and we were unable to recover it. 00:27:16.414 [2024-11-20 07:23:20.895933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.414 [2024-11-20 07:23:20.895977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:16.414 qpair failed and we were unable to recover it. 00:27:16.414 [2024-11-20 07:23:20.896259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.414 [2024-11-20 07:23:20.896291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:16.414 qpair failed and we were unable to recover it. 
00:27:16.414 [2024-11-20 07:23:20.896483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.414 [2024-11-20 07:23:20.896516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:16.414 qpair failed and we were unable to recover it. 00:27:16.414 [2024-11-20 07:23:20.896779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.414 [2024-11-20 07:23:20.896812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:16.414 qpair failed and we were unable to recover it. 00:27:16.414 [2024-11-20 07:23:20.897109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.414 [2024-11-20 07:23:20.897145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:16.414 qpair failed and we were unable to recover it. 00:27:16.414 [2024-11-20 07:23:20.897476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.414 [2024-11-20 07:23:20.897508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:16.414 qpair failed and we were unable to recover it. 00:27:16.414 [2024-11-20 07:23:20.897790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.414 [2024-11-20 07:23:20.897836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:16.414 qpair failed and we were unable to recover it. 
00:27:16.414 [2024-11-20 07:23:20.898121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.414 [2024-11-20 07:23:20.898173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:16.414 qpair failed and we were unable to recover it. 00:27:16.414 [2024-11-20 07:23:20.898392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.414 [2024-11-20 07:23:20.898428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:16.414 qpair failed and we were unable to recover it. 00:27:16.414 [2024-11-20 07:23:20.898692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.414 [2024-11-20 07:23:20.898727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:16.414 qpair failed and we were unable to recover it. 00:27:16.414 [2024-11-20 07:23:20.898930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.414 [2024-11-20 07:23:20.898975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:16.414 qpair failed and we were unable to recover it. 00:27:16.414 [2024-11-20 07:23:20.899159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.414 [2024-11-20 07:23:20.899191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:16.414 qpair failed and we were unable to recover it. 
00:27:16.414 [2024-11-20 07:23:20.899468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.414 [2024-11-20 07:23:20.899503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:16.414 qpair failed and we were unable to recover it. 00:27:16.414 [2024-11-20 07:23:20.899703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.414 [2024-11-20 07:23:20.899736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:16.414 qpair failed and we were unable to recover it. 00:27:16.414 [2024-11-20 07:23:20.900040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.414 [2024-11-20 07:23:20.900080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:16.414 qpair failed and we were unable to recover it. 00:27:16.414 [2024-11-20 07:23:20.900330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.414 [2024-11-20 07:23:20.900381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:16.415 qpair failed and we were unable to recover it. 00:27:16.415 [2024-11-20 07:23:20.900677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.415 [2024-11-20 07:23:20.900718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:16.415 qpair failed and we were unable to recover it. 
00:27:16.415 [2024-11-20 07:23:20.901003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.415 [2024-11-20 07:23:20.901041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:16.415 qpair failed and we were unable to recover it. 00:27:16.415 [2024-11-20 07:23:20.901306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.415 [2024-11-20 07:23:20.901339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:16.415 qpair failed and we were unable to recover it. 00:27:16.415 [2024-11-20 07:23:20.901615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.415 [2024-11-20 07:23:20.901647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:16.415 qpair failed and we were unable to recover it. 00:27:16.415 [2024-11-20 07:23:20.901868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.415 [2024-11-20 07:23:20.901901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:16.415 qpair failed and we were unable to recover it. 00:27:16.415 [2024-11-20 07:23:20.902102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.415 [2024-11-20 07:23:20.902144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:16.415 qpair failed and we were unable to recover it. 
00:27:16.415 [2024-11-20 07:23:20.902427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.415 [2024-11-20 07:23:20.902479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:16.415 qpair failed and we were unable to recover it. 00:27:16.695 [2024-11-20 07:23:20.902763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.695 [2024-11-20 07:23:20.902810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:16.695 qpair failed and we were unable to recover it. 00:27:16.695 [2024-11-20 07:23:20.903091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.695 [2024-11-20 07:23:20.903126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:16.695 qpair failed and we were unable to recover it. 00:27:16.695 [2024-11-20 07:23:20.903329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.695 [2024-11-20 07:23:20.903361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:16.695 qpair failed and we were unable to recover it. 00:27:16.695 [2024-11-20 07:23:20.903626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.695 [2024-11-20 07:23:20.903659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:16.695 qpair failed and we were unable to recover it. 
00:27:16.695 [... 20 further identical failure triples (2024-11-20 07:23:20.903934 - 07:23:20.909515): connect() failed, errno = 111; sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it. ...]
00:27:16.696 [2024-11-20 07:23:20.909746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.696 [2024-11-20 07:23:20.909779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:16.696 qpair failed and we were unable to recover it.
00:27:16.696 [2024-11-20 07:23:20.909976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.696 [2024-11-20 07:23:20.910010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:16.696 qpair failed and we were unable to recover it.
00:27:16.696 [2024-11-20 07:23:20.910319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.696 [2024-11-20 07:23:20.910352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:16.696 qpair failed and we were unable to recover it.
00:27:16.696 [2024-11-20 07:23:20.910729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.696 [2024-11-20 07:23:20.910808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420
00:27:16.696 qpair failed and we were unable to recover it.
00:27:16.696 [2024-11-20 07:23:20.911098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.696 [2024-11-20 07:23:20.911139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420
00:27:16.696 qpair failed and we were unable to recover it.
00:27:16.696 [... 75 further identical failure triples (2024-11-20 07:23:20.911380 - 07:23:20.931505): connect() failed, errno = 111; sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it. ...]
00:27:16.698 [2024-11-20 07:23:20.931767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.698 [2024-11-20 07:23:20.931802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.698 qpair failed and we were unable to recover it. 00:27:16.698 [2024-11-20 07:23:20.932029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.698 [2024-11-20 07:23:20.932064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.698 qpair failed and we were unable to recover it. 00:27:16.698 [2024-11-20 07:23:20.932195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.698 [2024-11-20 07:23:20.932228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.698 qpair failed and we were unable to recover it. 00:27:16.698 [2024-11-20 07:23:20.932493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.698 [2024-11-20 07:23:20.932574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:16.698 qpair failed and we were unable to recover it. 00:27:16.698 [2024-11-20 07:23:20.932824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.698 [2024-11-20 07:23:20.932862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:16.698 qpair failed and we were unable to recover it. 
00:27:16.698 [2024-11-20 07:23:20.933143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.698 [2024-11-20 07:23:20.933181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:16.698 qpair failed and we were unable to recover it. 00:27:16.698 [2024-11-20 07:23:20.933441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.698 [2024-11-20 07:23:20.933475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:16.699 qpair failed and we were unable to recover it. 00:27:16.699 [2024-11-20 07:23:20.933761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.699 [2024-11-20 07:23:20.933793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:16.699 qpair failed and we were unable to recover it. 00:27:16.699 [2024-11-20 07:23:20.933929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.699 [2024-11-20 07:23:20.933977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:16.699 qpair failed and we were unable to recover it. 00:27:16.699 [2024-11-20 07:23:20.934259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.699 [2024-11-20 07:23:20.934291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:16.699 qpair failed and we were unable to recover it. 
00:27:16.699 [2024-11-20 07:23:20.934499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.699 [2024-11-20 07:23:20.934532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:16.699 qpair failed and we were unable to recover it. 00:27:16.699 [2024-11-20 07:23:20.934829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.699 [2024-11-20 07:23:20.934861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:16.699 qpair failed and we were unable to recover it. 00:27:16.699 [2024-11-20 07:23:20.935134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.699 [2024-11-20 07:23:20.935167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:16.699 qpair failed and we were unable to recover it. 00:27:16.699 [2024-11-20 07:23:20.935294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.699 [2024-11-20 07:23:20.935327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:16.699 qpair failed and we were unable to recover it. 00:27:16.699 [2024-11-20 07:23:20.935544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.699 [2024-11-20 07:23:20.935579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:16.699 qpair failed and we were unable to recover it. 
00:27:16.699 [2024-11-20 07:23:20.935805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.699 [2024-11-20 07:23:20.935838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:16.699 qpair failed and we were unable to recover it. 00:27:16.699 [2024-11-20 07:23:20.936120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.699 [2024-11-20 07:23:20.936156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:16.699 qpair failed and we were unable to recover it. 00:27:16.699 [2024-11-20 07:23:20.936440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.699 [2024-11-20 07:23:20.936472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:16.699 qpair failed and we were unable to recover it. 00:27:16.699 [2024-11-20 07:23:20.936748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.699 [2024-11-20 07:23:20.936780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:16.699 qpair failed and we were unable to recover it. 00:27:16.699 [2024-11-20 07:23:20.937068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.699 [2024-11-20 07:23:20.937102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:16.699 qpair failed and we were unable to recover it. 
00:27:16.699 [2024-11-20 07:23:20.937302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.699 [2024-11-20 07:23:20.937335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:16.699 qpair failed and we were unable to recover it. 00:27:16.699 [2024-11-20 07:23:20.937551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.699 [2024-11-20 07:23:20.937584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:16.699 qpair failed and we were unable to recover it. 00:27:16.699 [2024-11-20 07:23:20.937859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.699 [2024-11-20 07:23:20.937890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:16.699 qpair failed and we were unable to recover it. 00:27:16.699 [2024-11-20 07:23:20.938199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.699 [2024-11-20 07:23:20.938233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:16.699 qpair failed and we were unable to recover it. 00:27:16.699 [2024-11-20 07:23:20.938510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.699 [2024-11-20 07:23:20.938544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:16.699 qpair failed and we were unable to recover it. 
00:27:16.699 [2024-11-20 07:23:20.938824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.699 [2024-11-20 07:23:20.938857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:16.699 qpair failed and we were unable to recover it. 00:27:16.699 [2024-11-20 07:23:20.939136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.699 [2024-11-20 07:23:20.939169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:16.699 qpair failed and we were unable to recover it. 00:27:16.699 [2024-11-20 07:23:20.939362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.699 [2024-11-20 07:23:20.939393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:16.699 qpair failed and we were unable to recover it. 00:27:16.699 [2024-11-20 07:23:20.939651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.699 [2024-11-20 07:23:20.939684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:16.699 qpair failed and we were unable to recover it. 00:27:16.699 [2024-11-20 07:23:20.939968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.699 [2024-11-20 07:23:20.940003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:16.699 qpair failed and we were unable to recover it. 
00:27:16.699 [2024-11-20 07:23:20.940229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.699 [2024-11-20 07:23:20.940268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:16.699 qpair failed and we were unable to recover it. 00:27:16.699 [2024-11-20 07:23:20.940538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.699 [2024-11-20 07:23:20.940572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:16.699 qpair failed and we were unable to recover it. 00:27:16.699 [2024-11-20 07:23:20.940827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.699 [2024-11-20 07:23:20.940859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:16.699 qpair failed and we were unable to recover it. 00:27:16.699 [2024-11-20 07:23:20.941111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.699 [2024-11-20 07:23:20.941145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:16.699 qpair failed and we were unable to recover it. 00:27:16.699 [2024-11-20 07:23:20.941400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.699 [2024-11-20 07:23:20.941435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:16.699 qpair failed and we were unable to recover it. 
00:27:16.699 [2024-11-20 07:23:20.941661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.699 [2024-11-20 07:23:20.941694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:16.699 qpair failed and we were unable to recover it. 00:27:16.699 [2024-11-20 07:23:20.941875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.699 [2024-11-20 07:23:20.941909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:16.699 qpair failed and we were unable to recover it. 00:27:16.699 [2024-11-20 07:23:20.942176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.699 [2024-11-20 07:23:20.942208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:16.699 qpair failed and we were unable to recover it. 00:27:16.699 [2024-11-20 07:23:20.942435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.700 [2024-11-20 07:23:20.942466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:16.700 qpair failed and we were unable to recover it. 00:27:16.700 [2024-11-20 07:23:20.942652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.700 [2024-11-20 07:23:20.942685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:16.700 qpair failed and we were unable to recover it. 
00:27:16.700 [2024-11-20 07:23:20.942798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.700 [2024-11-20 07:23:20.942831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:16.700 qpair failed and we were unable to recover it. 00:27:16.700 [2024-11-20 07:23:20.943027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.700 [2024-11-20 07:23:20.943062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:16.700 qpair failed and we were unable to recover it. 00:27:16.700 [2024-11-20 07:23:20.943336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.700 [2024-11-20 07:23:20.943371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:16.700 qpair failed and we were unable to recover it. 00:27:16.700 [2024-11-20 07:23:20.943555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.700 [2024-11-20 07:23:20.943587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:16.700 qpair failed and we were unable to recover it. 00:27:16.700 [2024-11-20 07:23:20.943778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.700 [2024-11-20 07:23:20.943812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:16.700 qpair failed and we were unable to recover it. 
00:27:16.700 [2024-11-20 07:23:20.944041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.700 [2024-11-20 07:23:20.944076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:16.700 qpair failed and we were unable to recover it. 00:27:16.700 [2024-11-20 07:23:20.944331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.700 [2024-11-20 07:23:20.944364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:16.700 qpair failed and we were unable to recover it. 00:27:16.700 [2024-11-20 07:23:20.944662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.700 [2024-11-20 07:23:20.944695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:16.700 qpair failed and we were unable to recover it. 00:27:16.700 [2024-11-20 07:23:20.944972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.700 [2024-11-20 07:23:20.945007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:16.700 qpair failed and we were unable to recover it. 00:27:16.700 [2024-11-20 07:23:20.945274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.700 [2024-11-20 07:23:20.945308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:16.700 qpair failed and we were unable to recover it. 
00:27:16.700 [2024-11-20 07:23:20.945440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.700 [2024-11-20 07:23:20.945472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:16.700 qpair failed and we were unable to recover it. 00:27:16.700 [2024-11-20 07:23:20.945743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.700 [2024-11-20 07:23:20.945775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:16.700 qpair failed and we were unable to recover it. 00:27:16.700 [2024-11-20 07:23:20.946027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.700 [2024-11-20 07:23:20.946061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:16.700 qpair failed and we were unable to recover it. 00:27:16.700 [2024-11-20 07:23:20.946328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.700 [2024-11-20 07:23:20.946360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:16.700 qpair failed and we were unable to recover it. 00:27:16.700 [2024-11-20 07:23:20.946641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.700 [2024-11-20 07:23:20.946673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:16.700 qpair failed and we were unable to recover it. 
00:27:16.700 [2024-11-20 07:23:20.946888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.700 [2024-11-20 07:23:20.946920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:16.700 qpair failed and we were unable to recover it. 00:27:16.700 [2024-11-20 07:23:20.950178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.700 [2024-11-20 07:23:20.950216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:16.700 qpair failed and we were unable to recover it. 00:27:16.700 [2024-11-20 07:23:20.950475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.700 [2024-11-20 07:23:20.950522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:16.700 qpair failed and we were unable to recover it. 00:27:16.700 [2024-11-20 07:23:20.950785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.700 [2024-11-20 07:23:20.950817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:16.700 qpair failed and we were unable to recover it. 00:27:16.700 [2024-11-20 07:23:20.951026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.700 [2024-11-20 07:23:20.951060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:16.700 qpair failed and we were unable to recover it. 
00:27:16.700 [2024-11-20 07:23:20.951335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.700 [2024-11-20 07:23:20.951368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:16.700 qpair failed and we were unable to recover it. 00:27:16.700 [2024-11-20 07:23:20.951575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.700 [2024-11-20 07:23:20.951608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:16.700 qpair failed and we were unable to recover it. 00:27:16.700 [2024-11-20 07:23:20.951878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.700 [2024-11-20 07:23:20.951909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:16.700 qpair failed and we were unable to recover it. 00:27:16.700 [2024-11-20 07:23:20.952201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.700 [2024-11-20 07:23:20.952234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:16.700 qpair failed and we were unable to recover it. 00:27:16.700 [2024-11-20 07:23:20.952428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.700 [2024-11-20 07:23:20.952460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:16.700 qpair failed and we were unable to recover it. 
00:27:16.700 [2024-11-20 07:23:20.952733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.700 [2024-11-20 07:23:20.952763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:16.701 qpair failed and we were unable to recover it. 00:27:16.701 [2024-11-20 07:23:20.952988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.701 [2024-11-20 07:23:20.953022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:16.701 qpair failed and we were unable to recover it. 00:27:16.701 [2024-11-20 07:23:20.953279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.701 [2024-11-20 07:23:20.953311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:16.701 qpair failed and we were unable to recover it. 00:27:16.701 [2024-11-20 07:23:20.953607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.701 [2024-11-20 07:23:20.953640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:16.701 qpair failed and we were unable to recover it. 00:27:16.701 [2024-11-20 07:23:20.953938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.701 [2024-11-20 07:23:20.953984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:16.701 qpair failed and we were unable to recover it. 
00:27:16.701 [2024-11-20 07:23:20.954236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.701 [2024-11-20 07:23:20.954267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:16.701 qpair failed and we were unable to recover it. 00:27:16.701 [2024-11-20 07:23:20.954457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.701 [2024-11-20 07:23:20.954490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:16.701 qpair failed and we were unable to recover it. 00:27:16.701 [2024-11-20 07:23:20.954741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.701 [2024-11-20 07:23:20.954773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:16.701 qpair failed and we were unable to recover it. 00:27:16.701 [2024-11-20 07:23:20.954901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.701 [2024-11-20 07:23:20.954935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:16.701 qpair failed and we were unable to recover it. 00:27:16.701 [2024-11-20 07:23:20.955214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.701 [2024-11-20 07:23:20.955247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:16.701 qpair failed and we were unable to recover it. 
00:27:16.701 [2024-11-20 07:23:20.955505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.701 [2024-11-20 07:23:20.955539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:16.701 qpair failed and we were unable to recover it. 00:27:16.701 [2024-11-20 07:23:20.955791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.701 [2024-11-20 07:23:20.955823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:16.701 qpair failed and we were unable to recover it. 00:27:16.701 [2024-11-20 07:23:20.956007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.701 [2024-11-20 07:23:20.956040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:16.701 qpair failed and we were unable to recover it. 00:27:16.701 [2024-11-20 07:23:20.956241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.701 [2024-11-20 07:23:20.956275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:16.701 qpair failed and we were unable to recover it. 00:27:16.701 [2024-11-20 07:23:20.956552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.701 [2024-11-20 07:23:20.956584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:16.701 qpair failed and we were unable to recover it. 
00:27:16.701 [2024-11-20 07:23:20.956887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.701 [2024-11-20 07:23:20.956921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:16.701 qpair failed and we were unable to recover it. 00:27:16.701 [2024-11-20 07:23:20.957046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.701 [2024-11-20 07:23:20.957080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:16.701 qpair failed and we were unable to recover it. 00:27:16.701 [2024-11-20 07:23:20.957265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.701 [2024-11-20 07:23:20.957298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:16.701 qpair failed and we were unable to recover it. 00:27:16.701 [2024-11-20 07:23:20.957577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.701 [2024-11-20 07:23:20.957611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:16.701 qpair failed and we were unable to recover it. 00:27:16.701 [2024-11-20 07:23:20.957805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.701 [2024-11-20 07:23:20.957845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:16.701 qpair failed and we were unable to recover it. 
00:27:16.701 [2024-11-20 07:23:20.958049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.701 [2024-11-20 07:23:20.958081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:16.701 qpair failed and we were unable to recover it.
[... identical posix_sock_create / nvme_tcp_qpair_connect_sock error triplet for tqpair=0x101bba0 repeated through 2024-11-20 07:23:20.961024 ...]
00:27:16.702 [2024-11-20 07:23:20.961356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.702 [2024-11-20 07:23:20.961434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:16.702 qpair failed and we were unable to recover it.
[... identical error triplet for tqpair=0x7f0e44000b90 repeated through 2024-11-20 07:23:20.987571 ...]
00:27:16.705 [2024-11-20 07:23:20.987754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.705 [2024-11-20 07:23:20.987787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.705 qpair failed and we were unable to recover it. 00:27:16.705 [2024-11-20 07:23:20.987998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.705 [2024-11-20 07:23:20.988032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.705 qpair failed and we were unable to recover it. 00:27:16.705 [2024-11-20 07:23:20.988224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.705 [2024-11-20 07:23:20.988257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.705 qpair failed and we were unable to recover it. 00:27:16.705 [2024-11-20 07:23:20.988398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.705 [2024-11-20 07:23:20.988431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.705 qpair failed and we were unable to recover it. 00:27:16.705 [2024-11-20 07:23:20.988681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.705 [2024-11-20 07:23:20.988714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.705 qpair failed and we were unable to recover it. 
00:27:16.705 [2024-11-20 07:23:20.988990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.705 [2024-11-20 07:23:20.989024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.705 qpair failed and we were unable to recover it. 00:27:16.705 [2024-11-20 07:23:20.989324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.705 [2024-11-20 07:23:20.989357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.705 qpair failed and we were unable to recover it. 00:27:16.705 [2024-11-20 07:23:20.989541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.705 [2024-11-20 07:23:20.989576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.705 qpair failed and we were unable to recover it. 00:27:16.705 [2024-11-20 07:23:20.989809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.705 [2024-11-20 07:23:20.989841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.705 qpair failed and we were unable to recover it. 00:27:16.705 [2024-11-20 07:23:20.990025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.705 [2024-11-20 07:23:20.990059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.705 qpair failed and we were unable to recover it. 
00:27:16.705 [2024-11-20 07:23:20.990285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.705 [2024-11-20 07:23:20.990319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.705 qpair failed and we were unable to recover it. 00:27:16.705 [2024-11-20 07:23:20.990519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.705 [2024-11-20 07:23:20.990554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.705 qpair failed and we were unable to recover it. 00:27:16.705 [2024-11-20 07:23:20.990747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.705 [2024-11-20 07:23:20.990780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.705 qpair failed and we were unable to recover it. 00:27:16.705 [2024-11-20 07:23:20.990983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.705 [2024-11-20 07:23:20.991018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.705 qpair failed and we were unable to recover it. 00:27:16.705 [2024-11-20 07:23:20.991224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.705 [2024-11-20 07:23:20.991256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.705 qpair failed and we were unable to recover it. 
00:27:16.705 [2024-11-20 07:23:20.991482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.705 [2024-11-20 07:23:20.991514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.705 qpair failed and we were unable to recover it. 00:27:16.705 [2024-11-20 07:23:20.991712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.705 [2024-11-20 07:23:20.991745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.705 qpair failed and we were unable to recover it. 00:27:16.705 [2024-11-20 07:23:20.992002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.705 [2024-11-20 07:23:20.992037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.705 qpair failed and we were unable to recover it. 00:27:16.705 [2024-11-20 07:23:20.992312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.705 [2024-11-20 07:23:20.992347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.705 qpair failed and we were unable to recover it. 00:27:16.705 [2024-11-20 07:23:20.992617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.705 [2024-11-20 07:23:20.992650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.705 qpair failed and we were unable to recover it. 
00:27:16.705 [2024-11-20 07:23:20.992784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.705 [2024-11-20 07:23:20.992816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.705 qpair failed and we were unable to recover it. 00:27:16.705 [2024-11-20 07:23:20.993124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.705 [2024-11-20 07:23:20.993158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.705 qpair failed and we were unable to recover it. 00:27:16.705 [2024-11-20 07:23:20.993415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.706 [2024-11-20 07:23:20.993450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.706 qpair failed and we were unable to recover it. 00:27:16.706 [2024-11-20 07:23:20.993655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.706 [2024-11-20 07:23:20.993687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.706 qpair failed and we were unable to recover it. 00:27:16.706 [2024-11-20 07:23:20.993864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.706 [2024-11-20 07:23:20.993898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.706 qpair failed and we were unable to recover it. 
00:27:16.706 [2024-11-20 07:23:20.994102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.706 [2024-11-20 07:23:20.994136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.706 qpair failed and we were unable to recover it. 00:27:16.706 [2024-11-20 07:23:20.994269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.706 [2024-11-20 07:23:20.994303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.706 qpair failed and we were unable to recover it. 00:27:16.706 [2024-11-20 07:23:20.994582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.706 [2024-11-20 07:23:20.994615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.706 qpair failed and we were unable to recover it. 00:27:16.706 [2024-11-20 07:23:20.994813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.706 [2024-11-20 07:23:20.994845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.706 qpair failed and we were unable to recover it. 00:27:16.706 [2024-11-20 07:23:20.995054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.706 [2024-11-20 07:23:20.995090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.706 qpair failed and we were unable to recover it. 
00:27:16.706 [2024-11-20 07:23:20.995321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.706 [2024-11-20 07:23:20.995354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.706 qpair failed and we were unable to recover it. 00:27:16.706 [2024-11-20 07:23:20.995560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.706 [2024-11-20 07:23:20.995599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.706 qpair failed and we were unable to recover it. 00:27:16.706 [2024-11-20 07:23:20.995874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.706 [2024-11-20 07:23:20.995906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.706 qpair failed and we were unable to recover it. 00:27:16.706 [2024-11-20 07:23:20.996100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.706 [2024-11-20 07:23:20.996134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.706 qpair failed and we were unable to recover it. 00:27:16.706 [2024-11-20 07:23:20.996390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.706 [2024-11-20 07:23:20.996422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.706 qpair failed and we were unable to recover it. 
00:27:16.706 [2024-11-20 07:23:20.996648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.706 [2024-11-20 07:23:20.996680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.706 qpair failed and we were unable to recover it. 00:27:16.706 [2024-11-20 07:23:20.996808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.706 [2024-11-20 07:23:20.996841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.706 qpair failed and we were unable to recover it. 00:27:16.706 [2024-11-20 07:23:20.997094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.706 [2024-11-20 07:23:20.997128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.706 qpair failed and we were unable to recover it. 00:27:16.706 [2024-11-20 07:23:20.997312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.706 [2024-11-20 07:23:20.997345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.706 qpair failed and we were unable to recover it. 00:27:16.706 [2024-11-20 07:23:20.997550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.706 [2024-11-20 07:23:20.997583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.706 qpair failed and we were unable to recover it. 
00:27:16.706 [2024-11-20 07:23:20.997801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.706 [2024-11-20 07:23:20.997836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.706 qpair failed and we were unable to recover it. 00:27:16.706 [2024-11-20 07:23:20.998032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.706 [2024-11-20 07:23:20.998067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.706 qpair failed and we were unable to recover it. 00:27:16.706 [2024-11-20 07:23:20.998293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.706 [2024-11-20 07:23:20.998328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.706 qpair failed and we were unable to recover it. 00:27:16.706 [2024-11-20 07:23:20.998486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.706 [2024-11-20 07:23:20.998520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.706 qpair failed and we were unable to recover it. 00:27:16.706 [2024-11-20 07:23:20.998724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.706 [2024-11-20 07:23:20.998757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.706 qpair failed and we were unable to recover it. 
00:27:16.706 [2024-11-20 07:23:20.998964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.706 [2024-11-20 07:23:20.998998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.706 qpair failed and we were unable to recover it. 00:27:16.706 [2024-11-20 07:23:20.999120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.706 [2024-11-20 07:23:20.999154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.706 qpair failed and we were unable to recover it. 00:27:16.706 [2024-11-20 07:23:20.999286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.706 [2024-11-20 07:23:20.999318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.706 qpair failed and we were unable to recover it. 00:27:16.706 [2024-11-20 07:23:20.999514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.707 [2024-11-20 07:23:20.999547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.707 qpair failed and we were unable to recover it. 00:27:16.707 [2024-11-20 07:23:20.999670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.707 [2024-11-20 07:23:20.999702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.707 qpair failed and we were unable to recover it. 
00:27:16.707 [2024-11-20 07:23:20.999904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.707 [2024-11-20 07:23:20.999939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.707 qpair failed and we were unable to recover it. 00:27:16.707 [2024-11-20 07:23:21.000144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.707 [2024-11-20 07:23:21.000178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.707 qpair failed and we were unable to recover it. 00:27:16.707 [2024-11-20 07:23:21.000454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.707 [2024-11-20 07:23:21.000486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.707 qpair failed and we were unable to recover it. 00:27:16.707 [2024-11-20 07:23:21.000692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.707 [2024-11-20 07:23:21.000725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.707 qpair failed and we were unable to recover it. 00:27:16.707 [2024-11-20 07:23:21.000915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.707 [2024-11-20 07:23:21.000955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.707 qpair failed and we were unable to recover it. 
00:27:16.707 [2024-11-20 07:23:21.001153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.707 [2024-11-20 07:23:21.001189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.707 qpair failed and we were unable to recover it. 00:27:16.707 [2024-11-20 07:23:21.001394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.707 [2024-11-20 07:23:21.001427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.707 qpair failed and we were unable to recover it. 00:27:16.707 [2024-11-20 07:23:21.001635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.707 [2024-11-20 07:23:21.001667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.707 qpair failed and we were unable to recover it. 00:27:16.707 [2024-11-20 07:23:21.001963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.707 [2024-11-20 07:23:21.001998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.707 qpair failed and we were unable to recover it. 00:27:16.707 [2024-11-20 07:23:21.002134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.707 [2024-11-20 07:23:21.002168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.707 qpair failed and we were unable to recover it. 
00:27:16.707 [2024-11-20 07:23:21.002282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.707 [2024-11-20 07:23:21.002316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.707 qpair failed and we were unable to recover it. 00:27:16.707 [2024-11-20 07:23:21.002526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.707 [2024-11-20 07:23:21.002561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.707 qpair failed and we were unable to recover it. 00:27:16.707 [2024-11-20 07:23:21.002691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.707 [2024-11-20 07:23:21.002734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.707 qpair failed and we were unable to recover it. 00:27:16.707 [2024-11-20 07:23:21.003006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.707 [2024-11-20 07:23:21.003040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.707 qpair failed and we were unable to recover it. 00:27:16.707 [2024-11-20 07:23:21.003229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.707 [2024-11-20 07:23:21.003260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.707 qpair failed and we were unable to recover it. 
00:27:16.707 [2024-11-20 07:23:21.003387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.707 [2024-11-20 07:23:21.003418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.707 qpair failed and we were unable to recover it. 00:27:16.707 [2024-11-20 07:23:21.003685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.707 [2024-11-20 07:23:21.003718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.707 qpair failed and we were unable to recover it. 00:27:16.707 [2024-11-20 07:23:21.003912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.707 [2024-11-20 07:23:21.003943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.707 qpair failed and we were unable to recover it. 00:27:16.707 [2024-11-20 07:23:21.004147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.707 [2024-11-20 07:23:21.004182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.707 qpair failed and we were unable to recover it. 00:27:16.707 [2024-11-20 07:23:21.004377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.707 [2024-11-20 07:23:21.004410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.707 qpair failed and we were unable to recover it. 
00:27:16.707 [2024-11-20 07:23:21.004686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.707 [2024-11-20 07:23:21.004717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.707 qpair failed and we were unable to recover it. 00:27:16.707 [2024-11-20 07:23:21.004906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.707 [2024-11-20 07:23:21.004939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.707 qpair failed and we were unable to recover it. 00:27:16.707 [2024-11-20 07:23:21.005148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.707 [2024-11-20 07:23:21.005182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.707 qpair failed and we were unable to recover it. 00:27:16.707 [2024-11-20 07:23:21.005382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.707 [2024-11-20 07:23:21.005414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.707 qpair failed and we were unable to recover it. 00:27:16.707 [2024-11-20 07:23:21.005706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.707 [2024-11-20 07:23:21.005739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.707 qpair failed and we were unable to recover it. 
00:27:16.707 [2024-11-20 07:23:21.005935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.707 [2024-11-20 07:23:21.005975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.707 qpair failed and we were unable to recover it. 00:27:16.707 [2024-11-20 07:23:21.006260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.707 [2024-11-20 07:23:21.006292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.707 qpair failed and we were unable to recover it. 00:27:16.707 [2024-11-20 07:23:21.006407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.707 [2024-11-20 07:23:21.006440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.707 qpair failed and we were unable to recover it. 00:27:16.707 [2024-11-20 07:23:21.006692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.707 [2024-11-20 07:23:21.006725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.707 qpair failed and we were unable to recover it. 00:27:16.707 [2024-11-20 07:23:21.006905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.708 [2024-11-20 07:23:21.006936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.708 qpair failed and we were unable to recover it. 
00:27:16.708 [2024-11-20 07:23:21.007240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.708 [2024-11-20 07:23:21.007272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.708 qpair failed and we were unable to recover it. 00:27:16.708 [2024-11-20 07:23:21.007562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.708 [2024-11-20 07:23:21.007595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.708 qpair failed and we were unable to recover it. 00:27:16.708 [2024-11-20 07:23:21.007823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.708 [2024-11-20 07:23:21.007855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.708 qpair failed and we were unable to recover it. 00:27:16.708 [2024-11-20 07:23:21.008102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.708 [2024-11-20 07:23:21.008136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.708 qpair failed and we were unable to recover it. 00:27:16.708 [2024-11-20 07:23:21.008332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.708 [2024-11-20 07:23:21.008365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.708 qpair failed and we were unable to recover it. 
00:27:16.708 [2024-11-20 07:23:21.008574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.708 [2024-11-20 07:23:21.008606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.708 qpair failed and we were unable to recover it. 00:27:16.708 [2024-11-20 07:23:21.008832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.708 [2024-11-20 07:23:21.008865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.708 qpair failed and we were unable to recover it. 00:27:16.708 [2024-11-20 07:23:21.008987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.708 [2024-11-20 07:23:21.009023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.708 qpair failed and we were unable to recover it. 00:27:16.708 [2024-11-20 07:23:21.009139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.708 [2024-11-20 07:23:21.009171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.708 qpair failed and we were unable to recover it. 00:27:16.708 [2024-11-20 07:23:21.009307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.708 [2024-11-20 07:23:21.009340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.708 qpair failed and we were unable to recover it. 
00:27:16.708 [2024-11-20 07:23:21.009550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.708 [2024-11-20 07:23:21.009583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.708 qpair failed and we were unable to recover it. 00:27:16.708 [2024-11-20 07:23:21.009802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.708 [2024-11-20 07:23:21.009834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.708 qpair failed and we were unable to recover it. 00:27:16.708 [2024-11-20 07:23:21.010018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.708 [2024-11-20 07:23:21.010051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.708 qpair failed and we were unable to recover it. 00:27:16.708 [2024-11-20 07:23:21.010199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.708 [2024-11-20 07:23:21.010232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.708 qpair failed and we were unable to recover it. 00:27:16.708 [2024-11-20 07:23:21.010367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.708 [2024-11-20 07:23:21.010400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.708 qpair failed and we were unable to recover it. 
00:27:16.708 [2024-11-20 07:23:21.010537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.708 [2024-11-20 07:23:21.010570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.708 qpair failed and we were unable to recover it. 00:27:16.708 [2024-11-20 07:23:21.010765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.708 [2024-11-20 07:23:21.010800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.708 qpair failed and we were unable to recover it. 00:27:16.708 [2024-11-20 07:23:21.011069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.708 [2024-11-20 07:23:21.011104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.708 qpair failed and we were unable to recover it. 00:27:16.708 [2024-11-20 07:23:21.011360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.708 [2024-11-20 07:23:21.011400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.708 qpair failed and we were unable to recover it. 00:27:16.708 [2024-11-20 07:23:21.011707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.708 [2024-11-20 07:23:21.011742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.708 qpair failed and we were unable to recover it. 
00:27:16.708 [2024-11-20 07:23:21.011945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.708 [2024-11-20 07:23:21.011996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.708 qpair failed and we were unable to recover it. 00:27:16.708 [2024-11-20 07:23:21.012199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.708 [2024-11-20 07:23:21.012231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.708 qpair failed and we were unable to recover it. 00:27:16.708 [2024-11-20 07:23:21.012450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.708 [2024-11-20 07:23:21.012484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.708 qpair failed and we were unable to recover it. 00:27:16.708 [2024-11-20 07:23:21.012683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.708 [2024-11-20 07:23:21.012714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.708 qpair failed and we were unable to recover it. 00:27:16.708 [2024-11-20 07:23:21.012842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.708 [2024-11-20 07:23:21.012876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.708 qpair failed and we were unable to recover it. 
00:27:16.708 [2024-11-20 07:23:21.013153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.708 [2024-11-20 07:23:21.013188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.708 qpair failed and we were unable to recover it. 00:27:16.708 [2024-11-20 07:23:21.013387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.708 [2024-11-20 07:23:21.013419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.708 qpair failed and we were unable to recover it. 00:27:16.708 [2024-11-20 07:23:21.013650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.708 [2024-11-20 07:23:21.013681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.708 qpair failed and we were unable to recover it. 00:27:16.708 [2024-11-20 07:23:21.013882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.708 [2024-11-20 07:23:21.013915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.708 qpair failed and we were unable to recover it. 00:27:16.708 [2024-11-20 07:23:21.014078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.708 [2024-11-20 07:23:21.014111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.708 qpair failed and we were unable to recover it. 
00:27:16.708 [2024-11-20 07:23:21.014300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.708 [2024-11-20 07:23:21.014331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.708 qpair failed and we were unable to recover it. 00:27:16.709 [2024-11-20 07:23:21.014535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.709 [2024-11-20 07:23:21.014567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.709 qpair failed and we were unable to recover it. 00:27:16.709 [2024-11-20 07:23:21.014819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.709 [2024-11-20 07:23:21.014853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.709 qpair failed and we were unable to recover it. 00:27:16.709 [2024-11-20 07:23:21.015142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.709 [2024-11-20 07:23:21.015176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.709 qpair failed and we were unable to recover it. 00:27:16.709 [2024-11-20 07:23:21.015504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.709 [2024-11-20 07:23:21.015539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.709 qpair failed and we were unable to recover it. 
00:27:16.709 [2024-11-20 07:23:21.015755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.709 [2024-11-20 07:23:21.015789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.709 qpair failed and we were unable to recover it. 00:27:16.709 [2024-11-20 07:23:21.016080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.709 [2024-11-20 07:23:21.016113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.709 qpair failed and we were unable to recover it. 00:27:16.709 [2024-11-20 07:23:21.016315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.709 [2024-11-20 07:23:21.016350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.709 qpair failed and we were unable to recover it. 00:27:16.709 [2024-11-20 07:23:21.016555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.709 [2024-11-20 07:23:21.016587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.709 qpair failed and we were unable to recover it. 00:27:16.709 [2024-11-20 07:23:21.016859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.709 [2024-11-20 07:23:21.016895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.709 qpair failed and we were unable to recover it. 
00:27:16.709 [2024-11-20 07:23:21.017118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.709 [2024-11-20 07:23:21.017152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.709 qpair failed and we were unable to recover it. 00:27:16.709 [2024-11-20 07:23:21.017369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.709 [2024-11-20 07:23:21.017403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.709 qpair failed and we were unable to recover it. 00:27:16.709 [2024-11-20 07:23:21.017610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.709 [2024-11-20 07:23:21.017643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.709 qpair failed and we were unable to recover it. 00:27:16.709 [2024-11-20 07:23:21.017825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.709 [2024-11-20 07:23:21.017859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.709 qpair failed and we were unable to recover it. 00:27:16.709 [2024-11-20 07:23:21.018147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.709 [2024-11-20 07:23:21.018182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.709 qpair failed and we were unable to recover it. 
00:27:16.709 [2024-11-20 07:23:21.018459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.709 [2024-11-20 07:23:21.018492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.709 qpair failed and we were unable to recover it. 00:27:16.709 [2024-11-20 07:23:21.018788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.709 [2024-11-20 07:23:21.018819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.709 qpair failed and we were unable to recover it. 00:27:16.709 [2024-11-20 07:23:21.019091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.709 [2024-11-20 07:23:21.019125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.709 qpair failed and we were unable to recover it. 00:27:16.709 [2024-11-20 07:23:21.019360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.709 [2024-11-20 07:23:21.019394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.709 qpair failed and we were unable to recover it. 00:27:16.709 [2024-11-20 07:23:21.019574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.709 [2024-11-20 07:23:21.019607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.709 qpair failed and we were unable to recover it. 
00:27:16.709 [2024-11-20 07:23:21.019805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.709 [2024-11-20 07:23:21.019839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.709 qpair failed and we were unable to recover it. 00:27:16.709 [2024-11-20 07:23:21.020160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.709 [2024-11-20 07:23:21.020194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.709 qpair failed and we were unable to recover it. 00:27:16.709 [2024-11-20 07:23:21.020395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.709 [2024-11-20 07:23:21.020429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.709 qpair failed and we were unable to recover it. 00:27:16.709 [2024-11-20 07:23:21.020648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.709 [2024-11-20 07:23:21.020680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.709 qpair failed and we were unable to recover it. 00:27:16.709 [2024-11-20 07:23:21.020887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.709 [2024-11-20 07:23:21.020920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.709 qpair failed and we were unable to recover it. 
00:27:16.709 [2024-11-20 07:23:21.021139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.709 [2024-11-20 07:23:21.021173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.709 qpair failed and we were unable to recover it. 00:27:16.709 [2024-11-20 07:23:21.021324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.709 [2024-11-20 07:23:21.021356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.709 qpair failed and we were unable to recover it. 00:27:16.709 [2024-11-20 07:23:21.021551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.709 [2024-11-20 07:23:21.021583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.709 qpair failed and we were unable to recover it. 00:27:16.709 [2024-11-20 07:23:21.021836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.709 [2024-11-20 07:23:21.021874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.709 qpair failed and we were unable to recover it. 00:27:16.709 [2024-11-20 07:23:21.022060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.709 [2024-11-20 07:23:21.022113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.709 qpair failed and we were unable to recover it. 
00:27:16.709 [2024-11-20 07:23:21.022337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.709 [2024-11-20 07:23:21.022369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.709 qpair failed and we were unable to recover it. 00:27:16.709 [2024-11-20 07:23:21.022482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.709 [2024-11-20 07:23:21.022514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.709 qpair failed and we were unable to recover it. 00:27:16.709 [2024-11-20 07:23:21.022723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.710 [2024-11-20 07:23:21.022755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.710 qpair failed and we were unable to recover it. 00:27:16.710 [2024-11-20 07:23:21.022932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.710 [2024-11-20 07:23:21.022987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.710 qpair failed and we were unable to recover it. 00:27:16.710 [2024-11-20 07:23:21.023131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.710 [2024-11-20 07:23:21.023163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.710 qpair failed and we were unable to recover it. 
00:27:16.710 [2024-11-20 07:23:21.023389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.710 [2024-11-20 07:23:21.023420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.710 qpair failed and we were unable to recover it. 00:27:16.710 [2024-11-20 07:23:21.023715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.710 [2024-11-20 07:23:21.023747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.710 qpair failed and we were unable to recover it. 00:27:16.710 [2024-11-20 07:23:21.023941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.710 [2024-11-20 07:23:21.023986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.710 qpair failed and we were unable to recover it. 00:27:16.710 [2024-11-20 07:23:21.024191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.710 [2024-11-20 07:23:21.024226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.710 qpair failed and we were unable to recover it. 00:27:16.710 [2024-11-20 07:23:21.024478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.710 [2024-11-20 07:23:21.024512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.710 qpair failed and we were unable to recover it. 
00:27:16.710 [2024-11-20 07:23:21.024708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.710 [2024-11-20 07:23:21.024742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.710 qpair failed and we were unable to recover it. 00:27:16.710 [2024-11-20 07:23:21.024886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.710 [2024-11-20 07:23:21.024918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.710 qpair failed and we were unable to recover it. 00:27:16.710 [2024-11-20 07:23:21.025228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.710 [2024-11-20 07:23:21.025261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.710 qpair failed and we were unable to recover it. 00:27:16.710 [2024-11-20 07:23:21.025387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.710 [2024-11-20 07:23:21.025419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.710 qpair failed and we were unable to recover it. 00:27:16.710 [2024-11-20 07:23:21.025624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.710 [2024-11-20 07:23:21.025658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.710 qpair failed and we were unable to recover it. 
00:27:16.710 [2024-11-20 07:23:21.025920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.710 [2024-11-20 07:23:21.025963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.710 qpair failed and we were unable to recover it. 00:27:16.710 [2024-11-20 07:23:21.026116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.710 [2024-11-20 07:23:21.026149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.710 qpair failed and we were unable to recover it. 00:27:16.710 [2024-11-20 07:23:21.026345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.710 [2024-11-20 07:23:21.026376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.710 qpair failed and we were unable to recover it. 00:27:16.710 [2024-11-20 07:23:21.026509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.710 [2024-11-20 07:23:21.026541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.710 qpair failed and we were unable to recover it. 00:27:16.710 [2024-11-20 07:23:21.026661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.710 [2024-11-20 07:23:21.026692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.710 qpair failed and we were unable to recover it. 
00:27:16.710 [2024-11-20 07:23:21.026835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.710 [2024-11-20 07:23:21.026868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.710 qpair failed and we were unable to recover it. 00:27:16.710 [2024-11-20 07:23:21.026999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.710 [2024-11-20 07:23:21.027033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.710 qpair failed and we were unable to recover it. 00:27:16.710 [2024-11-20 07:23:21.027220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.710 [2024-11-20 07:23:21.027250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.710 qpair failed and we were unable to recover it. 00:27:16.710 [2024-11-20 07:23:21.027375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.710 [2024-11-20 07:23:21.027406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.710 qpair failed and we were unable to recover it. 00:27:16.710 [2024-11-20 07:23:21.027681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.710 [2024-11-20 07:23:21.027712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.710 qpair failed and we were unable to recover it. 
00:27:16.710 [2024-11-20 07:23:21.027999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.710 [2024-11-20 07:23:21.028034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.710 qpair failed and we were unable to recover it. 00:27:16.710 [2024-11-20 07:23:21.028238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.710 [2024-11-20 07:23:21.028270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.710 qpair failed and we were unable to recover it. 00:27:16.710 [2024-11-20 07:23:21.028462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.710 [2024-11-20 07:23:21.028493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.710 qpair failed and we were unable to recover it. 00:27:16.710 [2024-11-20 07:23:21.028674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.710 [2024-11-20 07:23:21.028706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.710 qpair failed and we were unable to recover it. 00:27:16.710 [2024-11-20 07:23:21.028906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.710 [2024-11-20 07:23:21.028939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.710 qpair failed and we were unable to recover it. 
00:27:16.710 [2024-11-20 07:23:21.029201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.710 [2024-11-20 07:23:21.029233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.710 qpair failed and we were unable to recover it. 00:27:16.710 [2024-11-20 07:23:21.029487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.710 [2024-11-20 07:23:21.029520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.710 qpair failed and we were unable to recover it. 00:27:16.710 [2024-11-20 07:23:21.029784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.710 [2024-11-20 07:23:21.029815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.710 qpair failed and we were unable to recover it. 00:27:16.710 [2024-11-20 07:23:21.030008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.710 [2024-11-20 07:23:21.030041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.710 qpair failed and we were unable to recover it. 00:27:16.711 [2024-11-20 07:23:21.030234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.711 [2024-11-20 07:23:21.030267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.711 qpair failed and we were unable to recover it. 
00:27:16.711 [2024-11-20 07:23:21.030518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.711 [2024-11-20 07:23:21.030550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.711 qpair failed and we were unable to recover it. 00:27:16.711 [2024-11-20 07:23:21.030727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.711 [2024-11-20 07:23:21.030757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.711 qpair failed and we were unable to recover it. 00:27:16.711 [2024-11-20 07:23:21.031037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.711 [2024-11-20 07:23:21.031071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.711 qpair failed and we were unable to recover it. 00:27:16.711 [2024-11-20 07:23:21.031278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.711 [2024-11-20 07:23:21.031317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.711 qpair failed and we were unable to recover it. 00:27:16.711 [2024-11-20 07:23:21.031467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.711 [2024-11-20 07:23:21.031498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.711 qpair failed and we were unable to recover it. 
00:27:16.711 [2024-11-20 07:23:21.031778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.711 [2024-11-20 07:23:21.031809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:16.711 qpair failed and we were unable to recover it.
00:27:16.711 [2024-11-20 07:23:21.032056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.711 [2024-11-20 07:23:21.032088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:16.711 qpair failed and we were unable to recover it.
00:27:16.711 [2024-11-20 07:23:21.032287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.711 [2024-11-20 07:23:21.032318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:16.711 qpair failed and we were unable to recover it.
00:27:16.711 [2024-11-20 07:23:21.032577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.711 [2024-11-20 07:23:21.032608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:16.711 qpair failed and we were unable to recover it.
00:27:16.711 [2024-11-20 07:23:21.032822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.711 [2024-11-20 07:23:21.032854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:16.711 qpair failed and we were unable to recover it.
00:27:16.711 [2024-11-20 07:23:21.033001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.711 [2024-11-20 07:23:21.033035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:16.711 qpair failed and we were unable to recover it.
00:27:16.711 [2024-11-20 07:23:21.033228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.711 [2024-11-20 07:23:21.033258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:16.711 qpair failed and we were unable to recover it.
00:27:16.711 [2024-11-20 07:23:21.033507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.711 [2024-11-20 07:23:21.033537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:16.711 qpair failed and we were unable to recover it.
00:27:16.711 [2024-11-20 07:23:21.033743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.711 [2024-11-20 07:23:21.033775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:16.711 qpair failed and we were unable to recover it.
00:27:16.711 [2024-11-20 07:23:21.034095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.711 [2024-11-20 07:23:21.034127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:16.711 qpair failed and we were unable to recover it.
00:27:16.711 [2024-11-20 07:23:21.034301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.711 [2024-11-20 07:23:21.034333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:16.711 qpair failed and we were unable to recover it.
00:27:16.711 [2024-11-20 07:23:21.034588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.711 [2024-11-20 07:23:21.034620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:16.711 qpair failed and we were unable to recover it.
00:27:16.711 [2024-11-20 07:23:21.034824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.711 [2024-11-20 07:23:21.034855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:16.711 qpair failed and we were unable to recover it.
00:27:16.711 [2024-11-20 07:23:21.035041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.711 [2024-11-20 07:23:21.035074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:16.711 qpair failed and we were unable to recover it.
00:27:16.711 [2024-11-20 07:23:21.035253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.711 [2024-11-20 07:23:21.035284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:16.711 qpair failed and we were unable to recover it.
00:27:16.711 [2024-11-20 07:23:21.035569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.711 [2024-11-20 07:23:21.035601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:16.711 qpair failed and we were unable to recover it.
00:27:16.711 [2024-11-20 07:23:21.035820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.711 [2024-11-20 07:23:21.035851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:16.711 qpair failed and we were unable to recover it.
00:27:16.711 [2024-11-20 07:23:21.036061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.711 [2024-11-20 07:23:21.036094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:16.711 qpair failed and we were unable to recover it.
00:27:16.711 [2024-11-20 07:23:21.036281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.711 [2024-11-20 07:23:21.036313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:16.711 qpair failed and we were unable to recover it.
00:27:16.711 [2024-11-20 07:23:21.036461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.711 [2024-11-20 07:23:21.036493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:16.711 qpair failed and we were unable to recover it.
00:27:16.711 [2024-11-20 07:23:21.036699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.711 [2024-11-20 07:23:21.036729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:16.711 qpair failed and we were unable to recover it.
00:27:16.711 [2024-11-20 07:23:21.036869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.711 [2024-11-20 07:23:21.036901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:16.711 qpair failed and we were unable to recover it.
00:27:16.711 [2024-11-20 07:23:21.037180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.711 [2024-11-20 07:23:21.037213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:16.711 qpair failed and we were unable to recover it.
00:27:16.711 [2024-11-20 07:23:21.037417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.712 [2024-11-20 07:23:21.037449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:16.712 qpair failed and we were unable to recover it.
00:27:16.712 [2024-11-20 07:23:21.037747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.712 [2024-11-20 07:23:21.037778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:16.712 qpair failed and we were unable to recover it.
00:27:16.712 [2024-11-20 07:23:21.037977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.712 [2024-11-20 07:23:21.038010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:16.712 qpair failed and we were unable to recover it.
00:27:16.712 [2024-11-20 07:23:21.038263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.712 [2024-11-20 07:23:21.038293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:16.712 qpair failed and we were unable to recover it.
00:27:16.712 [2024-11-20 07:23:21.038506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.712 [2024-11-20 07:23:21.038537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:16.712 qpair failed and we were unable to recover it.
00:27:16.712 [2024-11-20 07:23:21.038754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.712 [2024-11-20 07:23:21.038785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:16.712 qpair failed and we were unable to recover it.
00:27:16.712 [2024-11-20 07:23:21.039057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.712 [2024-11-20 07:23:21.039088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:16.712 qpair failed and we were unable to recover it.
00:27:16.712 [2024-11-20 07:23:21.039350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.712 [2024-11-20 07:23:21.039382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:16.712 qpair failed and we were unable to recover it.
00:27:16.712 [2024-11-20 07:23:21.039514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.712 [2024-11-20 07:23:21.039546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:16.712 qpair failed and we were unable to recover it.
00:27:16.712 [2024-11-20 07:23:21.039892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.712 [2024-11-20 07:23:21.039922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:16.712 qpair failed and we were unable to recover it.
00:27:16.712 [2024-11-20 07:23:21.040148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.712 [2024-11-20 07:23:21.040181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:16.712 qpair failed and we were unable to recover it.
00:27:16.712 [2024-11-20 07:23:21.040401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.712 [2024-11-20 07:23:21.040438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:16.712 qpair failed and we were unable to recover it.
00:27:16.712 [2024-11-20 07:23:21.040692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.712 [2024-11-20 07:23:21.040723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:16.712 qpair failed and we were unable to recover it.
00:27:16.712 [2024-11-20 07:23:21.041038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.712 [2024-11-20 07:23:21.041071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:16.712 qpair failed and we were unable to recover it.
00:27:16.712 [2024-11-20 07:23:21.041272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.712 [2024-11-20 07:23:21.041303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:16.712 qpair failed and we were unable to recover it.
00:27:16.712 [2024-11-20 07:23:21.041555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.712 [2024-11-20 07:23:21.041592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:16.712 qpair failed and we were unable to recover it.
00:27:16.712 [2024-11-20 07:23:21.041740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.712 [2024-11-20 07:23:21.041772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:16.712 qpair failed and we were unable to recover it.
00:27:16.712 [2024-11-20 07:23:21.041958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.712 [2024-11-20 07:23:21.041990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:16.712 qpair failed and we were unable to recover it.
00:27:16.712 [2024-11-20 07:23:21.042205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.712 [2024-11-20 07:23:21.042237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:16.712 qpair failed and we were unable to recover it.
00:27:16.712 [2024-11-20 07:23:21.042427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.712 [2024-11-20 07:23:21.042460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:16.712 qpair failed and we were unable to recover it.
00:27:16.712 [2024-11-20 07:23:21.042640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.712 [2024-11-20 07:23:21.042671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:16.712 qpair failed and we were unable to recover it.
00:27:16.712 [2024-11-20 07:23:21.042954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.712 [2024-11-20 07:23:21.042987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:16.712 qpair failed and we were unable to recover it.
00:27:16.712 [2024-11-20 07:23:21.043201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.712 [2024-11-20 07:23:21.043232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:16.712 qpair failed and we were unable to recover it.
00:27:16.712 [2024-11-20 07:23:21.043509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.712 [2024-11-20 07:23:21.043542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:16.712 qpair failed and we were unable to recover it.
00:27:16.712 [2024-11-20 07:23:21.043736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.712 [2024-11-20 07:23:21.043768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:16.712 qpair failed and we were unable to recover it.
00:27:16.712 [2024-11-20 07:23:21.043990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.712 [2024-11-20 07:23:21.044024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:16.712 qpair failed and we were unable to recover it.
00:27:16.712 [2024-11-20 07:23:21.044281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.712 [2024-11-20 07:23:21.044313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:16.712 qpair failed and we were unable to recover it.
00:27:16.712 [2024-11-20 07:23:21.044495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.712 [2024-11-20 07:23:21.044527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:16.712 qpair failed and we were unable to recover it.
00:27:16.712 [2024-11-20 07:23:21.044745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.712 [2024-11-20 07:23:21.044777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:16.712 qpair failed and we were unable to recover it.
00:27:16.712 [2024-11-20 07:23:21.045070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.713 [2024-11-20 07:23:21.045104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:16.713 qpair failed and we were unable to recover it.
00:27:16.713 [2024-11-20 07:23:21.045325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.713 [2024-11-20 07:23:21.045357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:16.713 qpair failed and we were unable to recover it.
00:27:16.713 [2024-11-20 07:23:21.045553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.713 [2024-11-20 07:23:21.045584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:16.713 qpair failed and we were unable to recover it.
00:27:16.713 [2024-11-20 07:23:21.045798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.713 [2024-11-20 07:23:21.045831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:16.713 qpair failed and we were unable to recover it.
00:27:16.713 [2024-11-20 07:23:21.046034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.713 [2024-11-20 07:23:21.046067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:16.713 qpair failed and we were unable to recover it.
00:27:16.713 [2024-11-20 07:23:21.046206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.713 [2024-11-20 07:23:21.046238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:16.713 qpair failed and we were unable to recover it.
00:27:16.713 [2024-11-20 07:23:21.046510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.713 [2024-11-20 07:23:21.046541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:16.713 qpair failed and we were unable to recover it.
00:27:16.713 [2024-11-20 07:23:21.046731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.713 [2024-11-20 07:23:21.046763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:16.713 qpair failed and we were unable to recover it.
00:27:16.713 [2024-11-20 07:23:21.046984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.713 [2024-11-20 07:23:21.047017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:16.713 qpair failed and we were unable to recover it.
00:27:16.713 [2024-11-20 07:23:21.047219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.713 [2024-11-20 07:23:21.047251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:16.713 qpair failed and we were unable to recover it.
00:27:16.713 [2024-11-20 07:23:21.047395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.713 [2024-11-20 07:23:21.047428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:16.713 qpair failed and we were unable to recover it.
00:27:16.713 [2024-11-20 07:23:21.047738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.713 [2024-11-20 07:23:21.047770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:16.713 qpair failed and we were unable to recover it.
00:27:16.713 [2024-11-20 07:23:21.048053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.713 [2024-11-20 07:23:21.048087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:16.713 qpair failed and we were unable to recover it.
00:27:16.713 [2024-11-20 07:23:21.048221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.713 [2024-11-20 07:23:21.048254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:16.713 qpair failed and we were unable to recover it.
00:27:16.713 [2024-11-20 07:23:21.048518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.713 [2024-11-20 07:23:21.048549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:16.713 qpair failed and we were unable to recover it.
00:27:16.713 [2024-11-20 07:23:21.048825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.713 [2024-11-20 07:23:21.048858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:16.713 qpair failed and we were unable to recover it.
00:27:16.713 [2024-11-20 07:23:21.049077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.713 [2024-11-20 07:23:21.049111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:16.713 qpair failed and we were unable to recover it.
00:27:16.713 [2024-11-20 07:23:21.049316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.713 [2024-11-20 07:23:21.049349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:16.713 qpair failed and we were unable to recover it.
00:27:16.713 [2024-11-20 07:23:21.049598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.713 [2024-11-20 07:23:21.049631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:16.713 qpair failed and we were unable to recover it.
00:27:16.713 [2024-11-20 07:23:21.049898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.713 [2024-11-20 07:23:21.049931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:16.713 qpair failed and we were unable to recover it.
00:27:16.713 [2024-11-20 07:23:21.050161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.713 [2024-11-20 07:23:21.050193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:16.713 qpair failed and we were unable to recover it.
00:27:16.713 [2024-11-20 07:23:21.050391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.713 [2024-11-20 07:23:21.050423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:16.713 qpair failed and we were unable to recover it.
00:27:16.713 [2024-11-20 07:23:21.050687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.713 [2024-11-20 07:23:21.050719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:16.713 qpair failed and we were unable to recover it.
00:27:16.713 [2024-11-20 07:23:21.050918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.713 [2024-11-20 07:23:21.050957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:16.713 qpair failed and we were unable to recover it.
00:27:16.713 [2024-11-20 07:23:21.051172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.713 [2024-11-20 07:23:21.051204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:16.713 qpair failed and we were unable to recover it.
00:27:16.713 [2024-11-20 07:23:21.051431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.713 [2024-11-20 07:23:21.051463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:16.713 qpair failed and we were unable to recover it.
00:27:16.713 [2024-11-20 07:23:21.051721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.713 [2024-11-20 07:23:21.051758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:16.713 qpair failed and we were unable to recover it.
00:27:16.713 [2024-11-20 07:23:21.052020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.713 [2024-11-20 07:23:21.052053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:16.713 qpair failed and we were unable to recover it.
00:27:16.713 [2024-11-20 07:23:21.052283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.713 [2024-11-20 07:23:21.052315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:16.713 qpair failed and we were unable to recover it.
00:27:16.713 [2024-11-20 07:23:21.052605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.713 [2024-11-20 07:23:21.052638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:16.713 qpair failed and we were unable to recover it.
00:27:16.713 [2024-11-20 07:23:21.052886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.713 [2024-11-20 07:23:21.052918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:16.713 qpair failed and we were unable to recover it.
00:27:16.713 [2024-11-20 07:23:21.053209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.713 [2024-11-20 07:23:21.053243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:16.713 qpair failed and we were unable to recover it.
00:27:16.714 [2024-11-20 07:23:21.053463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.714 [2024-11-20 07:23:21.053495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:16.714 qpair failed and we were unable to recover it.
00:27:16.714 [2024-11-20 07:23:21.053741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.714 [2024-11-20 07:23:21.053772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:16.714 qpair failed and we were unable to recover it.
00:27:16.714 [2024-11-20 07:23:21.054045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.714 [2024-11-20 07:23:21.054078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:16.714 qpair failed and we were unable to recover it.
00:27:16.714 [2024-11-20 07:23:21.054294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.714 [2024-11-20 07:23:21.054327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:16.714 qpair failed and we were unable to recover it.
00:27:16.714 [2024-11-20 07:23:21.054439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.714 [2024-11-20 07:23:21.054469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:16.714 qpair failed and we were unable to recover it.
00:27:16.714 [2024-11-20 07:23:21.054785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.714 [2024-11-20 07:23:21.054817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:16.714 qpair failed and we were unable to recover it.
00:27:16.714 [2024-11-20 07:23:21.055062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.714 [2024-11-20 07:23:21.055096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:16.714 qpair failed and we were unable to recover it.
00:27:16.714 [2024-11-20 07:23:21.055291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.714 [2024-11-20 07:23:21.055322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:16.714 qpair failed and we were unable to recover it.
00:27:16.714 [2024-11-20 07:23:21.055579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.714 [2024-11-20 07:23:21.055612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:16.714 qpair failed and we were unable to recover it.
00:27:16.714 [2024-11-20 07:23:21.055812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.714 [2024-11-20 07:23:21.055844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:16.714 qpair failed and we were unable to recover it.
00:27:16.714 [2024-11-20 07:23:21.056119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.714 [2024-11-20 07:23:21.056153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:16.714 qpair failed and we were unable to recover it.
00:27:16.714 [2024-11-20 07:23:21.056408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.714 [2024-11-20 07:23:21.056440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.714 qpair failed and we were unable to recover it. 00:27:16.714 [2024-11-20 07:23:21.056733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.714 [2024-11-20 07:23:21.056763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.714 qpair failed and we were unable to recover it. 00:27:16.714 [2024-11-20 07:23:21.056986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.714 [2024-11-20 07:23:21.057019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.714 qpair failed and we were unable to recover it. 00:27:16.714 [2024-11-20 07:23:21.057316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.714 [2024-11-20 07:23:21.057347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.714 qpair failed and we were unable to recover it. 00:27:16.714 [2024-11-20 07:23:21.057604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.714 [2024-11-20 07:23:21.057636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.714 qpair failed and we were unable to recover it. 
00:27:16.714 [2024-11-20 07:23:21.057790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.714 [2024-11-20 07:23:21.057822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.714 qpair failed and we were unable to recover it. 00:27:16.714 [2024-11-20 07:23:21.058075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.714 [2024-11-20 07:23:21.058108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.714 qpair failed and we were unable to recover it. 00:27:16.714 [2024-11-20 07:23:21.058244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.714 [2024-11-20 07:23:21.058276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.714 qpair failed and we were unable to recover it. 00:27:16.714 [2024-11-20 07:23:21.058584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.714 [2024-11-20 07:23:21.058616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.714 qpair failed and we were unable to recover it. 00:27:16.714 [2024-11-20 07:23:21.058899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.714 [2024-11-20 07:23:21.058932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.714 qpair failed and we were unable to recover it. 
00:27:16.714 [2024-11-20 07:23:21.059149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.714 [2024-11-20 07:23:21.059182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.714 qpair failed and we were unable to recover it. 00:27:16.714 [2024-11-20 07:23:21.059454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.714 [2024-11-20 07:23:21.059487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.714 qpair failed and we were unable to recover it. 00:27:16.714 [2024-11-20 07:23:21.059768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.714 [2024-11-20 07:23:21.059800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.714 qpair failed and we were unable to recover it. 00:27:16.714 [2024-11-20 07:23:21.060089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.714 [2024-11-20 07:23:21.060122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.714 qpair failed and we were unable to recover it. 00:27:16.714 [2024-11-20 07:23:21.060400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.714 [2024-11-20 07:23:21.060432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.714 qpair failed and we were unable to recover it. 
00:27:16.714 [2024-11-20 07:23:21.060653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.714 [2024-11-20 07:23:21.060686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.714 qpair failed and we were unable to recover it. 00:27:16.714 [2024-11-20 07:23:21.060970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.714 [2024-11-20 07:23:21.061004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.714 qpair failed and we were unable to recover it. 00:27:16.714 [2024-11-20 07:23:21.061271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.714 [2024-11-20 07:23:21.061303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.714 qpair failed and we were unable to recover it. 00:27:16.714 [2024-11-20 07:23:21.061524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.714 [2024-11-20 07:23:21.061555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.714 qpair failed and we were unable to recover it. 00:27:16.714 [2024-11-20 07:23:21.061749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.714 [2024-11-20 07:23:21.061781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.714 qpair failed and we were unable to recover it. 
00:27:16.714 [2024-11-20 07:23:21.062025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.715 [2024-11-20 07:23:21.062058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.715 qpair failed and we were unable to recover it. 00:27:16.715 [2024-11-20 07:23:21.062277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.715 [2024-11-20 07:23:21.062309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.715 qpair failed and we were unable to recover it. 00:27:16.715 [2024-11-20 07:23:21.062525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.715 [2024-11-20 07:23:21.062557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.715 qpair failed and we were unable to recover it. 00:27:16.715 [2024-11-20 07:23:21.062679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.715 [2024-11-20 07:23:21.062718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.715 qpair failed and we were unable to recover it. 00:27:16.715 [2024-11-20 07:23:21.062867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.715 [2024-11-20 07:23:21.062899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.715 qpair failed and we were unable to recover it. 
00:27:16.715 [2024-11-20 07:23:21.063100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.715 [2024-11-20 07:23:21.063133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.715 qpair failed and we were unable to recover it. 00:27:16.715 [2024-11-20 07:23:21.063322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.715 [2024-11-20 07:23:21.063353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.715 qpair failed and we were unable to recover it. 00:27:16.715 [2024-11-20 07:23:21.063554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.715 [2024-11-20 07:23:21.063586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.715 qpair failed and we were unable to recover it. 00:27:16.715 [2024-11-20 07:23:21.063776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.715 [2024-11-20 07:23:21.063809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.715 qpair failed and we were unable to recover it. 00:27:16.715 [2024-11-20 07:23:21.063983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.715 [2024-11-20 07:23:21.064017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.715 qpair failed and we were unable to recover it. 
00:27:16.715 [2024-11-20 07:23:21.064157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.715 [2024-11-20 07:23:21.064189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.715 qpair failed and we were unable to recover it. 00:27:16.715 [2024-11-20 07:23:21.064335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.715 [2024-11-20 07:23:21.064368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.715 qpair failed and we were unable to recover it. 00:27:16.715 [2024-11-20 07:23:21.064621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.715 [2024-11-20 07:23:21.064653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.715 qpair failed and we were unable to recover it. 00:27:16.715 [2024-11-20 07:23:21.064926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.715 [2024-11-20 07:23:21.064968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.715 qpair failed and we were unable to recover it. 00:27:16.715 [2024-11-20 07:23:21.065101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.715 [2024-11-20 07:23:21.065133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.715 qpair failed and we were unable to recover it. 
00:27:16.715 [2024-11-20 07:23:21.065268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.715 [2024-11-20 07:23:21.065299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.715 qpair failed and we were unable to recover it. 00:27:16.715 [2024-11-20 07:23:21.065495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.715 [2024-11-20 07:23:21.065527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.715 qpair failed and we were unable to recover it. 00:27:16.715 [2024-11-20 07:23:21.065715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.715 [2024-11-20 07:23:21.065747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.715 qpair failed and we were unable to recover it. 00:27:16.715 [2024-11-20 07:23:21.065964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.715 [2024-11-20 07:23:21.065998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.715 qpair failed and we were unable to recover it. 00:27:16.715 [2024-11-20 07:23:21.066117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.715 [2024-11-20 07:23:21.066150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.715 qpair failed and we were unable to recover it. 
00:27:16.715 [2024-11-20 07:23:21.066280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.715 [2024-11-20 07:23:21.066312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.715 qpair failed and we were unable to recover it. 00:27:16.715 [2024-11-20 07:23:21.066510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.715 [2024-11-20 07:23:21.066541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.715 qpair failed and we were unable to recover it. 00:27:16.715 [2024-11-20 07:23:21.066751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.715 [2024-11-20 07:23:21.066782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.715 qpair failed and we were unable to recover it. 00:27:16.715 [2024-11-20 07:23:21.066997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.715 [2024-11-20 07:23:21.067031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.715 qpair failed and we were unable to recover it. 00:27:16.715 [2024-11-20 07:23:21.067250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.715 [2024-11-20 07:23:21.067281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.715 qpair failed and we were unable to recover it. 
00:27:16.715 [2024-11-20 07:23:21.067510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.715 [2024-11-20 07:23:21.067541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.715 qpair failed and we were unable to recover it. 00:27:16.715 [2024-11-20 07:23:21.067798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.715 [2024-11-20 07:23:21.067829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.715 qpair failed and we were unable to recover it. 00:27:16.715 [2024-11-20 07:23:21.068088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.715 [2024-11-20 07:23:21.068121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.715 qpair failed and we were unable to recover it. 00:27:16.715 [2024-11-20 07:23:21.068336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.715 [2024-11-20 07:23:21.068368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.715 qpair failed and we were unable to recover it. 00:27:16.715 [2024-11-20 07:23:21.068507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.715 [2024-11-20 07:23:21.068538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.715 qpair failed and we were unable to recover it. 
00:27:16.715 [2024-11-20 07:23:21.068738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.715 [2024-11-20 07:23:21.068769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.715 qpair failed and we were unable to recover it. 00:27:16.715 [2024-11-20 07:23:21.069026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.715 [2024-11-20 07:23:21.069059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.715 qpair failed and we were unable to recover it. 00:27:16.716 [2024-11-20 07:23:21.069244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.716 [2024-11-20 07:23:21.069276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.716 qpair failed and we were unable to recover it. 00:27:16.716 [2024-11-20 07:23:21.069479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.716 [2024-11-20 07:23:21.069511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.716 qpair failed and we were unable to recover it. 00:27:16.716 [2024-11-20 07:23:21.069707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.716 [2024-11-20 07:23:21.069739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.716 qpair failed and we were unable to recover it. 
00:27:16.716 [2024-11-20 07:23:21.069932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.716 [2024-11-20 07:23:21.069971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.716 qpair failed and we were unable to recover it. 00:27:16.716 [2024-11-20 07:23:21.070105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.716 [2024-11-20 07:23:21.070137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.716 qpair failed and we were unable to recover it. 00:27:16.716 [2024-11-20 07:23:21.070319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.716 [2024-11-20 07:23:21.070351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.716 qpair failed and we were unable to recover it. 00:27:16.716 [2024-11-20 07:23:21.070552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.716 [2024-11-20 07:23:21.070584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.716 qpair failed and we were unable to recover it. 00:27:16.716 [2024-11-20 07:23:21.070836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.716 [2024-11-20 07:23:21.070868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.716 qpair failed and we were unable to recover it. 
00:27:16.716 [2024-11-20 07:23:21.071063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.716 [2024-11-20 07:23:21.071097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.716 qpair failed and we were unable to recover it. 00:27:16.716 [2024-11-20 07:23:21.071352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.716 [2024-11-20 07:23:21.071382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.716 qpair failed and we were unable to recover it. 00:27:16.716 [2024-11-20 07:23:21.071575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.716 [2024-11-20 07:23:21.071606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.716 qpair failed and we were unable to recover it. 00:27:16.716 [2024-11-20 07:23:21.071814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.716 [2024-11-20 07:23:21.071852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.716 qpair failed and we were unable to recover it. 00:27:16.716 [2024-11-20 07:23:21.071999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.716 [2024-11-20 07:23:21.072032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.716 qpair failed and we were unable to recover it. 
00:27:16.716 [2024-11-20 07:23:21.072158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.716 [2024-11-20 07:23:21.072190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.716 qpair failed and we were unable to recover it. 00:27:16.716 [2024-11-20 07:23:21.072456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.716 [2024-11-20 07:23:21.072488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.716 qpair failed and we were unable to recover it. 00:27:16.716 [2024-11-20 07:23:21.072769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.716 [2024-11-20 07:23:21.072800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.716 qpair failed and we were unable to recover it. 00:27:16.716 [2024-11-20 07:23:21.072929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.716 [2024-11-20 07:23:21.072981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.716 qpair failed and we were unable to recover it. 00:27:16.716 [2024-11-20 07:23:21.073125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.716 [2024-11-20 07:23:21.073157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.716 qpair failed and we were unable to recover it. 
00:27:16.716 [2024-11-20 07:23:21.073437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.716 [2024-11-20 07:23:21.073468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.716 qpair failed and we were unable to recover it. 00:27:16.716 [2024-11-20 07:23:21.073664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.716 [2024-11-20 07:23:21.073696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.716 qpair failed and we were unable to recover it. 00:27:16.716 [2024-11-20 07:23:21.073969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.716 [2024-11-20 07:23:21.074002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.716 qpair failed and we were unable to recover it. 00:27:16.716 [2024-11-20 07:23:21.074184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.716 [2024-11-20 07:23:21.074216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.716 qpair failed and we were unable to recover it. 00:27:16.716 [2024-11-20 07:23:21.074496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.716 [2024-11-20 07:23:21.074528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.716 qpair failed and we were unable to recover it. 
00:27:16.716 [2024-11-20 07:23:21.074718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.716 [2024-11-20 07:23:21.074751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.716 qpair failed and we were unable to recover it. 00:27:16.716 [2024-11-20 07:23:21.074962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.716 [2024-11-20 07:23:21.074996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.716 qpair failed and we were unable to recover it. 00:27:16.716 [2024-11-20 07:23:21.075116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.716 [2024-11-20 07:23:21.075148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.716 qpair failed and we were unable to recover it. 00:27:16.716 [2024-11-20 07:23:21.075414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.716 [2024-11-20 07:23:21.075447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.716 qpair failed and we were unable to recover it. 00:27:16.716 [2024-11-20 07:23:21.075713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.716 [2024-11-20 07:23:21.075745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.716 qpair failed and we were unable to recover it. 
00:27:16.716 [2024-11-20 07:23:21.075882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.716 [2024-11-20 07:23:21.075915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.716 qpair failed and we were unable to recover it. 00:27:16.716 [2024-11-20 07:23:21.076182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.716 [2024-11-20 07:23:21.076216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.716 qpair failed and we were unable to recover it. 00:27:16.716 [2024-11-20 07:23:21.076480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.716 [2024-11-20 07:23:21.076513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.716 qpair failed and we were unable to recover it. 00:27:16.716 [2024-11-20 07:23:21.076723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.717 [2024-11-20 07:23:21.076756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.717 qpair failed and we were unable to recover it. 00:27:16.717 [2024-11-20 07:23:21.076963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.717 [2024-11-20 07:23:21.076997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.717 qpair failed and we were unable to recover it. 
00:27:16.717 [2024-11-20 07:23:21.077136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.717 [2024-11-20 07:23:21.077168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:16.717 qpair failed and we were unable to recover it.
[The preceding connect()/qpair error record repeats verbatim (errno = 111, addr=10.0.0.2, port=4420), differing only in timestamp, for tqpair=0x7f0e44000b90 through 07:23:21.088.]
00:27:16.718 [2024-11-20 07:23:21.088186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.718 [2024-11-20 07:23:21.088264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:16.718 qpair failed and we were unable to recover it.
[Same record repeats for tqpair=0x7f0e38000b90 through 07:23:21.096.]
00:27:16.719 [2024-11-20 07:23:21.096514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.719 [2024-11-20 07:23:21.096590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420
00:27:16.719 qpair failed and we were unable to recover it.
[Same record repeats for tqpair=0x7f0e3c000b90 through 07:23:21.103.]
00:27:16.720 [2024-11-20 07:23:21.103208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.720 [2024-11-20 07:23:21.103243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.720 qpair failed and we were unable to recover it. 00:27:16.720 [2024-11-20 07:23:21.103427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.720 [2024-11-20 07:23:21.103459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.720 qpair failed and we were unable to recover it. 00:27:16.720 [2024-11-20 07:23:21.103712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.720 [2024-11-20 07:23:21.103744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.720 qpair failed and we were unable to recover it. 00:27:16.720 [2024-11-20 07:23:21.104046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.720 [2024-11-20 07:23:21.104079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.720 qpair failed and we were unable to recover it. 00:27:16.720 [2024-11-20 07:23:21.104201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.721 [2024-11-20 07:23:21.104233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.721 qpair failed and we were unable to recover it. 
00:27:16.721 [2024-11-20 07:23:21.104441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.721 [2024-11-20 07:23:21.104472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.721 qpair failed and we were unable to recover it. 00:27:16.721 [2024-11-20 07:23:21.104677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.721 [2024-11-20 07:23:21.104709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.721 qpair failed and we were unable to recover it. 00:27:16.721 [2024-11-20 07:23:21.104863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.721 [2024-11-20 07:23:21.104894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.721 qpair failed and we were unable to recover it. 00:27:16.721 [2024-11-20 07:23:21.105207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.721 [2024-11-20 07:23:21.105241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.721 qpair failed and we were unable to recover it. 00:27:16.721 [2024-11-20 07:23:21.105437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.721 [2024-11-20 07:23:21.105469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.721 qpair failed and we were unable to recover it. 
00:27:16.721 [2024-11-20 07:23:21.105655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.721 [2024-11-20 07:23:21.105688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.721 qpair failed and we were unable to recover it. 00:27:16.721 [2024-11-20 07:23:21.105830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.721 [2024-11-20 07:23:21.105862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.721 qpair failed and we were unable to recover it. 00:27:16.721 [2024-11-20 07:23:21.106076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.721 [2024-11-20 07:23:21.106110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.721 qpair failed and we were unable to recover it. 00:27:16.721 [2024-11-20 07:23:21.106239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.721 [2024-11-20 07:23:21.106271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.721 qpair failed and we were unable to recover it. 00:27:16.721 [2024-11-20 07:23:21.106554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.721 [2024-11-20 07:23:21.106587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.721 qpair failed and we were unable to recover it. 
00:27:16.721 [2024-11-20 07:23:21.106768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.721 [2024-11-20 07:23:21.106799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.721 qpair failed and we were unable to recover it. 00:27:16.721 [2024-11-20 07:23:21.106923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.721 [2024-11-20 07:23:21.106962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.721 qpair failed and we were unable to recover it. 00:27:16.721 [2024-11-20 07:23:21.107141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.721 [2024-11-20 07:23:21.107174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.721 qpair failed and we were unable to recover it. 00:27:16.721 [2024-11-20 07:23:21.107449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.721 [2024-11-20 07:23:21.107482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.721 qpair failed and we were unable to recover it. 00:27:16.721 [2024-11-20 07:23:21.107670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.721 [2024-11-20 07:23:21.107703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.721 qpair failed and we were unable to recover it. 
00:27:16.721 [2024-11-20 07:23:21.107820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.721 [2024-11-20 07:23:21.107852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.721 qpair failed and we were unable to recover it. 00:27:16.721 [2024-11-20 07:23:21.108035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.721 [2024-11-20 07:23:21.108069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.721 qpair failed and we were unable to recover it. 00:27:16.721 [2024-11-20 07:23:21.108315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.721 [2024-11-20 07:23:21.108347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.721 qpair failed and we were unable to recover it. 00:27:16.721 [2024-11-20 07:23:21.108526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.721 [2024-11-20 07:23:21.108565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.721 qpair failed and we were unable to recover it. 00:27:16.721 [2024-11-20 07:23:21.108820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.721 [2024-11-20 07:23:21.108853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.721 qpair failed and we were unable to recover it. 
00:27:16.721 [2024-11-20 07:23:21.109062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.721 [2024-11-20 07:23:21.109095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.721 qpair failed and we were unable to recover it. 00:27:16.721 [2024-11-20 07:23:21.109338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.721 [2024-11-20 07:23:21.109371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.721 qpair failed and we were unable to recover it. 00:27:16.721 [2024-11-20 07:23:21.109642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.721 [2024-11-20 07:23:21.109675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.721 qpair failed and we were unable to recover it. 00:27:16.721 [2024-11-20 07:23:21.109920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.721 [2024-11-20 07:23:21.109963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.721 qpair failed and we were unable to recover it. 00:27:16.721 [2024-11-20 07:23:21.110142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.721 [2024-11-20 07:23:21.110175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.721 qpair failed and we were unable to recover it. 
00:27:16.721 [2024-11-20 07:23:21.110370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.721 [2024-11-20 07:23:21.110402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.721 qpair failed and we were unable to recover it. 00:27:16.721 [2024-11-20 07:23:21.110615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.721 [2024-11-20 07:23:21.110647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.721 qpair failed and we were unable to recover it. 00:27:16.721 [2024-11-20 07:23:21.110780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.721 [2024-11-20 07:23:21.110812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.721 qpair failed and we were unable to recover it. 00:27:16.721 [2024-11-20 07:23:21.111079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.721 [2024-11-20 07:23:21.111113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.721 qpair failed and we were unable to recover it. 00:27:16.721 [2024-11-20 07:23:21.111242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.721 [2024-11-20 07:23:21.111274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.721 qpair failed and we were unable to recover it. 
00:27:16.721 [2024-11-20 07:23:21.111520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.722 [2024-11-20 07:23:21.111552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.722 qpair failed and we were unable to recover it. 00:27:16.722 [2024-11-20 07:23:21.111675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.722 [2024-11-20 07:23:21.111707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.722 qpair failed and we were unable to recover it. 00:27:16.722 [2024-11-20 07:23:21.111900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.722 [2024-11-20 07:23:21.111932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.722 qpair failed and we were unable to recover it. 00:27:16.722 [2024-11-20 07:23:21.112087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.722 [2024-11-20 07:23:21.112120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.722 qpair failed and we were unable to recover it. 00:27:16.722 [2024-11-20 07:23:21.112322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.722 [2024-11-20 07:23:21.112353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.722 qpair failed and we were unable to recover it. 
00:27:16.722 [2024-11-20 07:23:21.112542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.722 [2024-11-20 07:23:21.112574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.722 qpair failed and we were unable to recover it. 00:27:16.722 [2024-11-20 07:23:21.112703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.722 [2024-11-20 07:23:21.112736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.722 qpair failed and we were unable to recover it. 00:27:16.722 [2024-11-20 07:23:21.112859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.722 [2024-11-20 07:23:21.112891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.722 qpair failed and we were unable to recover it. 00:27:16.722 [2024-11-20 07:23:21.113051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.722 [2024-11-20 07:23:21.113084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.722 qpair failed and we were unable to recover it. 00:27:16.722 [2024-11-20 07:23:21.113276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.722 [2024-11-20 07:23:21.113309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.722 qpair failed and we were unable to recover it. 
00:27:16.722 [2024-11-20 07:23:21.113582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.722 [2024-11-20 07:23:21.113615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.722 qpair failed and we were unable to recover it. 00:27:16.722 [2024-11-20 07:23:21.113823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.722 [2024-11-20 07:23:21.113855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.722 qpair failed and we were unable to recover it. 00:27:16.722 [2024-11-20 07:23:21.114101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.722 [2024-11-20 07:23:21.114133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.722 qpair failed and we were unable to recover it. 00:27:16.722 [2024-11-20 07:23:21.114260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.722 [2024-11-20 07:23:21.114293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.722 qpair failed and we were unable to recover it. 00:27:16.722 [2024-11-20 07:23:21.114418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.722 [2024-11-20 07:23:21.114449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.722 qpair failed and we were unable to recover it. 
00:27:16.722 [2024-11-20 07:23:21.114663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.722 [2024-11-20 07:23:21.114696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.722 qpair failed and we were unable to recover it. 00:27:16.722 [2024-11-20 07:23:21.114815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.722 [2024-11-20 07:23:21.114847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.722 qpair failed and we were unable to recover it. 00:27:16.722 [2024-11-20 07:23:21.115029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.722 [2024-11-20 07:23:21.115062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.722 qpair failed and we were unable to recover it. 00:27:16.722 [2024-11-20 07:23:21.115330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.722 [2024-11-20 07:23:21.115362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.722 qpair failed and we were unable to recover it. 00:27:16.722 [2024-11-20 07:23:21.115509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.722 [2024-11-20 07:23:21.115541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.722 qpair failed and we were unable to recover it. 
00:27:16.722 [2024-11-20 07:23:21.115736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.722 [2024-11-20 07:23:21.115767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.722 qpair failed and we were unable to recover it. 00:27:16.722 [2024-11-20 07:23:21.115908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.722 [2024-11-20 07:23:21.115940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.722 qpair failed and we were unable to recover it. 00:27:16.722 [2024-11-20 07:23:21.116169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.722 [2024-11-20 07:23:21.116202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.722 qpair failed and we were unable to recover it. 00:27:16.722 [2024-11-20 07:23:21.116376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.722 [2024-11-20 07:23:21.116407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.722 qpair failed and we were unable to recover it. 00:27:16.722 [2024-11-20 07:23:21.116654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.722 [2024-11-20 07:23:21.116686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.722 qpair failed and we were unable to recover it. 
00:27:16.722 [2024-11-20 07:23:21.116884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.722 [2024-11-20 07:23:21.116916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.722 qpair failed and we were unable to recover it. 00:27:16.722 [2024-11-20 07:23:21.117122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.722 [2024-11-20 07:23:21.117157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.722 qpair failed and we were unable to recover it. 00:27:16.722 [2024-11-20 07:23:21.117282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.722 [2024-11-20 07:23:21.117314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.722 qpair failed and we were unable to recover it. 00:27:16.722 [2024-11-20 07:23:21.117453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.722 [2024-11-20 07:23:21.117491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.722 qpair failed and we were unable to recover it. 00:27:16.722 [2024-11-20 07:23:21.117682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.722 [2024-11-20 07:23:21.117714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.722 qpair failed and we were unable to recover it. 
00:27:16.722 [2024-11-20 07:23:21.117897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.722 [2024-11-20 07:23:21.117929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.723 qpair failed and we were unable to recover it. 00:27:16.723 [2024-11-20 07:23:21.118058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.723 [2024-11-20 07:23:21.118090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.723 qpair failed and we were unable to recover it. 00:27:16.723 [2024-11-20 07:23:21.118271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.723 [2024-11-20 07:23:21.118303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.723 qpair failed and we were unable to recover it. 00:27:16.723 [2024-11-20 07:23:21.118572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.723 [2024-11-20 07:23:21.118604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.723 qpair failed and we were unable to recover it. 00:27:16.723 [2024-11-20 07:23:21.118801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.723 [2024-11-20 07:23:21.118832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.723 qpair failed and we were unable to recover it. 
00:27:16.723 [2024-11-20 07:23:21.119100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.723 [2024-11-20 07:23:21.119133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.723 qpair failed and we were unable to recover it. 00:27:16.723 [2024-11-20 07:23:21.119243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.723 [2024-11-20 07:23:21.119276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.723 qpair failed and we were unable to recover it. 00:27:16.723 [2024-11-20 07:23:21.119488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.723 [2024-11-20 07:23:21.119519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.723 qpair failed and we were unable to recover it. 00:27:16.723 [2024-11-20 07:23:21.119650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.723 [2024-11-20 07:23:21.119683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.723 qpair failed and we were unable to recover it. 00:27:16.723 [2024-11-20 07:23:21.119946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.723 [2024-11-20 07:23:21.119989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.723 qpair failed and we were unable to recover it. 
00:27:16.723 [2024-11-20 07:23:21.120114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.723 [2024-11-20 07:23:21.120146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.723 qpair failed and we were unable to recover it. 00:27:16.723 [2024-11-20 07:23:21.120283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.723 [2024-11-20 07:23:21.120316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.723 qpair failed and we were unable to recover it. 00:27:16.723 [2024-11-20 07:23:21.120535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.723 [2024-11-20 07:23:21.120568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.723 qpair failed and we were unable to recover it. 00:27:16.723 [2024-11-20 07:23:21.120708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.723 [2024-11-20 07:23:21.120741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.723 qpair failed and we were unable to recover it. 00:27:16.723 [2024-11-20 07:23:21.120929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.723 [2024-11-20 07:23:21.120983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.723 qpair failed and we were unable to recover it. 
00:27:16.726 [2024-11-20 07:23:21.140881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.726 [2024-11-20 07:23:21.140914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420
00:27:16.726 qpair failed and we were unable to recover it.
00:27:16.726 [2024-11-20 07:23:21.141117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.726 [2024-11-20 07:23:21.141190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:16.726 qpair failed and we were unable to recover it.
00:27:16.726 [2024-11-20 07:23:21.141355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.726 [2024-11-20 07:23:21.141392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:16.726 qpair failed and we were unable to recover it.
00:27:16.726 [2024-11-20 07:23:21.141514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.726 [2024-11-20 07:23:21.141546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:16.726 qpair failed and we were unable to recover it.
00:27:16.726 [2024-11-20 07:23:21.141739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.726 [2024-11-20 07:23:21.141771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:16.726 qpair failed and we were unable to recover it.
00:27:16.727 [2024-11-20 07:23:21.146549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.727 [2024-11-20 07:23:21.146581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.727 qpair failed and we were unable to recover it. 00:27:16.727 [2024-11-20 07:23:21.146752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.727 [2024-11-20 07:23:21.146784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.727 qpair failed and we were unable to recover it. 00:27:16.727 [2024-11-20 07:23:21.146904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.727 [2024-11-20 07:23:21.146936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.727 qpair failed and we were unable to recover it. 00:27:16.727 [2024-11-20 07:23:21.147144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.727 [2024-11-20 07:23:21.147177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.727 qpair failed and we were unable to recover it. 00:27:16.727 [2024-11-20 07:23:21.147309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.727 [2024-11-20 07:23:21.147342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.727 qpair failed and we were unable to recover it. 
00:27:16.727 [2024-11-20 07:23:21.147584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.727 [2024-11-20 07:23:21.147615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.727 qpair failed and we were unable to recover it. 00:27:16.727 [2024-11-20 07:23:21.147809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.727 [2024-11-20 07:23:21.147840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.727 qpair failed and we were unable to recover it. 00:27:16.727 [2024-11-20 07:23:21.148090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.727 [2024-11-20 07:23:21.148124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.727 qpair failed and we were unable to recover it. 00:27:16.727 [2024-11-20 07:23:21.148316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.727 [2024-11-20 07:23:21.148347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.727 qpair failed and we were unable to recover it. 00:27:16.727 [2024-11-20 07:23:21.148620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.727 [2024-11-20 07:23:21.148652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.727 qpair failed and we were unable to recover it. 
00:27:16.727 [2024-11-20 07:23:21.148921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.727 [2024-11-20 07:23:21.148963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.727 qpair failed and we were unable to recover it. 00:27:16.727 [2024-11-20 07:23:21.149083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.727 [2024-11-20 07:23:21.149113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.727 qpair failed and we were unable to recover it. 00:27:16.727 [2024-11-20 07:23:21.149300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.727 [2024-11-20 07:23:21.149333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.727 qpair failed and we were unable to recover it. 00:27:16.727 [2024-11-20 07:23:21.149467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.727 [2024-11-20 07:23:21.149499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.727 qpair failed and we were unable to recover it. 00:27:16.727 [2024-11-20 07:23:21.149741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.727 [2024-11-20 07:23:21.149773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.727 qpair failed and we were unable to recover it. 
00:27:16.727 [2024-11-20 07:23:21.149894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.727 [2024-11-20 07:23:21.149926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.727 qpair failed and we were unable to recover it. 00:27:16.727 [2024-11-20 07:23:21.150137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.727 [2024-11-20 07:23:21.150171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.727 qpair failed and we were unable to recover it. 00:27:16.727 [2024-11-20 07:23:21.150430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.727 [2024-11-20 07:23:21.150461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.727 qpair failed and we were unable to recover it. 00:27:16.727 [2024-11-20 07:23:21.150708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.727 [2024-11-20 07:23:21.150741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.727 qpair failed and we were unable to recover it. 00:27:16.727 [2024-11-20 07:23:21.150863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.727 [2024-11-20 07:23:21.150894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.727 qpair failed and we were unable to recover it. 
00:27:16.727 [2024-11-20 07:23:21.151165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.727 [2024-11-20 07:23:21.151204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.727 qpair failed and we were unable to recover it. 00:27:16.727 [2024-11-20 07:23:21.151399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.727 [2024-11-20 07:23:21.151431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.727 qpair failed and we were unable to recover it. 00:27:16.727 [2024-11-20 07:23:21.151569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.727 [2024-11-20 07:23:21.151602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.727 qpair failed and we were unable to recover it. 00:27:16.727 [2024-11-20 07:23:21.151870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.727 [2024-11-20 07:23:21.151903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.727 qpair failed and we were unable to recover it. 00:27:16.727 [2024-11-20 07:23:21.152101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.727 [2024-11-20 07:23:21.152134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.727 qpair failed and we were unable to recover it. 
00:27:16.727 [2024-11-20 07:23:21.152327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.727 [2024-11-20 07:23:21.152360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.727 qpair failed and we were unable to recover it. 00:27:16.727 [2024-11-20 07:23:21.152466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.727 [2024-11-20 07:23:21.152497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.727 qpair failed and we were unable to recover it. 00:27:16.727 [2024-11-20 07:23:21.152690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.727 [2024-11-20 07:23:21.152721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.727 qpair failed and we were unable to recover it. 00:27:16.727 [2024-11-20 07:23:21.152969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.727 [2024-11-20 07:23:21.153002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.727 qpair failed and we were unable to recover it. 00:27:16.727 [2024-11-20 07:23:21.153246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.727 [2024-11-20 07:23:21.153278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.727 qpair failed and we were unable to recover it. 
00:27:16.727 [2024-11-20 07:23:21.153456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.728 [2024-11-20 07:23:21.153489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.728 qpair failed and we were unable to recover it. 00:27:16.728 [2024-11-20 07:23:21.153690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.728 [2024-11-20 07:23:21.153721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.728 qpair failed and we were unable to recover it. 00:27:16.728 [2024-11-20 07:23:21.154010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.728 [2024-11-20 07:23:21.154043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.728 qpair failed and we were unable to recover it. 00:27:16.728 [2024-11-20 07:23:21.154236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.728 [2024-11-20 07:23:21.154269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.728 qpair failed and we were unable to recover it. 00:27:16.728 [2024-11-20 07:23:21.154409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.728 [2024-11-20 07:23:21.154440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.728 qpair failed and we were unable to recover it. 
00:27:16.728 [2024-11-20 07:23:21.154630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.728 [2024-11-20 07:23:21.154662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.728 qpair failed and we were unable to recover it. 00:27:16.728 [2024-11-20 07:23:21.154960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.728 [2024-11-20 07:23:21.154994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.728 qpair failed and we were unable to recover it. 00:27:16.728 [2024-11-20 07:23:21.155204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.728 [2024-11-20 07:23:21.155235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.728 qpair failed and we were unable to recover it. 00:27:16.728 [2024-11-20 07:23:21.155367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.728 [2024-11-20 07:23:21.155400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.728 qpair failed and we were unable to recover it. 00:27:16.728 [2024-11-20 07:23:21.155505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.728 [2024-11-20 07:23:21.155539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.728 qpair failed and we were unable to recover it. 
00:27:16.728 [2024-11-20 07:23:21.155799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.728 [2024-11-20 07:23:21.155830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.728 qpair failed and we were unable to recover it. 00:27:16.728 [2024-11-20 07:23:21.156009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.728 [2024-11-20 07:23:21.156042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.728 qpair failed and we were unable to recover it. 00:27:16.728 [2024-11-20 07:23:21.156159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.728 [2024-11-20 07:23:21.156191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.728 qpair failed and we were unable to recover it. 00:27:16.728 [2024-11-20 07:23:21.156393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.728 [2024-11-20 07:23:21.156424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.728 qpair failed and we were unable to recover it. 00:27:16.728 [2024-11-20 07:23:21.156605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.728 [2024-11-20 07:23:21.156638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.728 qpair failed and we were unable to recover it. 
00:27:16.728 [2024-11-20 07:23:21.156819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.728 [2024-11-20 07:23:21.156852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.728 qpair failed and we were unable to recover it. 00:27:16.728 [2024-11-20 07:23:21.157045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.728 [2024-11-20 07:23:21.157079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.728 qpair failed and we were unable to recover it. 00:27:16.728 [2024-11-20 07:23:21.157279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.728 [2024-11-20 07:23:21.157312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.728 qpair failed and we were unable to recover it. 00:27:16.728 [2024-11-20 07:23:21.157577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.728 [2024-11-20 07:23:21.157608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.728 qpair failed and we were unable to recover it. 00:27:16.728 [2024-11-20 07:23:21.157847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.728 [2024-11-20 07:23:21.157878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.728 qpair failed and we were unable to recover it. 
00:27:16.728 [2024-11-20 07:23:21.157992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.728 [2024-11-20 07:23:21.158026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.728 qpair failed and we were unable to recover it. 00:27:16.728 [2024-11-20 07:23:21.158216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.728 [2024-11-20 07:23:21.158248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.728 qpair failed and we were unable to recover it. 00:27:16.728 [2024-11-20 07:23:21.158435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.728 [2024-11-20 07:23:21.158466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.728 qpair failed and we were unable to recover it. 00:27:16.728 [2024-11-20 07:23:21.158704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.728 [2024-11-20 07:23:21.158737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.728 qpair failed and we were unable to recover it. 00:27:16.728 [2024-11-20 07:23:21.158847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.728 [2024-11-20 07:23:21.158880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.728 qpair failed and we were unable to recover it. 
00:27:16.728 [2024-11-20 07:23:21.159058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.728 [2024-11-20 07:23:21.159090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.728 qpair failed and we were unable to recover it. 00:27:16.728 [2024-11-20 07:23:21.159301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.728 [2024-11-20 07:23:21.159334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.728 qpair failed and we were unable to recover it. 00:27:16.728 [2024-11-20 07:23:21.159444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.728 [2024-11-20 07:23:21.159475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.728 qpair failed and we were unable to recover it. 00:27:16.729 [2024-11-20 07:23:21.159605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.729 [2024-11-20 07:23:21.159637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.729 qpair failed and we were unable to recover it. 00:27:16.729 [2024-11-20 07:23:21.159832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.729 [2024-11-20 07:23:21.159866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.729 qpair failed and we were unable to recover it. 
00:27:16.729 [2024-11-20 07:23:21.160078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.729 [2024-11-20 07:23:21.160117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.729 qpair failed and we were unable to recover it. 00:27:16.729 [2024-11-20 07:23:21.160235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.729 [2024-11-20 07:23:21.160266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.729 qpair failed and we were unable to recover it. 00:27:16.729 [2024-11-20 07:23:21.160529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.729 [2024-11-20 07:23:21.160561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.729 qpair failed and we were unable to recover it. 00:27:16.729 [2024-11-20 07:23:21.160752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.729 [2024-11-20 07:23:21.160784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.729 qpair failed and we were unable to recover it. 00:27:16.729 [2024-11-20 07:23:21.160906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.729 [2024-11-20 07:23:21.160939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.729 qpair failed and we were unable to recover it. 
00:27:16.729 [2024-11-20 07:23:21.161183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.729 [2024-11-20 07:23:21.161216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.729 qpair failed and we were unable to recover it. 00:27:16.729 [2024-11-20 07:23:21.161429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.729 [2024-11-20 07:23:21.161463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.729 qpair failed and we were unable to recover it. 00:27:16.729 [2024-11-20 07:23:21.161576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.729 [2024-11-20 07:23:21.161607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.729 qpair failed and we were unable to recover it. 00:27:16.729 [2024-11-20 07:23:21.161797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.729 [2024-11-20 07:23:21.161828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.729 qpair failed and we were unable to recover it. 00:27:16.729 [2024-11-20 07:23:21.162027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.729 [2024-11-20 07:23:21.162061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.729 qpair failed and we were unable to recover it. 
00:27:16.729 [2024-11-20 07:23:21.162300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.729 [2024-11-20 07:23:21.162331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.729 qpair failed and we were unable to recover it. 00:27:16.729 [2024-11-20 07:23:21.162513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.729 [2024-11-20 07:23:21.162546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.729 qpair failed and we were unable to recover it. 00:27:16.729 [2024-11-20 07:23:21.162650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.729 [2024-11-20 07:23:21.162680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.729 qpair failed and we were unable to recover it. 00:27:16.729 [2024-11-20 07:23:21.162810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.729 [2024-11-20 07:23:21.162842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.729 qpair failed and we were unable to recover it. 00:27:16.729 [2024-11-20 07:23:21.162968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.729 [2024-11-20 07:23:21.163002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.729 qpair failed and we were unable to recover it. 
00:27:16.729 [2024-11-20 07:23:21.163108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.729 [2024-11-20 07:23:21.163140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:16.729 qpair failed and we were unable to recover it.
00:27:16.733 [preceding three messages repeated for each reconnect attempt from 2024-11-20 07:23:21.163280 through 2024-11-20 07:23:21.187767; all attempts to 10.0.0.2:4420 on tqpair=0x7f0e44000b90 failed with errno = 111]
00:27:16.733 [2024-11-20 07:23:21.187874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.733 [2024-11-20 07:23:21.187905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.733 qpair failed and we were unable to recover it. 00:27:16.733 [2024-11-20 07:23:21.188120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.733 [2024-11-20 07:23:21.188152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.733 qpair failed and we were unable to recover it. 00:27:16.733 [2024-11-20 07:23:21.188346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.733 [2024-11-20 07:23:21.188378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.733 qpair failed and we were unable to recover it. 00:27:16.733 [2024-11-20 07:23:21.188585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.733 [2024-11-20 07:23:21.188617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.733 qpair failed and we were unable to recover it. 00:27:16.733 [2024-11-20 07:23:21.188862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.733 [2024-11-20 07:23:21.188894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.733 qpair failed and we were unable to recover it. 
00:27:16.733 [2024-11-20 07:23:21.189027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.733 [2024-11-20 07:23:21.189059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.733 qpair failed and we were unable to recover it. 00:27:16.733 [2024-11-20 07:23:21.189313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.733 [2024-11-20 07:23:21.189343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.733 qpair failed and we were unable to recover it. 00:27:16.733 [2024-11-20 07:23:21.189446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.733 [2024-11-20 07:23:21.189476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.733 qpair failed and we were unable to recover it. 00:27:16.733 [2024-11-20 07:23:21.189716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.733 [2024-11-20 07:23:21.189748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.733 qpair failed and we were unable to recover it. 00:27:16.733 [2024-11-20 07:23:21.189917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.733 [2024-11-20 07:23:21.189958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.733 qpair failed and we were unable to recover it. 
00:27:16.733 [2024-11-20 07:23:21.190150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.733 [2024-11-20 07:23:21.190182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.733 qpair failed and we were unable to recover it. 00:27:16.733 [2024-11-20 07:23:21.190449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.733 [2024-11-20 07:23:21.190481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.733 qpair failed and we were unable to recover it. 00:27:16.733 [2024-11-20 07:23:21.190657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.733 [2024-11-20 07:23:21.190689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.733 qpair failed and we were unable to recover it. 00:27:16.733 [2024-11-20 07:23:21.190872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.733 [2024-11-20 07:23:21.190903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.733 qpair failed and we were unable to recover it. 00:27:16.733 [2024-11-20 07:23:21.191129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.733 [2024-11-20 07:23:21.191161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.733 qpair failed and we were unable to recover it. 
00:27:16.733 [2024-11-20 07:23:21.191344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.733 [2024-11-20 07:23:21.191376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.733 qpair failed and we were unable to recover it. 00:27:16.733 [2024-11-20 07:23:21.191516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.733 [2024-11-20 07:23:21.191547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.733 qpair failed and we were unable to recover it. 00:27:16.733 [2024-11-20 07:23:21.191656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.733 [2024-11-20 07:23:21.191689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.733 qpair failed and we were unable to recover it. 00:27:16.733 [2024-11-20 07:23:21.191900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.733 [2024-11-20 07:23:21.191932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.733 qpair failed and we were unable to recover it. 00:27:16.733 [2024-11-20 07:23:21.192126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.733 [2024-11-20 07:23:21.192157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.733 qpair failed and we were unable to recover it. 
00:27:16.733 [2024-11-20 07:23:21.192349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.733 [2024-11-20 07:23:21.192381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.733 qpair failed and we were unable to recover it. 00:27:16.733 [2024-11-20 07:23:21.192560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.733 [2024-11-20 07:23:21.192592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.733 qpair failed and we were unable to recover it. 00:27:16.734 [2024-11-20 07:23:21.192862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.734 [2024-11-20 07:23:21.192893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.734 qpair failed and we were unable to recover it. 00:27:16.734 [2024-11-20 07:23:21.193094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.734 [2024-11-20 07:23:21.193127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.734 qpair failed and we were unable to recover it. 00:27:16.734 [2024-11-20 07:23:21.193307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.734 [2024-11-20 07:23:21.193339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.734 qpair failed and we were unable to recover it. 
00:27:16.734 [2024-11-20 07:23:21.193457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.734 [2024-11-20 07:23:21.193488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.734 qpair failed and we were unable to recover it. 00:27:16.734 [2024-11-20 07:23:21.193608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.734 [2024-11-20 07:23:21.193638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.734 qpair failed and we were unable to recover it. 00:27:16.734 [2024-11-20 07:23:21.193831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.734 [2024-11-20 07:23:21.193863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.734 qpair failed and we were unable to recover it. 00:27:16.734 [2024-11-20 07:23:21.193982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.734 [2024-11-20 07:23:21.194015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.734 qpair failed and we were unable to recover it. 00:27:16.734 [2024-11-20 07:23:21.194120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.734 [2024-11-20 07:23:21.194151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.734 qpair failed and we were unable to recover it. 
00:27:16.734 [2024-11-20 07:23:21.194391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.734 [2024-11-20 07:23:21.194429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.734 qpair failed and we were unable to recover it. 00:27:16.734 [2024-11-20 07:23:21.194551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.734 [2024-11-20 07:23:21.194582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.734 qpair failed and we were unable to recover it. 00:27:16.734 [2024-11-20 07:23:21.194776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.734 [2024-11-20 07:23:21.194808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.734 qpair failed and we were unable to recover it. 00:27:16.734 [2024-11-20 07:23:21.194929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.734 [2024-11-20 07:23:21.194970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.734 qpair failed and we were unable to recover it. 00:27:16.734 [2024-11-20 07:23:21.195146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.734 [2024-11-20 07:23:21.195178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.734 qpair failed and we were unable to recover it. 
00:27:16.734 [2024-11-20 07:23:21.195278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.734 [2024-11-20 07:23:21.195309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.734 qpair failed and we were unable to recover it. 00:27:16.734 [2024-11-20 07:23:21.195433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.734 [2024-11-20 07:23:21.195464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.734 qpair failed and we were unable to recover it. 00:27:16.734 [2024-11-20 07:23:21.195726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.734 [2024-11-20 07:23:21.195757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.734 qpair failed and we were unable to recover it. 00:27:16.734 [2024-11-20 07:23:21.195955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.734 [2024-11-20 07:23:21.195988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.734 qpair failed and we were unable to recover it. 00:27:16.734 [2024-11-20 07:23:21.196230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.734 [2024-11-20 07:23:21.196261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.734 qpair failed and we were unable to recover it. 
00:27:16.734 [2024-11-20 07:23:21.196461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.734 [2024-11-20 07:23:21.196493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.734 qpair failed and we were unable to recover it. 00:27:16.734 [2024-11-20 07:23:21.196770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.734 [2024-11-20 07:23:21.196802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.734 qpair failed and we were unable to recover it. 00:27:16.734 [2024-11-20 07:23:21.197076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.734 [2024-11-20 07:23:21.197109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.734 qpair failed and we were unable to recover it. 00:27:16.734 [2024-11-20 07:23:21.197372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.734 [2024-11-20 07:23:21.197403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.734 qpair failed and we were unable to recover it. 00:27:16.734 [2024-11-20 07:23:21.197526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.734 [2024-11-20 07:23:21.197557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.734 qpair failed and we were unable to recover it. 
00:27:16.734 [2024-11-20 07:23:21.197746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.734 [2024-11-20 07:23:21.197778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.734 qpair failed and we were unable to recover it. 00:27:16.734 [2024-11-20 07:23:21.197903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.734 [2024-11-20 07:23:21.197936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.734 qpair failed and we were unable to recover it. 00:27:16.734 [2024-11-20 07:23:21.198139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.734 [2024-11-20 07:23:21.198172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.734 qpair failed and we were unable to recover it. 00:27:16.734 [2024-11-20 07:23:21.198422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.734 [2024-11-20 07:23:21.198453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.734 qpair failed and we were unable to recover it. 00:27:16.735 [2024-11-20 07:23:21.198625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.735 [2024-11-20 07:23:21.198656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.735 qpair failed and we were unable to recover it. 
00:27:16.735 [2024-11-20 07:23:21.198779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.735 [2024-11-20 07:23:21.198811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.735 qpair failed and we were unable to recover it. 00:27:16.735 [2024-11-20 07:23:21.198985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.735 [2024-11-20 07:23:21.199017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.735 qpair failed and we were unable to recover it. 00:27:16.735 [2024-11-20 07:23:21.199192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.735 [2024-11-20 07:23:21.199224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.735 qpair failed and we were unable to recover it. 00:27:16.735 [2024-11-20 07:23:21.199406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.735 [2024-11-20 07:23:21.199439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.735 qpair failed and we were unable to recover it. 00:27:16.735 [2024-11-20 07:23:21.199631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.735 [2024-11-20 07:23:21.199663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.735 qpair failed and we were unable to recover it. 
00:27:16.735 [2024-11-20 07:23:21.199783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.735 [2024-11-20 07:23:21.199815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.735 qpair failed and we were unable to recover it. 00:27:16.735 [2024-11-20 07:23:21.199991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.735 [2024-11-20 07:23:21.200025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.735 qpair failed and we were unable to recover it. 00:27:16.735 [2024-11-20 07:23:21.200229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.735 [2024-11-20 07:23:21.200261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.735 qpair failed and we were unable to recover it. 00:27:16.735 [2024-11-20 07:23:21.200503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.735 [2024-11-20 07:23:21.200535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.735 qpair failed and we were unable to recover it. 00:27:16.735 [2024-11-20 07:23:21.200716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.735 [2024-11-20 07:23:21.200748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.735 qpair failed and we were unable to recover it. 
00:27:16.735 [2024-11-20 07:23:21.200865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.735 [2024-11-20 07:23:21.200896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.735 qpair failed and we were unable to recover it. 00:27:16.735 [2024-11-20 07:23:21.201112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.735 [2024-11-20 07:23:21.201145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.735 qpair failed and we were unable to recover it. 00:27:16.735 [2024-11-20 07:23:21.201406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.735 [2024-11-20 07:23:21.201438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.735 qpair failed and we were unable to recover it. 00:27:16.735 [2024-11-20 07:23:21.201704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.735 [2024-11-20 07:23:21.201735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.735 qpair failed and we were unable to recover it. 00:27:16.735 [2024-11-20 07:23:21.201931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.735 [2024-11-20 07:23:21.201974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.735 qpair failed and we were unable to recover it. 
00:27:16.735 [2024-11-20 07:23:21.202166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.735 [2024-11-20 07:23:21.202198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.735 qpair failed and we were unable to recover it. 00:27:16.735 [2024-11-20 07:23:21.202376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.735 [2024-11-20 07:23:21.202407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.735 qpair failed and we were unable to recover it. 00:27:16.735 [2024-11-20 07:23:21.202597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.735 [2024-11-20 07:23:21.202629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.735 qpair failed and we were unable to recover it. 00:27:16.735 [2024-11-20 07:23:21.202893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.735 [2024-11-20 07:23:21.202925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.735 qpair failed and we were unable to recover it. 00:27:16.735 [2024-11-20 07:23:21.203167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.735 [2024-11-20 07:23:21.203199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.735 qpair failed and we were unable to recover it. 
00:27:16.735 [2024-11-20 07:23:21.203313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.735 [2024-11-20 07:23:21.203351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.735 qpair failed and we were unable to recover it. 00:27:16.735 [2024-11-20 07:23:21.203453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.735 [2024-11-20 07:23:21.203484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.735 qpair failed and we were unable to recover it. 00:27:16.735 [2024-11-20 07:23:21.203599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.735 [2024-11-20 07:23:21.203631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.735 qpair failed and we were unable to recover it. 00:27:16.735 [2024-11-20 07:23:21.203730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.735 [2024-11-20 07:23:21.203762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.735 qpair failed and we were unable to recover it. 00:27:16.735 [2024-11-20 07:23:21.203893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.735 [2024-11-20 07:23:21.203925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.735 qpair failed and we were unable to recover it. 
00:27:16.735 [2024-11-20 07:23:21.204066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.735 [2024-11-20 07:23:21.204098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.735 qpair failed and we were unable to recover it. 00:27:16.735 [2024-11-20 07:23:21.204280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.735 [2024-11-20 07:23:21.204312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.735 qpair failed and we were unable to recover it. 00:27:16.735 [2024-11-20 07:23:21.204411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.735 [2024-11-20 07:23:21.204443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.735 qpair failed and we were unable to recover it. 00:27:16.735 [2024-11-20 07:23:21.204690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.735 [2024-11-20 07:23:21.204722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.735 qpair failed and we were unable to recover it. 00:27:16.735 [2024-11-20 07:23:21.204969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.735 [2024-11-20 07:23:21.205003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.736 qpair failed and we were unable to recover it. 
00:27:16.736 [2024-11-20 07:23:21.205125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.736 [2024-11-20 07:23:21.205156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.736 qpair failed and we were unable to recover it. 00:27:16.736 [2024-11-20 07:23:21.205394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.736 [2024-11-20 07:23:21.205426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.736 qpair failed and we were unable to recover it. 00:27:16.736 [2024-11-20 07:23:21.205613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.736 [2024-11-20 07:23:21.205645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.736 qpair failed and we were unable to recover it. 00:27:16.736 [2024-11-20 07:23:21.205768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.736 [2024-11-20 07:23:21.205798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.736 qpair failed and we were unable to recover it. 00:27:16.736 [2024-11-20 07:23:21.206020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.736 [2024-11-20 07:23:21.206053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.736 qpair failed and we were unable to recover it. 
00:27:16.736 [2024-11-20 07:23:21.206159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.736 [2024-11-20 07:23:21.206190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.736 qpair failed and we were unable to recover it. 00:27:16.736 [2024-11-20 07:23:21.206310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.736 [2024-11-20 07:23:21.206342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.736 qpair failed and we were unable to recover it. 00:27:16.736 [2024-11-20 07:23:21.206528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.736 [2024-11-20 07:23:21.206560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.736 qpair failed and we were unable to recover it. 00:27:16.736 [2024-11-20 07:23:21.206686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.736 [2024-11-20 07:23:21.206718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.736 qpair failed and we were unable to recover it. 00:27:16.736 [2024-11-20 07:23:21.206966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.736 [2024-11-20 07:23:21.207000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.736 qpair failed and we were unable to recover it. 
00:27:16.736 [2024-11-20 07:23:21.207257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.736 [2024-11-20 07:23:21.207288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.736 qpair failed and we were unable to recover it. 00:27:16.736 [2024-11-20 07:23:21.207554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.736 [2024-11-20 07:23:21.207586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.736 qpair failed and we were unable to recover it. 00:27:16.736 [2024-11-20 07:23:21.207791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.736 [2024-11-20 07:23:21.207822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.736 qpair failed and we were unable to recover it. 00:27:16.736 [2024-11-20 07:23:21.208029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.736 [2024-11-20 07:23:21.208061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.736 qpair failed and we were unable to recover it. 00:27:16.736 [2024-11-20 07:23:21.208250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.736 [2024-11-20 07:23:21.208282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.736 qpair failed and we were unable to recover it. 
00:27:16.736 [2024-11-20 07:23:21.208529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.736 [2024-11-20 07:23:21.208561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.736 qpair failed and we were unable to recover it. 00:27:16.736 [2024-11-20 07:23:21.208812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.736 [2024-11-20 07:23:21.208843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.736 qpair failed and we were unable to recover it. 00:27:16.736 [2024-11-20 07:23:21.209026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.736 [2024-11-20 07:23:21.209060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.736 qpair failed and we were unable to recover it. 00:27:16.736 [2024-11-20 07:23:21.209175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.736 [2024-11-20 07:23:21.209207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.736 qpair failed and we were unable to recover it. 00:27:16.736 [2024-11-20 07:23:21.209421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.736 [2024-11-20 07:23:21.209452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.736 qpair failed and we were unable to recover it. 
00:27:16.736 [2024-11-20 07:23:21.209576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.736 [2024-11-20 07:23:21.209607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.736 qpair failed and we were unable to recover it. 00:27:16.736 [2024-11-20 07:23:21.209794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.736 [2024-11-20 07:23:21.209826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.736 qpair failed and we were unable to recover it. 00:27:16.736 [2024-11-20 07:23:21.209977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.736 [2024-11-20 07:23:21.210011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.736 qpair failed and we were unable to recover it. 00:27:16.736 [2024-11-20 07:23:21.210269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.736 [2024-11-20 07:23:21.210300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.736 qpair failed and we were unable to recover it. 00:27:16.736 [2024-11-20 07:23:21.210431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.736 [2024-11-20 07:23:21.210463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.736 qpair failed and we were unable to recover it. 
00:27:16.736 [2024-11-20 07:23:21.210721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.736 [2024-11-20 07:23:21.210754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.736 qpair failed and we were unable to recover it. 00:27:16.736 [2024-11-20 07:23:21.210968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.736 [2024-11-20 07:23:21.211001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.736 qpair failed and we were unable to recover it. 00:27:16.736 [2024-11-20 07:23:21.211114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.736 [2024-11-20 07:23:21.211146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.736 qpair failed and we were unable to recover it. 00:27:16.736 [2024-11-20 07:23:21.211356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.736 [2024-11-20 07:23:21.211388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.736 qpair failed and we were unable to recover it. 00:27:16.736 [2024-11-20 07:23:21.211489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.736 [2024-11-20 07:23:21.211520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.736 qpair failed and we were unable to recover it. 
00:27:16.736 [2024-11-20 07:23:21.211640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.737 [2024-11-20 07:23:21.211679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.737 qpair failed and we were unable to recover it. 00:27:16.737 [2024-11-20 07:23:21.211815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.737 [2024-11-20 07:23:21.211847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.737 qpair failed and we were unable to recover it. 00:27:16.737 [2024-11-20 07:23:21.212025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.737 [2024-11-20 07:23:21.212058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.737 qpair failed and we were unable to recover it. 00:27:16.737 [2024-11-20 07:23:21.212247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.737 [2024-11-20 07:23:21.212279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.737 qpair failed and we were unable to recover it. 00:27:16.737 [2024-11-20 07:23:21.212502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.737 [2024-11-20 07:23:21.212535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.737 qpair failed and we were unable to recover it. 
00:27:16.737 [2024-11-20 07:23:21.212641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.737 [2024-11-20 07:23:21.212672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.737 qpair failed and we were unable to recover it. 00:27:16.737 [2024-11-20 07:23:21.212838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.737 [2024-11-20 07:23:21.212870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.737 qpair failed and we were unable to recover it. 00:27:16.737 [2024-11-20 07:23:21.213005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.737 [2024-11-20 07:23:21.213039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.737 qpair failed and we were unable to recover it. 00:27:16.737 [2024-11-20 07:23:21.213224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.737 [2024-11-20 07:23:21.213256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.737 qpair failed and we were unable to recover it. 00:27:16.737 [2024-11-20 07:23:21.213432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.737 [2024-11-20 07:23:21.213463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.737 qpair failed and we were unable to recover it. 
00:27:16.737 [2024-11-20 07:23:21.213666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.737 [2024-11-20 07:23:21.213698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.737 qpair failed and we were unable to recover it. 00:27:16.737 [2024-11-20 07:23:21.213872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.737 [2024-11-20 07:23:21.213903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.737 qpair failed and we were unable to recover it. 00:27:16.737 [2024-11-20 07:23:21.214114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.737 [2024-11-20 07:23:21.214147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.737 qpair failed and we were unable to recover it. 00:27:16.737 [2024-11-20 07:23:21.214280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.737 [2024-11-20 07:23:21.214313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.737 qpair failed and we were unable to recover it. 00:27:16.737 [2024-11-20 07:23:21.214536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.737 [2024-11-20 07:23:21.214567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.737 qpair failed and we were unable to recover it. 
00:27:16.737 [2024-11-20 07:23:21.214858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.737 [2024-11-20 07:23:21.214890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.737 qpair failed and we were unable to recover it. 00:27:16.737 [2024-11-20 07:23:21.215018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.737 [2024-11-20 07:23:21.215052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.737 qpair failed and we were unable to recover it. 00:27:16.737 [2024-11-20 07:23:21.215175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.737 [2024-11-20 07:23:21.215206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.737 qpair failed and we were unable to recover it. 00:27:16.737 [2024-11-20 07:23:21.215441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.737 [2024-11-20 07:23:21.215473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.737 qpair failed and we were unable to recover it. 00:27:16.737 [2024-11-20 07:23:21.215643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.737 [2024-11-20 07:23:21.215674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.737 qpair failed and we were unable to recover it. 
00:27:16.737 [2024-11-20 07:23:21.215843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.737 [2024-11-20 07:23:21.215875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.737 qpair failed and we were unable to recover it. 00:27:16.737 [2024-11-20 07:23:21.216112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.737 [2024-11-20 07:23:21.216146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.737 qpair failed and we were unable to recover it. 00:27:16.737 [2024-11-20 07:23:21.216360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.737 [2024-11-20 07:23:21.216392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.737 qpair failed and we were unable to recover it. 00:27:16.737 [2024-11-20 07:23:21.216714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.737 [2024-11-20 07:23:21.216746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.737 qpair failed and we were unable to recover it. 00:27:16.737 [2024-11-20 07:23:21.216867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.737 [2024-11-20 07:23:21.216899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.737 qpair failed and we were unable to recover it. 
00:27:16.737 [2024-11-20 07:23:21.217034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.737 [2024-11-20 07:23:21.217066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.737 qpair failed and we were unable to recover it. 00:27:16.737 [2024-11-20 07:23:21.217241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.737 [2024-11-20 07:23:21.217273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:16.737 qpair failed and we were unable to recover it. 00:27:16.737 [2024-11-20 07:23:21.217605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.737 [2024-11-20 07:23:21.217677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.737 qpair failed and we were unable to recover it. 00:27:16.737 [2024-11-20 07:23:21.217825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.737 [2024-11-20 07:23:21.217863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.737 qpair failed and we were unable to recover it. 00:27:16.737 [2024-11-20 07:23:21.218109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.737 [2024-11-20 07:23:21.218145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.737 qpair failed and we were unable to recover it. 
00:27:16.737 [2024-11-20 07:23:21.218358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.737 [2024-11-20 07:23:21.218392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.737 qpair failed and we were unable to recover it. 00:27:16.737 [2024-11-20 07:23:21.218583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.737 [2024-11-20 07:23:21.218615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.737 qpair failed and we were unable to recover it. 00:27:16.737 [2024-11-20 07:23:21.218811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.737 [2024-11-20 07:23:21.218843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.737 qpair failed and we were unable to recover it. 00:27:16.737 [2024-11-20 07:23:21.219022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.737 [2024-11-20 07:23:21.219055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.737 qpair failed and we were unable to recover it. 00:27:16.737 [2024-11-20 07:23:21.219167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.738 [2024-11-20 07:23:21.219199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.738 qpair failed and we were unable to recover it. 
00:27:16.738 [2024-11-20 07:23:21.219438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.738 [2024-11-20 07:23:21.219471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.738 qpair failed and we were unable to recover it. 00:27:16.738 [2024-11-20 07:23:21.219652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.738 [2024-11-20 07:23:21.219685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.738 qpair failed and we were unable to recover it. 00:27:16.738 [2024-11-20 07:23:21.219876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.738 [2024-11-20 07:23:21.219909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.738 qpair failed and we were unable to recover it. 00:27:16.738 [2024-11-20 07:23:21.220045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.738 [2024-11-20 07:23:21.220079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.738 qpair failed and we were unable to recover it. 00:27:16.738 [2024-11-20 07:23:21.220305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.738 [2024-11-20 07:23:21.220338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.738 qpair failed and we were unable to recover it. 
00:27:16.738 [2024-11-20 07:23:21.220522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.738 [2024-11-20 07:23:21.220564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.738 qpair failed and we were unable to recover it. 00:27:16.738 [2024-11-20 07:23:21.220755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.738 [2024-11-20 07:23:21.220787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.738 qpair failed and we were unable to recover it. 00:27:16.738 [2024-11-20 07:23:21.220972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.738 [2024-11-20 07:23:21.221006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.738 qpair failed and we were unable to recover it. 00:27:16.738 [2024-11-20 07:23:21.221183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.738 [2024-11-20 07:23:21.221215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.738 qpair failed and we were unable to recover it. 00:27:16.738 [2024-11-20 07:23:21.221355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.738 [2024-11-20 07:23:21.221387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.738 qpair failed and we were unable to recover it. 
00:27:16.738 [2024-11-20 07:23:21.221654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.738 [2024-11-20 07:23:21.221686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.738 qpair failed and we were unable to recover it. 00:27:16.738 [2024-11-20 07:23:21.221878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.738 [2024-11-20 07:23:21.221910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.738 qpair failed and we were unable to recover it. 00:27:16.738 [2024-11-20 07:23:21.222105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.738 [2024-11-20 07:23:21.222138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.738 qpair failed and we were unable to recover it. 00:27:16.738 [2024-11-20 07:23:21.222250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.738 [2024-11-20 07:23:21.222282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.738 qpair failed and we were unable to recover it. 00:27:16.738 [2024-11-20 07:23:21.222467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.738 [2024-11-20 07:23:21.222498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.738 qpair failed and we were unable to recover it. 
00:27:16.738 [2024-11-20 07:23:21.222687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.738 [2024-11-20 07:23:21.222719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.738 qpair failed and we were unable to recover it. 00:27:16.738 [2024-11-20 07:23:21.222851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.738 [2024-11-20 07:23:21.222884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.738 qpair failed and we were unable to recover it. 00:27:16.738 [2024-11-20 07:23:21.223066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.738 [2024-11-20 07:23:21.223098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.738 qpair failed and we were unable to recover it. 00:27:16.738 [2024-11-20 07:23:21.223364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.738 [2024-11-20 07:23:21.223397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.738 qpair failed and we were unable to recover it. 00:27:16.738 [2024-11-20 07:23:21.223532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.738 [2024-11-20 07:23:21.223565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.738 qpair failed and we were unable to recover it. 
00:27:16.738 [2024-11-20 07:23:21.223694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.738 [2024-11-20 07:23:21.223726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.738 qpair failed and we were unable to recover it. 00:27:16.738 [2024-11-20 07:23:21.223896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.738 [2024-11-20 07:23:21.223928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.738 qpair failed and we were unable to recover it. 00:27:16.738 [2024-11-20 07:23:21.224136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.738 [2024-11-20 07:23:21.224170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.738 qpair failed and we were unable to recover it. 00:27:16.738 [2024-11-20 07:23:21.224278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.738 [2024-11-20 07:23:21.224310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.738 qpair failed and we were unable to recover it. 00:27:16.738 [2024-11-20 07:23:21.224495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.738 [2024-11-20 07:23:21.224528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:16.738 qpair failed and we were unable to recover it. 
00:27:17.023 [2024-11-20 07:23:21.224770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.023 [2024-11-20 07:23:21.224802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.023 qpair failed and we were unable to recover it. 00:27:17.023 [2024-11-20 07:23:21.224978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.023 [2024-11-20 07:23:21.225013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.023 qpair failed and we were unable to recover it. 00:27:17.023 [2024-11-20 07:23:21.225142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.023 [2024-11-20 07:23:21.225173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.023 qpair failed and we were unable to recover it. 00:27:17.023 [2024-11-20 07:23:21.225361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.023 [2024-11-20 07:23:21.225394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.023 qpair failed and we were unable to recover it. 00:27:17.023 [2024-11-20 07:23:21.225599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.023 [2024-11-20 07:23:21.225632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.023 qpair failed and we were unable to recover it. 
00:27:17.023 [2024-11-20 07:23:21.225873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.023 [2024-11-20 07:23:21.225906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.023 qpair failed and we were unable to recover it. 00:27:17.023 [2024-11-20 07:23:21.226043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.023 [2024-11-20 07:23:21.226075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.023 qpair failed and we were unable to recover it. 00:27:17.023 [2024-11-20 07:23:21.226387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.023 [2024-11-20 07:23:21.226462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.023 qpair failed and we were unable to recover it. 00:27:17.023 [2024-11-20 07:23:21.226672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.023 [2024-11-20 07:23:21.226710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.023 qpair failed and we were unable to recover it. 00:27:17.023 [2024-11-20 07:23:21.226867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.023 [2024-11-20 07:23:21.226899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.023 qpair failed and we were unable to recover it. 
00:27:17.023 [2024-11-20 07:23:21.227182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.023 [2024-11-20 07:23:21.227219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:17.023 qpair failed and we were unable to recover it.
00:27:17.023 [2024-11-20 07:23:21.227342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.023 [2024-11-20 07:23:21.227376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:17.023 qpair failed and we were unable to recover it.
00:27:17.023 [2024-11-20 07:23:21.227575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.023 [2024-11-20 07:23:21.227614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:17.023 qpair failed and we were unable to recover it.
00:27:17.023 [2024-11-20 07:23:21.227736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.023 [2024-11-20 07:23:21.227776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:17.023 qpair failed and we were unable to recover it.
00:27:17.023 [2024-11-20 07:23:21.228008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.023 [2024-11-20 07:23:21.228042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:17.023 qpair failed and we were unable to recover it.
00:27:17.023 [2024-11-20 07:23:21.228285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.023 [2024-11-20 07:23:21.228322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:17.023 qpair failed and we were unable to recover it.
00:27:17.023 [2024-11-20 07:23:21.228572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.023 [2024-11-20 07:23:21.228605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:17.023 qpair failed and we were unable to recover it.
00:27:17.023 [2024-11-20 07:23:21.228792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.023 [2024-11-20 07:23:21.228824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:17.023 qpair failed and we were unable to recover it.
00:27:17.023 [2024-11-20 07:23:21.228941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.023 [2024-11-20 07:23:21.228995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:17.023 qpair failed and we were unable to recover it.
00:27:17.023 [2024-11-20 07:23:21.229117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.023 [2024-11-20 07:23:21.229149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:17.023 qpair failed and we were unable to recover it.
00:27:17.023 [2024-11-20 07:23:21.229284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.023 [2024-11-20 07:23:21.229316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:17.023 qpair failed and we were unable to recover it.
00:27:17.023 [2024-11-20 07:23:21.229570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.023 [2024-11-20 07:23:21.229603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:17.023 qpair failed and we were unable to recover it.
00:27:17.023 [2024-11-20 07:23:21.229747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.023 [2024-11-20 07:23:21.229780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:17.023 qpair failed and we were unable to recover it.
00:27:17.023 [2024-11-20 07:23:21.229976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.023 [2024-11-20 07:23:21.230010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:17.023 qpair failed and we were unable to recover it.
00:27:17.024 [2024-11-20 07:23:21.230141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.024 [2024-11-20 07:23:21.230174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:17.024 qpair failed and we were unable to recover it.
00:27:17.024 [2024-11-20 07:23:21.230285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.024 [2024-11-20 07:23:21.230317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:17.024 qpair failed and we were unable to recover it.
00:27:17.024 [2024-11-20 07:23:21.230440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.024 [2024-11-20 07:23:21.230471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:17.024 qpair failed and we were unable to recover it.
00:27:17.024 [2024-11-20 07:23:21.230613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.024 [2024-11-20 07:23:21.230645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:17.024 qpair failed and we were unable to recover it.
00:27:17.024 [2024-11-20 07:23:21.230772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.024 [2024-11-20 07:23:21.230804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:17.024 qpair failed and we were unable to recover it.
00:27:17.024 [2024-11-20 07:23:21.231038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.024 [2024-11-20 07:23:21.231071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:17.024 qpair failed and we were unable to recover it.
00:27:17.024 [2024-11-20 07:23:21.231263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.024 [2024-11-20 07:23:21.231295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:17.024 qpair failed and we were unable to recover it.
00:27:17.024 [2024-11-20 07:23:21.231424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.024 [2024-11-20 07:23:21.231456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:17.024 qpair failed and we were unable to recover it.
00:27:17.024 [2024-11-20 07:23:21.231699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.024 [2024-11-20 07:23:21.231730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:17.024 qpair failed and we were unable to recover it.
00:27:17.024 [2024-11-20 07:23:21.231991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.024 [2024-11-20 07:23:21.232025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:17.024 qpair failed and we were unable to recover it.
00:27:17.024 [2024-11-20 07:23:21.232237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.024 [2024-11-20 07:23:21.232275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:17.024 qpair failed and we were unable to recover it.
00:27:17.024 [2024-11-20 07:23:21.232493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.024 [2024-11-20 07:23:21.232525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:17.024 qpair failed and we were unable to recover it.
00:27:17.024 [2024-11-20 07:23:21.232648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.024 [2024-11-20 07:23:21.232680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:17.024 qpair failed and we were unable to recover it.
00:27:17.024 [2024-11-20 07:23:21.232860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.024 [2024-11-20 07:23:21.232891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:17.024 qpair failed and we were unable to recover it.
00:27:17.024 [2024-11-20 07:23:21.233079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.024 [2024-11-20 07:23:21.233111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:17.024 qpair failed and we were unable to recover it.
00:27:17.024 [2024-11-20 07:23:21.233311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.024 [2024-11-20 07:23:21.233343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:17.024 qpair failed and we were unable to recover it.
00:27:17.024 [2024-11-20 07:23:21.233507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.024 [2024-11-20 07:23:21.233537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:17.024 qpair failed and we were unable to recover it.
00:27:17.024 [2024-11-20 07:23:21.233794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.024 [2024-11-20 07:23:21.233825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:17.024 qpair failed and we were unable to recover it.
00:27:17.024 [2024-11-20 07:23:21.234091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.024 [2024-11-20 07:23:21.234124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:17.024 qpair failed and we were unable to recover it.
00:27:17.024 [2024-11-20 07:23:21.234249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.024 [2024-11-20 07:23:21.234280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:17.024 qpair failed and we were unable to recover it.
00:27:17.024 [2024-11-20 07:23:21.234410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.024 [2024-11-20 07:23:21.234441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:17.024 qpair failed and we were unable to recover it.
00:27:17.024 [2024-11-20 07:23:21.234544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.024 [2024-11-20 07:23:21.234574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:17.024 qpair failed and we were unable to recover it.
00:27:17.024 [2024-11-20 07:23:21.234756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.024 [2024-11-20 07:23:21.234787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:17.024 qpair failed and we were unable to recover it.
00:27:17.024 [2024-11-20 07:23:21.234979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.024 [2024-11-20 07:23:21.235011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:17.024 qpair failed and we were unable to recover it.
00:27:17.024 [2024-11-20 07:23:21.235157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.024 [2024-11-20 07:23:21.235189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:17.024 qpair failed and we were unable to recover it.
00:27:17.024 [2024-11-20 07:23:21.235307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.024 [2024-11-20 07:23:21.235339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:17.024 qpair failed and we were unable to recover it.
00:27:17.024 [2024-11-20 07:23:21.235516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.024 [2024-11-20 07:23:21.235547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:17.024 qpair failed and we were unable to recover it.
00:27:17.024 [2024-11-20 07:23:21.235729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.024 [2024-11-20 07:23:21.235760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:17.024 qpair failed and we were unable to recover it.
00:27:17.024 [2024-11-20 07:23:21.236003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.024 [2024-11-20 07:23:21.236036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:17.024 qpair failed and we were unable to recover it.
00:27:17.024 [2024-11-20 07:23:21.236218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.024 [2024-11-20 07:23:21.236249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:17.024 qpair failed and we were unable to recover it.
00:27:17.024 [2024-11-20 07:23:21.236435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.024 [2024-11-20 07:23:21.236467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:17.024 qpair failed and we were unable to recover it.
00:27:17.024 [2024-11-20 07:23:21.236655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.025 [2024-11-20 07:23:21.236685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:17.025 qpair failed and we were unable to recover it.
00:27:17.025 [2024-11-20 07:23:21.236812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.025 [2024-11-20 07:23:21.236843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:17.025 qpair failed and we were unable to recover it.
00:27:17.025 [2024-11-20 07:23:21.236981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.025 [2024-11-20 07:23:21.237015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:17.025 qpair failed and we were unable to recover it.
00:27:17.025 [2024-11-20 07:23:21.237141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.025 [2024-11-20 07:23:21.237172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:17.025 qpair failed and we were unable to recover it.
00:27:17.025 [2024-11-20 07:23:21.237352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.025 [2024-11-20 07:23:21.237384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:17.025 qpair failed and we were unable to recover it.
00:27:17.025 [2024-11-20 07:23:21.237498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.025 [2024-11-20 07:23:21.237530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:17.025 qpair failed and we were unable to recover it.
00:27:17.025 [2024-11-20 07:23:21.237709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.025 [2024-11-20 07:23:21.237748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:17.025 qpair failed and we were unable to recover it.
00:27:17.025 [2024-11-20 07:23:21.237861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.025 [2024-11-20 07:23:21.237892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:17.025 qpair failed and we were unable to recover it.
00:27:17.025 [2024-11-20 07:23:21.238034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.025 [2024-11-20 07:23:21.238066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:17.025 qpair failed and we were unable to recover it.
00:27:17.025 [2024-11-20 07:23:21.238255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.025 [2024-11-20 07:23:21.238286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:17.025 qpair failed and we were unable to recover it.
00:27:17.025 [2024-11-20 07:23:21.238399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.025 [2024-11-20 07:23:21.238430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:17.025 qpair failed and we were unable to recover it.
00:27:17.025 [2024-11-20 07:23:21.238623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.025 [2024-11-20 07:23:21.238655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:17.025 qpair failed and we were unable to recover it.
00:27:17.025 [2024-11-20 07:23:21.238768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.025 [2024-11-20 07:23:21.238799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:17.025 qpair failed and we were unable to recover it.
00:27:17.025 [2024-11-20 07:23:21.238969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.025 [2024-11-20 07:23:21.239002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:17.025 qpair failed and we were unable to recover it.
00:27:17.025 [2024-11-20 07:23:21.239242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.025 [2024-11-20 07:23:21.239272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:17.025 qpair failed and we were unable to recover it.
00:27:17.025 [2024-11-20 07:23:21.239390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.025 [2024-11-20 07:23:21.239421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:17.025 qpair failed and we were unable to recover it.
00:27:17.025 [2024-11-20 07:23:21.239609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.025 [2024-11-20 07:23:21.239641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:17.025 qpair failed and we were unable to recover it.
00:27:17.025 [2024-11-20 07:23:21.239828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.025 [2024-11-20 07:23:21.239859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:17.025 qpair failed and we were unable to recover it.
00:27:17.025 [2024-11-20 07:23:21.240048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.025 [2024-11-20 07:23:21.240081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:17.025 qpair failed and we were unable to recover it.
00:27:17.025 [2024-11-20 07:23:21.240257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.025 [2024-11-20 07:23:21.240289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:17.025 qpair failed and we were unable to recover it.
00:27:17.025 [2024-11-20 07:23:21.240515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.025 [2024-11-20 07:23:21.240547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:17.025 qpair failed and we were unable to recover it.
00:27:17.025 [2024-11-20 07:23:21.240655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.025 [2024-11-20 07:23:21.240687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:17.025 qpair failed and we were unable to recover it.
00:27:17.025 [2024-11-20 07:23:21.240870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.025 [2024-11-20 07:23:21.240902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:17.025 qpair failed and we were unable to recover it.
00:27:17.025 [2024-11-20 07:23:21.241059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.025 [2024-11-20 07:23:21.241090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:17.025 qpair failed and we were unable to recover it.
00:27:17.025 [2024-11-20 07:23:21.241210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.025 [2024-11-20 07:23:21.241241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:17.025 qpair failed and we were unable to recover it.
00:27:17.025 [2024-11-20 07:23:21.241526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.025 [2024-11-20 07:23:21.241558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:17.025 qpair failed and we were unable to recover it.
00:27:17.025 [2024-11-20 07:23:21.241684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.025 [2024-11-20 07:23:21.241714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:17.025 qpair failed and we were unable to recover it.
00:27:17.025 [2024-11-20 07:23:21.241906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.025 [2024-11-20 07:23:21.241937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:17.025 qpair failed and we were unable to recover it.
00:27:17.025 [2024-11-20 07:23:21.242076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.025 [2024-11-20 07:23:21.242108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:17.025 qpair failed and we were unable to recover it.
00:27:17.025 [2024-11-20 07:23:21.242245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.025 [2024-11-20 07:23:21.242276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:17.025 qpair failed and we were unable to recover it.
00:27:17.025 [2024-11-20 07:23:21.242387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.025 [2024-11-20 07:23:21.242417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:17.025 qpair failed and we were unable to recover it.
00:27:17.025 [2024-11-20 07:23:21.242528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.025 [2024-11-20 07:23:21.242559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:17.025 qpair failed and we were unable to recover it.
00:27:17.025 [2024-11-20 07:23:21.242693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.026 [2024-11-20 07:23:21.242725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:17.026 qpair failed and we were unable to recover it.
00:27:17.026 [2024-11-20 07:23:21.242903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.026 [2024-11-20 07:23:21.242933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:17.026 qpair failed and we were unable to recover it.
00:27:17.026 [2024-11-20 07:23:21.243120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.026 [2024-11-20 07:23:21.243152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:17.026 qpair failed and we were unable to recover it.
00:27:17.026 [2024-11-20 07:23:21.243265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.026 [2024-11-20 07:23:21.243295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:17.026 qpair failed and we were unable to recover it.
00:27:17.026 [2024-11-20 07:23:21.243485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.026 [2024-11-20 07:23:21.243516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:17.026 qpair failed and we were unable to recover it.
00:27:17.026 [2024-11-20 07:23:21.243630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.026 [2024-11-20 07:23:21.243662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:17.026 qpair failed and we were unable to recover it.
00:27:17.026 [2024-11-20 07:23:21.243831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.026 [2024-11-20 07:23:21.243863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:17.026 qpair failed and we were unable to recover it.
00:27:17.026 [2024-11-20 07:23:21.243986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.026 [2024-11-20 07:23:21.244019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:17.026 qpair failed and we were unable to recover it.
00:27:17.026 [2024-11-20 07:23:21.244198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.026 [2024-11-20 07:23:21.244228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:17.026 qpair failed and we were unable to recover it.
00:27:17.026 [2024-11-20 07:23:21.244329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.026 [2024-11-20 07:23:21.244361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:17.026 qpair failed and we were unable to recover it.
00:27:17.026 [2024-11-20 07:23:21.244469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.026 [2024-11-20 07:23:21.244500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:17.026 qpair failed and we were unable to recover it.
00:27:17.026 [2024-11-20 07:23:21.244674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.026 [2024-11-20 07:23:21.244705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:17.026 qpair failed and we were unable to recover it.
00:27:17.026 [2024-11-20 07:23:21.244889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.026 [2024-11-20 07:23:21.244920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:17.026 qpair failed and we were unable to recover it.
00:27:17.026 [2024-11-20 07:23:21.245149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.026 [2024-11-20 07:23:21.245221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420
00:27:17.026 qpair failed and we were unable to recover it.
00:27:17.026 [2024-11-20 07:23:21.245440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.026 [2024-11-20 07:23:21.245477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420
00:27:17.026 qpair failed and we were unable to recover it.
00:27:17.026 [2024-11-20 07:23:21.245666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.026 [2024-11-20 07:23:21.245699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420
00:27:17.026 qpair failed and we were unable to recover it.
00:27:17.026 [2024-11-20 07:23:21.245900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.026 [2024-11-20 07:23:21.245933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420
00:27:17.026 qpair failed and we were unable to recover it.
00:27:17.026 [2024-11-20 07:23:21.246214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.026 [2024-11-20 07:23:21.246247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420
00:27:17.026 qpair failed and we were unable to recover it.
00:27:17.026 [2024-11-20 07:23:21.246429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.026 [2024-11-20 07:23:21.246461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.026 qpair failed and we were unable to recover it. 00:27:17.026 [2024-11-20 07:23:21.246649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.026 [2024-11-20 07:23:21.246682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.026 qpair failed and we were unable to recover it. 00:27:17.026 [2024-11-20 07:23:21.246796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.026 [2024-11-20 07:23:21.246828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.026 qpair failed and we were unable to recover it. 00:27:17.026 [2024-11-20 07:23:21.247023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.026 [2024-11-20 07:23:21.247058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.026 qpair failed and we were unable to recover it. 00:27:17.026 [2024-11-20 07:23:21.247185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.026 [2024-11-20 07:23:21.247217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.026 qpair failed and we were unable to recover it. 
00:27:17.026 [2024-11-20 07:23:21.247340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.026 [2024-11-20 07:23:21.247373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.026 qpair failed and we were unable to recover it. 00:27:17.026 [2024-11-20 07:23:21.247488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.026 [2024-11-20 07:23:21.247520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.026 qpair failed and we were unable to recover it. 00:27:17.026 [2024-11-20 07:23:21.247626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.026 [2024-11-20 07:23:21.247658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.026 qpair failed and we were unable to recover it. 00:27:17.026 [2024-11-20 07:23:21.247834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.026 [2024-11-20 07:23:21.247867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.026 qpair failed and we were unable to recover it. 00:27:17.026 [2024-11-20 07:23:21.247972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.026 [2024-11-20 07:23:21.248005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.026 qpair failed and we were unable to recover it. 
00:27:17.026 [2024-11-20 07:23:21.248144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.026 [2024-11-20 07:23:21.248184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.026 qpair failed and we were unable to recover it. 00:27:17.026 [2024-11-20 07:23:21.248304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.026 [2024-11-20 07:23:21.248336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.026 qpair failed and we were unable to recover it. 00:27:17.026 [2024-11-20 07:23:21.248437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.026 [2024-11-20 07:23:21.248469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.026 qpair failed and we were unable to recover it. 00:27:17.026 [2024-11-20 07:23:21.248648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.026 [2024-11-20 07:23:21.248681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.026 qpair failed and we were unable to recover it. 00:27:17.027 [2024-11-20 07:23:21.248859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.027 [2024-11-20 07:23:21.248892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.027 qpair failed and we were unable to recover it. 
00:27:17.027 [2024-11-20 07:23:21.249097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.027 [2024-11-20 07:23:21.249131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.027 qpair failed and we were unable to recover it. 00:27:17.027 [2024-11-20 07:23:21.249305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.027 [2024-11-20 07:23:21.249337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.027 qpair failed and we were unable to recover it. 00:27:17.027 [2024-11-20 07:23:21.249539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.027 [2024-11-20 07:23:21.249570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.027 qpair failed and we were unable to recover it. 00:27:17.027 [2024-11-20 07:23:21.249809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.027 [2024-11-20 07:23:21.249842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.027 qpair failed and we were unable to recover it. 00:27:17.027 [2024-11-20 07:23:21.249972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.027 [2024-11-20 07:23:21.250004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.027 qpair failed and we were unable to recover it. 
00:27:17.027 [2024-11-20 07:23:21.250176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.027 [2024-11-20 07:23:21.250208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.027 qpair failed and we were unable to recover it. 00:27:17.027 [2024-11-20 07:23:21.250399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.027 [2024-11-20 07:23:21.250431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.027 qpair failed and we were unable to recover it. 00:27:17.027 [2024-11-20 07:23:21.250705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.027 [2024-11-20 07:23:21.250737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.027 qpair failed and we were unable to recover it. 00:27:17.027 [2024-11-20 07:23:21.250877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.027 [2024-11-20 07:23:21.250908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.027 qpair failed and we were unable to recover it. 00:27:17.027 [2024-11-20 07:23:21.251191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.027 [2024-11-20 07:23:21.251225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.027 qpair failed and we were unable to recover it. 
00:27:17.027 [2024-11-20 07:23:21.251341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.027 [2024-11-20 07:23:21.251373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.027 qpair failed and we were unable to recover it. 00:27:17.027 [2024-11-20 07:23:21.251491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.027 [2024-11-20 07:23:21.251523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.027 qpair failed and we were unable to recover it. 00:27:17.027 [2024-11-20 07:23:21.251630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.027 [2024-11-20 07:23:21.251662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.027 qpair failed and we were unable to recover it. 00:27:17.027 [2024-11-20 07:23:21.251859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.027 [2024-11-20 07:23:21.251891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.027 qpair failed and we were unable to recover it. 00:27:17.027 [2024-11-20 07:23:21.252095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.027 [2024-11-20 07:23:21.252128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.027 qpair failed and we were unable to recover it. 
00:27:17.027 [2024-11-20 07:23:21.252368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.027 [2024-11-20 07:23:21.252400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.027 qpair failed and we were unable to recover it. 00:27:17.027 [2024-11-20 07:23:21.252537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.027 [2024-11-20 07:23:21.252568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.027 qpair failed and we were unable to recover it. 00:27:17.027 [2024-11-20 07:23:21.252745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.027 [2024-11-20 07:23:21.252777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.027 qpair failed and we were unable to recover it. 00:27:17.027 [2024-11-20 07:23:21.252983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.027 [2024-11-20 07:23:21.253016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.027 qpair failed and we were unable to recover it. 00:27:17.027 [2024-11-20 07:23:21.253200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.027 [2024-11-20 07:23:21.253233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.027 qpair failed and we were unable to recover it. 
00:27:17.027 [2024-11-20 07:23:21.253346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.027 [2024-11-20 07:23:21.253377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.027 qpair failed and we were unable to recover it. 00:27:17.027 [2024-11-20 07:23:21.253618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.027 [2024-11-20 07:23:21.253650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.027 qpair failed and we were unable to recover it. 00:27:17.027 [2024-11-20 07:23:21.253763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.027 [2024-11-20 07:23:21.253798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.027 qpair failed and we were unable to recover it. 00:27:17.027 [2024-11-20 07:23:21.253927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.027 [2024-11-20 07:23:21.253967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.027 qpair failed and we were unable to recover it. 00:27:17.027 [2024-11-20 07:23:21.254139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.027 [2024-11-20 07:23:21.254169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.027 qpair failed and we were unable to recover it. 
00:27:17.027 [2024-11-20 07:23:21.254283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.027 [2024-11-20 07:23:21.254315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.027 qpair failed and we were unable to recover it. 00:27:17.027 [2024-11-20 07:23:21.254503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.027 [2024-11-20 07:23:21.254533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.027 qpair failed and we were unable to recover it. 00:27:17.027 [2024-11-20 07:23:21.254770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.027 [2024-11-20 07:23:21.254802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.027 qpair failed and we were unable to recover it. 00:27:17.027 [2024-11-20 07:23:21.254925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.027 [2024-11-20 07:23:21.254964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.027 qpair failed and we were unable to recover it. 00:27:17.027 [2024-11-20 07:23:21.255086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.027 [2024-11-20 07:23:21.255117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.027 qpair failed and we were unable to recover it. 
00:27:17.027 [2024-11-20 07:23:21.255293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.028 [2024-11-20 07:23:21.255323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.028 qpair failed and we were unable to recover it. 00:27:17.028 [2024-11-20 07:23:21.255430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.028 [2024-11-20 07:23:21.255461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.028 qpair failed and we were unable to recover it. 00:27:17.028 [2024-11-20 07:23:21.255588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.028 [2024-11-20 07:23:21.255619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.028 qpair failed and we were unable to recover it. 00:27:17.028 [2024-11-20 07:23:21.255722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.028 [2024-11-20 07:23:21.255753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.028 qpair failed and we were unable to recover it. 00:27:17.028 [2024-11-20 07:23:21.255970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.028 [2024-11-20 07:23:21.256003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.028 qpair failed and we were unable to recover it. 
00:27:17.028 [2024-11-20 07:23:21.256197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.028 [2024-11-20 07:23:21.256229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.028 qpair failed and we were unable to recover it. 00:27:17.028 [2024-11-20 07:23:21.256430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.028 [2024-11-20 07:23:21.256463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.028 qpair failed and we were unable to recover it. 00:27:17.028 [2024-11-20 07:23:21.256576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.028 [2024-11-20 07:23:21.256608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.028 qpair failed and we were unable to recover it. 00:27:17.028 [2024-11-20 07:23:21.256710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.028 [2024-11-20 07:23:21.256742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.028 qpair failed and we were unable to recover it. 00:27:17.028 [2024-11-20 07:23:21.256855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.028 [2024-11-20 07:23:21.256887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.028 qpair failed and we were unable to recover it. 
00:27:17.028 [2024-11-20 07:23:21.257104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.028 [2024-11-20 07:23:21.257138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.028 qpair failed and we were unable to recover it. 00:27:17.028 [2024-11-20 07:23:21.257310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.028 [2024-11-20 07:23:21.257342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.028 qpair failed and we were unable to recover it. 00:27:17.028 [2024-11-20 07:23:21.257579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.028 [2024-11-20 07:23:21.257611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.028 qpair failed and we were unable to recover it. 00:27:17.028 [2024-11-20 07:23:21.257806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.028 [2024-11-20 07:23:21.257838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.028 qpair failed and we were unable to recover it. 00:27:17.028 [2024-11-20 07:23:21.257976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.028 [2024-11-20 07:23:21.258010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.028 qpair failed and we were unable to recover it. 
00:27:17.028 [2024-11-20 07:23:21.258127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.028 [2024-11-20 07:23:21.258157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.028 qpair failed and we were unable to recover it. 00:27:17.028 [2024-11-20 07:23:21.258393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.028 [2024-11-20 07:23:21.258425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.028 qpair failed and we were unable to recover it. 00:27:17.028 [2024-11-20 07:23:21.258535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.028 [2024-11-20 07:23:21.258566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.028 qpair failed and we were unable to recover it. 00:27:17.028 [2024-11-20 07:23:21.258669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.028 [2024-11-20 07:23:21.258700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.028 qpair failed and we were unable to recover it. 00:27:17.028 [2024-11-20 07:23:21.258885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.028 [2024-11-20 07:23:21.258923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.028 qpair failed and we were unable to recover it. 
00:27:17.028 [2024-11-20 07:23:21.259109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.028 [2024-11-20 07:23:21.259142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.028 qpair failed and we were unable to recover it. 00:27:17.028 [2024-11-20 07:23:21.259326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.028 [2024-11-20 07:23:21.259356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.028 qpair failed and we were unable to recover it. 00:27:17.028 [2024-11-20 07:23:21.259533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.028 [2024-11-20 07:23:21.259564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.028 qpair failed and we were unable to recover it. 00:27:17.028 [2024-11-20 07:23:21.259761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.028 [2024-11-20 07:23:21.259792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.028 qpair failed and we were unable to recover it. 00:27:17.028 [2024-11-20 07:23:21.259926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.028 [2024-11-20 07:23:21.259965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.028 qpair failed and we were unable to recover it. 
00:27:17.028 [2024-11-20 07:23:21.260084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.028 [2024-11-20 07:23:21.260114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.028 qpair failed and we were unable to recover it. 00:27:17.028 [2024-11-20 07:23:21.260290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.028 [2024-11-20 07:23:21.260322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.028 qpair failed and we were unable to recover it. 00:27:17.029 [2024-11-20 07:23:21.260520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.029 [2024-11-20 07:23:21.260551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.029 qpair failed and we were unable to recover it. 00:27:17.029 [2024-11-20 07:23:21.260758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.029 [2024-11-20 07:23:21.260789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.029 qpair failed and we were unable to recover it. 00:27:17.029 [2024-11-20 07:23:21.261031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.029 [2024-11-20 07:23:21.261063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.029 qpair failed and we were unable to recover it. 
00:27:17.029 [2024-11-20 07:23:21.261248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.029 [2024-11-20 07:23:21.261280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.029 qpair failed and we were unable to recover it. 00:27:17.029 [2024-11-20 07:23:21.261387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.029 [2024-11-20 07:23:21.261417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.029 qpair failed and we were unable to recover it. 00:27:17.029 [2024-11-20 07:23:21.261537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.029 [2024-11-20 07:23:21.261569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.029 qpair failed and we were unable to recover it. 00:27:17.029 [2024-11-20 07:23:21.261745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.029 [2024-11-20 07:23:21.261776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.029 qpair failed and we were unable to recover it. 00:27:17.029 [2024-11-20 07:23:21.261909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.029 [2024-11-20 07:23:21.261941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.029 qpair failed and we were unable to recover it. 
00:27:17.029 [2024-11-20 07:23:21.262138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.029 [2024-11-20 07:23:21.262170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.029 qpair failed and we were unable to recover it. 00:27:17.029 [2024-11-20 07:23:21.262343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.029 [2024-11-20 07:23:21.262374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.029 qpair failed and we were unable to recover it. 00:27:17.029 [2024-11-20 07:23:21.262553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.029 [2024-11-20 07:23:21.262584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.029 qpair failed and we were unable to recover it. 00:27:17.029 [2024-11-20 07:23:21.262701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.029 [2024-11-20 07:23:21.262732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.029 qpair failed and we were unable to recover it. 00:27:17.029 [2024-11-20 07:23:21.262919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.029 [2024-11-20 07:23:21.262958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.029 qpair failed and we were unable to recover it. 
00:27:17.029 [2024-11-20 07:23:21.263100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.029 [2024-11-20 07:23:21.263131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.029 qpair failed and we were unable to recover it. 00:27:17.029 [2024-11-20 07:23:21.263242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.029 [2024-11-20 07:23:21.263273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.029 qpair failed and we were unable to recover it. 00:27:17.029 [2024-11-20 07:23:21.263397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.029 [2024-11-20 07:23:21.263427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.029 qpair failed and we were unable to recover it. 00:27:17.029 [2024-11-20 07:23:21.263558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.029 [2024-11-20 07:23:21.263589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.029 qpair failed and we were unable to recover it. 00:27:17.029 [2024-11-20 07:23:21.263760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.029 [2024-11-20 07:23:21.263792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.029 qpair failed and we were unable to recover it. 
00:27:17.031 [2024-11-20 07:23:21.277776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.031 [2024-11-20 07:23:21.277807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:17.031 qpair failed and we were unable to recover it.
00:27:17.031 [2024-11-20 07:23:21.277921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.031 [2024-11-20 07:23:21.277961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:17.031 qpair failed and we were unable to recover it.
00:27:17.031 [2024-11-20 07:23:21.278098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.031 [2024-11-20 07:23:21.278129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:17.031 qpair failed and we were unable to recover it.
00:27:17.031 [2024-11-20 07:23:21.278309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.031 [2024-11-20 07:23:21.278340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:17.031 qpair failed and we were unable to recover it.
00:27:17.031 [2024-11-20 07:23:21.278571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.031 [2024-11-20 07:23:21.278642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:17.031 qpair failed and we were unable to recover it.
00:27:17.032 [2024-11-20 07:23:21.286726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.032 [2024-11-20 07:23:21.286757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.032 qpair failed and we were unable to recover it. 00:27:17.032 [2024-11-20 07:23:21.286929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.032 [2024-11-20 07:23:21.286982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.032 qpair failed and we were unable to recover it. 00:27:17.032 [2024-11-20 07:23:21.287210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.032 [2024-11-20 07:23:21.287241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.032 qpair failed and we were unable to recover it. 00:27:17.032 [2024-11-20 07:23:21.287431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.032 [2024-11-20 07:23:21.287463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.032 qpair failed and we were unable to recover it. 00:27:17.033 [2024-11-20 07:23:21.287700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.033 [2024-11-20 07:23:21.287771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.033 qpair failed and we were unable to recover it. 
00:27:17.033 [2024-11-20 07:23:21.287985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.033 [2024-11-20 07:23:21.288022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.033 qpair failed and we were unable to recover it. 00:27:17.033 [2024-11-20 07:23:21.288154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.033 [2024-11-20 07:23:21.288187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.033 qpair failed and we were unable to recover it. 00:27:17.033 [2024-11-20 07:23:21.288433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.033 [2024-11-20 07:23:21.288466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.033 qpair failed and we were unable to recover it. 00:27:17.033 [2024-11-20 07:23:21.288646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.033 [2024-11-20 07:23:21.288678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.033 qpair failed and we were unable to recover it. 00:27:17.033 [2024-11-20 07:23:21.288808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.033 [2024-11-20 07:23:21.288840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.033 qpair failed and we were unable to recover it. 
00:27:17.033 [2024-11-20 07:23:21.289054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.033 [2024-11-20 07:23:21.289088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.033 qpair failed and we were unable to recover it. 00:27:17.033 [2024-11-20 07:23:21.289273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.033 [2024-11-20 07:23:21.289305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.033 qpair failed and we were unable to recover it. 00:27:17.033 [2024-11-20 07:23:21.289494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.033 [2024-11-20 07:23:21.289526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.033 qpair failed and we were unable to recover it. 00:27:17.033 [2024-11-20 07:23:21.289699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.033 [2024-11-20 07:23:21.289732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.033 qpair failed and we were unable to recover it. 00:27:17.033 [2024-11-20 07:23:21.289917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.033 [2024-11-20 07:23:21.289958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.033 qpair failed and we were unable to recover it. 
00:27:17.033 [2024-11-20 07:23:21.290070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.033 [2024-11-20 07:23:21.290102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.033 qpair failed and we were unable to recover it. 00:27:17.033 [2024-11-20 07:23:21.290210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.033 [2024-11-20 07:23:21.290244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.033 qpair failed and we were unable to recover it. 00:27:17.033 [2024-11-20 07:23:21.290355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.033 [2024-11-20 07:23:21.290387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.033 qpair failed and we were unable to recover it. 00:27:17.033 [2024-11-20 07:23:21.290515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.033 [2024-11-20 07:23:21.290548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.033 qpair failed and we were unable to recover it. 00:27:17.033 [2024-11-20 07:23:21.290757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.033 [2024-11-20 07:23:21.290790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.033 qpair failed and we were unable to recover it. 
00:27:17.033 [2024-11-20 07:23:21.291029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.033 [2024-11-20 07:23:21.291061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.033 qpair failed and we were unable to recover it. 00:27:17.033 [2024-11-20 07:23:21.291248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.033 [2024-11-20 07:23:21.291280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.033 qpair failed and we were unable to recover it. 00:27:17.033 [2024-11-20 07:23:21.291402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.033 [2024-11-20 07:23:21.291435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.033 qpair failed and we were unable to recover it. 00:27:17.033 [2024-11-20 07:23:21.291693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.033 [2024-11-20 07:23:21.291726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.033 qpair failed and we were unable to recover it. 00:27:17.033 [2024-11-20 07:23:21.291831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.033 [2024-11-20 07:23:21.291863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.033 qpair failed and we were unable to recover it. 
00:27:17.033 [2024-11-20 07:23:21.292044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.033 [2024-11-20 07:23:21.292077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.033 qpair failed and we were unable to recover it. 00:27:17.033 [2024-11-20 07:23:21.292265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.033 [2024-11-20 07:23:21.292297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.033 qpair failed and we were unable to recover it. 00:27:17.033 [2024-11-20 07:23:21.292494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.033 [2024-11-20 07:23:21.292527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.033 qpair failed and we were unable to recover it. 00:27:17.033 [2024-11-20 07:23:21.292633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.033 [2024-11-20 07:23:21.292664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.033 qpair failed and we were unable to recover it. 00:27:17.033 [2024-11-20 07:23:21.292890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.033 [2024-11-20 07:23:21.292922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.033 qpair failed and we were unable to recover it. 
00:27:17.033 [2024-11-20 07:23:21.293116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.033 [2024-11-20 07:23:21.293149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.033 qpair failed and we were unable to recover it. 00:27:17.033 [2024-11-20 07:23:21.293341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.033 [2024-11-20 07:23:21.293373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.033 qpair failed and we were unable to recover it. 00:27:17.033 [2024-11-20 07:23:21.293641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.033 [2024-11-20 07:23:21.293674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.033 qpair failed and we were unable to recover it. 00:27:17.033 [2024-11-20 07:23:21.293892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.033 [2024-11-20 07:23:21.293923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.033 qpair failed and we were unable to recover it. 00:27:17.033 [2024-11-20 07:23:21.294123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.033 [2024-11-20 07:23:21.294156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.033 qpair failed and we were unable to recover it. 
00:27:17.033 [2024-11-20 07:23:21.294444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.034 [2024-11-20 07:23:21.294475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.034 qpair failed and we were unable to recover it. 00:27:17.034 [2024-11-20 07:23:21.294685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.034 [2024-11-20 07:23:21.294718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.034 qpair failed and we were unable to recover it. 00:27:17.034 [2024-11-20 07:23:21.294970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.034 [2024-11-20 07:23:21.295004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.034 qpair failed and we were unable to recover it. 00:27:17.034 [2024-11-20 07:23:21.295243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.034 [2024-11-20 07:23:21.295276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.034 qpair failed and we were unable to recover it. 00:27:17.034 [2024-11-20 07:23:21.295487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.034 [2024-11-20 07:23:21.295518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.034 qpair failed and we were unable to recover it. 
00:27:17.034 [2024-11-20 07:23:21.295709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.034 [2024-11-20 07:23:21.295741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.034 qpair failed and we were unable to recover it. 00:27:17.034 [2024-11-20 07:23:21.295999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.034 [2024-11-20 07:23:21.296033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.034 qpair failed and we were unable to recover it. 00:27:17.034 [2024-11-20 07:23:21.296234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.034 [2024-11-20 07:23:21.296265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.034 qpair failed and we were unable to recover it. 00:27:17.034 [2024-11-20 07:23:21.296393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.034 [2024-11-20 07:23:21.296424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.034 qpair failed and we were unable to recover it. 00:27:17.034 [2024-11-20 07:23:21.296597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.034 [2024-11-20 07:23:21.296636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.034 qpair failed and we were unable to recover it. 
00:27:17.034 [2024-11-20 07:23:21.296919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.034 [2024-11-20 07:23:21.296957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.034 qpair failed and we were unable to recover it. 00:27:17.034 [2024-11-20 07:23:21.297142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.034 [2024-11-20 07:23:21.297175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.034 qpair failed and we were unable to recover it. 00:27:17.034 [2024-11-20 07:23:21.297282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.034 [2024-11-20 07:23:21.297313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.034 qpair failed and we were unable to recover it. 00:27:17.034 [2024-11-20 07:23:21.297604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.034 [2024-11-20 07:23:21.297635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.034 qpair failed and we were unable to recover it. 00:27:17.034 [2024-11-20 07:23:21.297841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.034 [2024-11-20 07:23:21.297873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.034 qpair failed and we were unable to recover it. 
00:27:17.034 [2024-11-20 07:23:21.298066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.034 [2024-11-20 07:23:21.298101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.034 qpair failed and we were unable to recover it. 00:27:17.034 [2024-11-20 07:23:21.298231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.034 [2024-11-20 07:23:21.298263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.034 qpair failed and we were unable to recover it. 00:27:17.034 [2024-11-20 07:23:21.298445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.034 [2024-11-20 07:23:21.298478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.034 qpair failed and we were unable to recover it. 00:27:17.034 [2024-11-20 07:23:21.298590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.034 [2024-11-20 07:23:21.298623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.034 qpair failed and we were unable to recover it. 00:27:17.034 [2024-11-20 07:23:21.298863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.034 [2024-11-20 07:23:21.298895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.034 qpair failed and we were unable to recover it. 
00:27:17.034 [2024-11-20 07:23:21.299164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.034 [2024-11-20 07:23:21.299198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.034 qpair failed and we were unable to recover it. 00:27:17.034 [2024-11-20 07:23:21.299402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.034 [2024-11-20 07:23:21.299434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.034 qpair failed and we were unable to recover it. 00:27:17.034 [2024-11-20 07:23:21.299672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.034 [2024-11-20 07:23:21.299705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.034 qpair failed and we were unable to recover it. 00:27:17.034 [2024-11-20 07:23:21.299923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.034 [2024-11-20 07:23:21.299966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.034 qpair failed and we were unable to recover it. 00:27:17.034 [2024-11-20 07:23:21.300144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.034 [2024-11-20 07:23:21.300175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.034 qpair failed and we were unable to recover it. 
00:27:17.034 [2024-11-20 07:23:21.300358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.034 [2024-11-20 07:23:21.300391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.034 qpair failed and we were unable to recover it. 00:27:17.034 [2024-11-20 07:23:21.300593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.034 [2024-11-20 07:23:21.300625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.034 qpair failed and we were unable to recover it. 00:27:17.034 [2024-11-20 07:23:21.300803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.034 [2024-11-20 07:23:21.300835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.034 qpair failed and we were unable to recover it. 00:27:17.035 [2024-11-20 07:23:21.301077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.035 [2024-11-20 07:23:21.301110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.035 qpair failed and we were unable to recover it. 00:27:17.035 [2024-11-20 07:23:21.301285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.035 [2024-11-20 07:23:21.301318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.035 qpair failed and we were unable to recover it. 
00:27:17.035 [2024-11-20 07:23:21.301495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.035 [2024-11-20 07:23:21.301526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.035 qpair failed and we were unable to recover it. 00:27:17.035 [2024-11-20 07:23:21.301650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.035 [2024-11-20 07:23:21.301683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.035 qpair failed and we were unable to recover it. 00:27:17.035 [2024-11-20 07:23:21.301883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.035 [2024-11-20 07:23:21.301915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.035 qpair failed and we were unable to recover it. 00:27:17.035 [2024-11-20 07:23:21.302114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.035 [2024-11-20 07:23:21.302148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.035 qpair failed and we were unable to recover it. 00:27:17.035 [2024-11-20 07:23:21.302289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.035 [2024-11-20 07:23:21.302321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.035 qpair failed and we were unable to recover it. 
00:27:17.035 [2024-11-20 07:23:21.302437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.035 [2024-11-20 07:23:21.302469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.035 qpair failed and we were unable to recover it. 00:27:17.035 [2024-11-20 07:23:21.302645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.035 [2024-11-20 07:23:21.302678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.035 qpair failed and we were unable to recover it. 00:27:17.035 [2024-11-20 07:23:21.302940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.035 [2024-11-20 07:23:21.302981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.035 qpair failed and we were unable to recover it. 00:27:17.035 [2024-11-20 07:23:21.303105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.035 [2024-11-20 07:23:21.303139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.035 qpair failed and we were unable to recover it. 00:27:17.035 [2024-11-20 07:23:21.303434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.035 [2024-11-20 07:23:21.303465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.035 qpair failed and we were unable to recover it. 
00:27:17.035 [2024-11-20 07:23:21.303588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.035 [2024-11-20 07:23:21.303620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.035 qpair failed and we were unable to recover it. 00:27:17.035 [2024-11-20 07:23:21.303891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.035 [2024-11-20 07:23:21.303923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.035 qpair failed and we were unable to recover it. 00:27:17.035 [2024-11-20 07:23:21.304127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.035 [2024-11-20 07:23:21.304159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.035 qpair failed and we were unable to recover it. 00:27:17.035 [2024-11-20 07:23:21.304373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.035 [2024-11-20 07:23:21.304405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.035 qpair failed and we were unable to recover it. 00:27:17.035 [2024-11-20 07:23:21.304576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.035 [2024-11-20 07:23:21.304609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.035 qpair failed and we were unable to recover it. 
00:27:17.035 [2024-11-20 07:23:21.304793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.035 [2024-11-20 07:23:21.304824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420
00:27:17.035 qpair failed and we were unable to recover it.
[The same three-line sequence (connect() failed with errno = 111, i.e. ECONNREFUSED on Linux, followed by the unrecoverable qpair failure) repeats continuously from 07:23:21.304793 through 07:23:21.330689, alternating between tqpair=0x7f0e3c000b90 and tqpair=0x7f0e44000b90, always against addr=10.0.0.2, port=4420.]
00:27:17.039 [2024-11-20 07:23:21.330879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.039 [2024-11-20 07:23:21.330910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.039 qpair failed and we were unable to recover it. 00:27:17.039 [2024-11-20 07:23:21.331053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.039 [2024-11-20 07:23:21.331086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.039 qpair failed and we were unable to recover it. 00:27:17.039 [2024-11-20 07:23:21.331298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.039 [2024-11-20 07:23:21.331330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.039 qpair failed and we were unable to recover it. 00:27:17.039 [2024-11-20 07:23:21.331502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.039 [2024-11-20 07:23:21.331532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.039 qpair failed and we were unable to recover it. 00:27:17.039 [2024-11-20 07:23:21.331719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.039 [2024-11-20 07:23:21.331750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.039 qpair failed and we were unable to recover it. 
00:27:17.039 [2024-11-20 07:23:21.331962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.039 [2024-11-20 07:23:21.331994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.039 qpair failed and we were unable to recover it. 00:27:17.039 [2024-11-20 07:23:21.332183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.039 [2024-11-20 07:23:21.332215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.039 qpair failed and we were unable to recover it. 00:27:17.039 [2024-11-20 07:23:21.332521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.039 [2024-11-20 07:23:21.332553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.039 qpair failed and we were unable to recover it. 00:27:17.039 [2024-11-20 07:23:21.332756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.039 [2024-11-20 07:23:21.332788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.039 qpair failed and we were unable to recover it. 00:27:17.039 [2024-11-20 07:23:21.332916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.039 [2024-11-20 07:23:21.332964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.039 qpair failed and we were unable to recover it. 
00:27:17.039 [2024-11-20 07:23:21.333152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.039 [2024-11-20 07:23:21.333184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.039 qpair failed and we were unable to recover it. 00:27:17.039 [2024-11-20 07:23:21.333443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.039 [2024-11-20 07:23:21.333476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.039 qpair failed and we were unable to recover it. 00:27:17.039 [2024-11-20 07:23:21.333737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.039 [2024-11-20 07:23:21.333769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.039 qpair failed and we were unable to recover it. 00:27:17.039 [2024-11-20 07:23:21.333939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.039 [2024-11-20 07:23:21.333984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.039 qpair failed and we were unable to recover it. 00:27:17.039 [2024-11-20 07:23:21.334205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.039 [2024-11-20 07:23:21.334236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.039 qpair failed and we were unable to recover it. 
00:27:17.039 [2024-11-20 07:23:21.334408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.039 [2024-11-20 07:23:21.334439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.039 qpair failed and we were unable to recover it. 00:27:17.039 [2024-11-20 07:23:21.334625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.039 [2024-11-20 07:23:21.334657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.039 qpair failed and we were unable to recover it. 00:27:17.039 [2024-11-20 07:23:21.334838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.039 [2024-11-20 07:23:21.334870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.039 qpair failed and we were unable to recover it. 00:27:17.039 [2024-11-20 07:23:21.335106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.039 [2024-11-20 07:23:21.335140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.039 qpair failed and we were unable to recover it. 00:27:17.039 [2024-11-20 07:23:21.335323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.040 [2024-11-20 07:23:21.335355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.040 qpair failed and we were unable to recover it. 
00:27:17.040 [2024-11-20 07:23:21.335478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.040 [2024-11-20 07:23:21.335509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.040 qpair failed and we were unable to recover it. 00:27:17.040 [2024-11-20 07:23:21.335780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.040 [2024-11-20 07:23:21.335811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.040 qpair failed and we were unable to recover it. 00:27:17.040 [2024-11-20 07:23:21.335945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.040 [2024-11-20 07:23:21.335988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.040 qpair failed and we were unable to recover it. 00:27:17.040 [2024-11-20 07:23:21.336137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.040 [2024-11-20 07:23:21.336169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.040 qpair failed and we were unable to recover it. 00:27:17.040 [2024-11-20 07:23:21.336285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.040 [2024-11-20 07:23:21.336317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.040 qpair failed and we were unable to recover it. 
00:27:17.040 [2024-11-20 07:23:21.336448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.040 [2024-11-20 07:23:21.336481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.040 qpair failed and we were unable to recover it. 00:27:17.040 [2024-11-20 07:23:21.336731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.040 [2024-11-20 07:23:21.336763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.040 qpair failed and we were unable to recover it. 00:27:17.040 [2024-11-20 07:23:21.336959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.040 [2024-11-20 07:23:21.336992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.040 qpair failed and we were unable to recover it. 00:27:17.040 [2024-11-20 07:23:21.337095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.040 [2024-11-20 07:23:21.337128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.040 qpair failed and we were unable to recover it. 00:27:17.040 [2024-11-20 07:23:21.337251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.040 [2024-11-20 07:23:21.337283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.040 qpair failed and we were unable to recover it. 
00:27:17.040 [2024-11-20 07:23:21.337525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.040 [2024-11-20 07:23:21.337556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.040 qpair failed and we were unable to recover it. 00:27:17.040 [2024-11-20 07:23:21.337685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.040 [2024-11-20 07:23:21.337717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.040 qpair failed and we were unable to recover it. 00:27:17.040 [2024-11-20 07:23:21.337906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.040 [2024-11-20 07:23:21.337938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.040 qpair failed and we were unable to recover it. 00:27:17.040 [2024-11-20 07:23:21.338117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.040 [2024-11-20 07:23:21.338148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.040 qpair failed and we were unable to recover it. 00:27:17.040 [2024-11-20 07:23:21.338322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.040 [2024-11-20 07:23:21.338355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.040 qpair failed and we were unable to recover it. 
00:27:17.040 [2024-11-20 07:23:21.338592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.040 [2024-11-20 07:23:21.338624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.040 qpair failed and we were unable to recover it. 00:27:17.040 [2024-11-20 07:23:21.338759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.040 [2024-11-20 07:23:21.338790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.040 qpair failed and we were unable to recover it. 00:27:17.040 [2024-11-20 07:23:21.338987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.040 [2024-11-20 07:23:21.339021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.040 qpair failed and we were unable to recover it. 00:27:17.040 [2024-11-20 07:23:21.339205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.040 [2024-11-20 07:23:21.339236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.040 qpair failed and we were unable to recover it. 00:27:17.040 [2024-11-20 07:23:21.339340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.040 [2024-11-20 07:23:21.339371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.040 qpair failed and we were unable to recover it. 
00:27:17.040 [2024-11-20 07:23:21.339494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.040 [2024-11-20 07:23:21.339526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.040 qpair failed and we were unable to recover it. 00:27:17.040 [2024-11-20 07:23:21.339650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.040 [2024-11-20 07:23:21.339682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.040 qpair failed and we were unable to recover it. 00:27:17.040 [2024-11-20 07:23:21.339943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.040 [2024-11-20 07:23:21.339985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.040 qpair failed and we were unable to recover it. 00:27:17.040 [2024-11-20 07:23:21.340277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.040 [2024-11-20 07:23:21.340309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.040 qpair failed and we were unable to recover it. 00:27:17.040 [2024-11-20 07:23:21.340570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.040 [2024-11-20 07:23:21.340602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.040 qpair failed and we were unable to recover it. 
00:27:17.040 [2024-11-20 07:23:21.340834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.040 [2024-11-20 07:23:21.340866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.040 qpair failed and we were unable to recover it. 00:27:17.040 [2024-11-20 07:23:21.341055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.040 [2024-11-20 07:23:21.341088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.040 qpair failed and we were unable to recover it. 00:27:17.040 [2024-11-20 07:23:21.341276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.040 [2024-11-20 07:23:21.341308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.040 qpair failed and we were unable to recover it. 00:27:17.040 [2024-11-20 07:23:21.341490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.040 [2024-11-20 07:23:21.341522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.040 qpair failed and we were unable to recover it. 00:27:17.040 [2024-11-20 07:23:21.341726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.040 [2024-11-20 07:23:21.341763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.040 qpair failed and we were unable to recover it. 
00:27:17.040 [2024-11-20 07:23:21.341954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.040 [2024-11-20 07:23:21.341985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.040 qpair failed and we were unable to recover it. 00:27:17.041 [2024-11-20 07:23:21.342177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.041 [2024-11-20 07:23:21.342209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.041 qpair failed and we were unable to recover it. 00:27:17.041 [2024-11-20 07:23:21.342447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.041 [2024-11-20 07:23:21.342479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.041 qpair failed and we were unable to recover it. 00:27:17.041 [2024-11-20 07:23:21.342652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.041 [2024-11-20 07:23:21.342683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.041 qpair failed and we were unable to recover it. 00:27:17.041 [2024-11-20 07:23:21.342867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.041 [2024-11-20 07:23:21.342899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.041 qpair failed and we were unable to recover it. 
00:27:17.041 [2024-11-20 07:23:21.343019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.041 [2024-11-20 07:23:21.343050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.041 qpair failed and we were unable to recover it. 00:27:17.041 [2024-11-20 07:23:21.343166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.041 [2024-11-20 07:23:21.343198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.041 qpair failed and we were unable to recover it. 00:27:17.041 [2024-11-20 07:23:21.343321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.041 [2024-11-20 07:23:21.343352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.041 qpair failed and we were unable to recover it. 00:27:17.041 [2024-11-20 07:23:21.343461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.041 [2024-11-20 07:23:21.343493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.041 qpair failed and we were unable to recover it. 00:27:17.041 [2024-11-20 07:23:21.343676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.041 [2024-11-20 07:23:21.343708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.041 qpair failed and we were unable to recover it. 
00:27:17.041 [2024-11-20 07:23:21.343957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.041 [2024-11-20 07:23:21.343990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.041 qpair failed and we were unable to recover it. 00:27:17.041 [2024-11-20 07:23:21.344121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.041 [2024-11-20 07:23:21.344152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.041 qpair failed and we were unable to recover it. 00:27:17.041 [2024-11-20 07:23:21.344415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.041 [2024-11-20 07:23:21.344447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.041 qpair failed and we were unable to recover it. 00:27:17.041 [2024-11-20 07:23:21.344644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.041 [2024-11-20 07:23:21.344676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.041 qpair failed and we were unable to recover it. 00:27:17.041 [2024-11-20 07:23:21.344843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.041 [2024-11-20 07:23:21.344875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.041 qpair failed and we were unable to recover it. 
00:27:17.041 [2024-11-20 07:23:21.345054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.041 [2024-11-20 07:23:21.345088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.041 qpair failed and we were unable to recover it. 00:27:17.041 [2024-11-20 07:23:21.345257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.041 [2024-11-20 07:23:21.345290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.041 qpair failed and we were unable to recover it. 00:27:17.041 [2024-11-20 07:23:21.345463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.041 [2024-11-20 07:23:21.345495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.041 qpair failed and we were unable to recover it. 00:27:17.041 [2024-11-20 07:23:21.345665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.041 [2024-11-20 07:23:21.345697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.041 qpair failed and we were unable to recover it. 00:27:17.041 [2024-11-20 07:23:21.345800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.041 [2024-11-20 07:23:21.345832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.041 qpair failed and we were unable to recover it. 
00:27:17.041 [2024-11-20 07:23:21.346032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.041 [2024-11-20 07:23:21.346065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.041 qpair failed and we were unable to recover it. 00:27:17.041 [2024-11-20 07:23:21.346181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.041 [2024-11-20 07:23:21.346212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.041 qpair failed and we were unable to recover it. 00:27:17.041 [2024-11-20 07:23:21.346318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.041 [2024-11-20 07:23:21.346349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.041 qpair failed and we were unable to recover it. 00:27:17.041 [2024-11-20 07:23:21.346531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.041 [2024-11-20 07:23:21.346564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.041 qpair failed and we were unable to recover it. 00:27:17.041 [2024-11-20 07:23:21.346753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.041 [2024-11-20 07:23:21.346785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.041 qpair failed and we were unable to recover it. 
00:27:17.041 [2024-11-20 07:23:21.346917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.041 [2024-11-20 07:23:21.346963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.041 qpair failed and we were unable to recover it. 00:27:17.041 [2024-11-20 07:23:21.347266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.041 [2024-11-20 07:23:21.347298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.041 qpair failed and we were unable to recover it. 00:27:17.041 [2024-11-20 07:23:21.347565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.041 [2024-11-20 07:23:21.347597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.041 qpair failed and we were unable to recover it. 00:27:17.041 [2024-11-20 07:23:21.347833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.041 [2024-11-20 07:23:21.347866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.041 qpair failed and we were unable to recover it. 00:27:17.041 [2024-11-20 07:23:21.348055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.041 [2024-11-20 07:23:21.348089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.041 qpair failed and we were unable to recover it. 
00:27:17.045 [2024-11-20 07:23:21.372936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.045 [2024-11-20 07:23:21.372976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.045 qpair failed and we were unable to recover it. 00:27:17.045 [2024-11-20 07:23:21.373180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.045 [2024-11-20 07:23:21.373212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.045 qpair failed and we were unable to recover it. 00:27:17.045 [2024-11-20 07:23:21.373454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.045 [2024-11-20 07:23:21.373486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.045 qpair failed and we were unable to recover it. 00:27:17.045 [2024-11-20 07:23:21.373661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.045 [2024-11-20 07:23:21.373694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.045 qpair failed and we were unable to recover it. 00:27:17.045 [2024-11-20 07:23:21.373881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.045 [2024-11-20 07:23:21.373913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.045 qpair failed and we were unable to recover it. 
00:27:17.045 [2024-11-20 07:23:21.374177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.045 [2024-11-20 07:23:21.374211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.045 qpair failed and we were unable to recover it. 00:27:17.045 [2024-11-20 07:23:21.374437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.045 [2024-11-20 07:23:21.374508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.045 qpair failed and we were unable to recover it. 00:27:17.045 [2024-11-20 07:23:21.374660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.045 [2024-11-20 07:23:21.374696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.045 qpair failed and we were unable to recover it. 00:27:17.045 [2024-11-20 07:23:21.374875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.045 [2024-11-20 07:23:21.374909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.045 qpair failed and we were unable to recover it. 00:27:17.045 [2024-11-20 07:23:21.375146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.045 [2024-11-20 07:23:21.375180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.045 qpair failed and we were unable to recover it. 
00:27:17.045 [2024-11-20 07:23:21.375442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.045 [2024-11-20 07:23:21.375476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.045 qpair failed and we were unable to recover it. 00:27:17.045 [2024-11-20 07:23:21.375719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.045 [2024-11-20 07:23:21.375751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.045 qpair failed and we were unable to recover it. 00:27:17.045 [2024-11-20 07:23:21.375921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.045 [2024-11-20 07:23:21.375965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.045 qpair failed and we were unable to recover it. 00:27:17.045 [2024-11-20 07:23:21.376207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.046 [2024-11-20 07:23:21.376238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.046 qpair failed and we were unable to recover it. 00:27:17.046 [2024-11-20 07:23:21.376516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.046 [2024-11-20 07:23:21.376548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.046 qpair failed and we were unable to recover it. 
00:27:17.046 [2024-11-20 07:23:21.376715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.046 [2024-11-20 07:23:21.376747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.046 qpair failed and we were unable to recover it. 00:27:17.046 [2024-11-20 07:23:21.376876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.046 [2024-11-20 07:23:21.376907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.046 qpair failed and we were unable to recover it. 00:27:17.046 [2024-11-20 07:23:21.377124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.046 [2024-11-20 07:23:21.377158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.046 qpair failed and we were unable to recover it. 00:27:17.046 [2024-11-20 07:23:21.377399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.046 [2024-11-20 07:23:21.377431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.046 qpair failed and we were unable to recover it. 00:27:17.046 [2024-11-20 07:23:21.377607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.046 [2024-11-20 07:23:21.377649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.046 qpair failed and we were unable to recover it. 
00:27:17.046 [2024-11-20 07:23:21.377893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.046 [2024-11-20 07:23:21.377925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.046 qpair failed and we were unable to recover it. 00:27:17.046 [2024-11-20 07:23:21.378143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.046 [2024-11-20 07:23:21.378176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.046 qpair failed and we were unable to recover it. 00:27:17.046 [2024-11-20 07:23:21.378353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.046 [2024-11-20 07:23:21.378385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.046 qpair failed and we were unable to recover it. 00:27:17.046 [2024-11-20 07:23:21.378593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.046 [2024-11-20 07:23:21.378625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.046 qpair failed and we were unable to recover it. 00:27:17.046 [2024-11-20 07:23:21.378813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.046 [2024-11-20 07:23:21.378845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.046 qpair failed and we were unable to recover it. 
00:27:17.046 [2024-11-20 07:23:21.378973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.046 [2024-11-20 07:23:21.379007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.046 qpair failed and we were unable to recover it. 00:27:17.046 [2024-11-20 07:23:21.379136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.046 [2024-11-20 07:23:21.379167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.046 qpair failed and we were unable to recover it. 00:27:17.046 [2024-11-20 07:23:21.379429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.046 [2024-11-20 07:23:21.379461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.046 qpair failed and we were unable to recover it. 00:27:17.046 [2024-11-20 07:23:21.379652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.046 [2024-11-20 07:23:21.379684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.046 qpair failed and we were unable to recover it. 00:27:17.046 [2024-11-20 07:23:21.379816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.046 [2024-11-20 07:23:21.379848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.046 qpair failed and we were unable to recover it. 
00:27:17.046 [2024-11-20 07:23:21.379984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.046 [2024-11-20 07:23:21.380017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.046 qpair failed and we were unable to recover it. 00:27:17.046 [2024-11-20 07:23:21.380149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.046 [2024-11-20 07:23:21.380181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.046 qpair failed and we were unable to recover it. 00:27:17.046 [2024-11-20 07:23:21.380311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.046 [2024-11-20 07:23:21.380343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.046 qpair failed and we were unable to recover it. 00:27:17.046 [2024-11-20 07:23:21.380484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.046 [2024-11-20 07:23:21.380516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.046 qpair failed and we were unable to recover it. 00:27:17.046 [2024-11-20 07:23:21.380760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.046 [2024-11-20 07:23:21.380792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.046 qpair failed and we were unable to recover it. 
00:27:17.046 [2024-11-20 07:23:21.380977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.046 [2024-11-20 07:23:21.381011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.046 qpair failed and we were unable to recover it. 00:27:17.046 [2024-11-20 07:23:21.381289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.046 [2024-11-20 07:23:21.381322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.046 qpair failed and we were unable to recover it. 00:27:17.046 [2024-11-20 07:23:21.381430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.046 [2024-11-20 07:23:21.381462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.046 qpair failed and we were unable to recover it. 00:27:17.046 [2024-11-20 07:23:21.381583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.046 [2024-11-20 07:23:21.381614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.046 qpair failed and we were unable to recover it. 00:27:17.046 [2024-11-20 07:23:21.381799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.046 [2024-11-20 07:23:21.381831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.046 qpair failed and we were unable to recover it. 
00:27:17.046 [2024-11-20 07:23:21.382018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.046 [2024-11-20 07:23:21.382051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.046 qpair failed and we were unable to recover it. 00:27:17.046 [2024-11-20 07:23:21.382246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.046 [2024-11-20 07:23:21.382278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.046 qpair failed and we were unable to recover it. 00:27:17.046 [2024-11-20 07:23:21.382413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.046 [2024-11-20 07:23:21.382445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.046 qpair failed and we were unable to recover it. 00:27:17.046 [2024-11-20 07:23:21.382583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.046 [2024-11-20 07:23:21.382615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.046 qpair failed and we were unable to recover it. 00:27:17.046 [2024-11-20 07:23:21.382798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.046 [2024-11-20 07:23:21.382830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.046 qpair failed and we were unable to recover it. 
00:27:17.047 [2024-11-20 07:23:21.383095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.047 [2024-11-20 07:23:21.383129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.047 qpair failed and we were unable to recover it. 00:27:17.047 [2024-11-20 07:23:21.383344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.047 [2024-11-20 07:23:21.383377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.047 qpair failed and we were unable to recover it. 00:27:17.047 [2024-11-20 07:23:21.383666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.047 [2024-11-20 07:23:21.383698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.047 qpair failed and we were unable to recover it. 00:27:17.047 [2024-11-20 07:23:21.383881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.047 [2024-11-20 07:23:21.383913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.047 qpair failed and we were unable to recover it. 00:27:17.047 [2024-11-20 07:23:21.384100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.047 [2024-11-20 07:23:21.384133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.047 qpair failed and we were unable to recover it. 
00:27:17.047 [2024-11-20 07:23:21.384326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.047 [2024-11-20 07:23:21.384359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.047 qpair failed and we were unable to recover it. 00:27:17.047 [2024-11-20 07:23:21.384552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.047 [2024-11-20 07:23:21.384585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.047 qpair failed and we were unable to recover it. 00:27:17.047 [2024-11-20 07:23:21.384801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.047 [2024-11-20 07:23:21.384833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.047 qpair failed and we were unable to recover it. 00:27:17.047 [2024-11-20 07:23:21.384966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.047 [2024-11-20 07:23:21.385000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.047 qpair failed and we were unable to recover it. 00:27:17.047 [2024-11-20 07:23:21.385146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.047 [2024-11-20 07:23:21.385179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.047 qpair failed and we were unable to recover it. 
00:27:17.047 [2024-11-20 07:23:21.385326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.047 [2024-11-20 07:23:21.385359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.047 qpair failed and we were unable to recover it. 00:27:17.047 [2024-11-20 07:23:21.385488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.047 [2024-11-20 07:23:21.385519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.047 qpair failed and we were unable to recover it. 00:27:17.047 [2024-11-20 07:23:21.385721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.047 [2024-11-20 07:23:21.385753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.047 qpair failed and we were unable to recover it. 00:27:17.047 [2024-11-20 07:23:21.385942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.047 [2024-11-20 07:23:21.385981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.047 qpair failed and we were unable to recover it. 00:27:17.047 [2024-11-20 07:23:21.386116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.047 [2024-11-20 07:23:21.386155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.047 qpair failed and we were unable to recover it. 
00:27:17.047 [2024-11-20 07:23:21.386368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.047 [2024-11-20 07:23:21.386400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.047 qpair failed and we were unable to recover it. 00:27:17.047 [2024-11-20 07:23:21.386647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.047 [2024-11-20 07:23:21.386679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.047 qpair failed and we were unable to recover it. 00:27:17.047 [2024-11-20 07:23:21.386942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.047 [2024-11-20 07:23:21.387000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.047 qpair failed and we were unable to recover it. 00:27:17.047 [2024-11-20 07:23:21.387112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.047 [2024-11-20 07:23:21.387144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.047 qpair failed and we were unable to recover it. 00:27:17.047 [2024-11-20 07:23:21.387256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.047 [2024-11-20 07:23:21.387287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.047 qpair failed and we were unable to recover it. 
00:27:17.047 [2024-11-20 07:23:21.387499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.047 [2024-11-20 07:23:21.387531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.047 qpair failed and we were unable to recover it. 00:27:17.047 [2024-11-20 07:23:21.387771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.047 [2024-11-20 07:23:21.387803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.047 qpair failed and we were unable to recover it. 00:27:17.047 [2024-11-20 07:23:21.387930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.047 [2024-11-20 07:23:21.387978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.047 qpair failed and we were unable to recover it. 00:27:17.047 [2024-11-20 07:23:21.388186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.047 [2024-11-20 07:23:21.388217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.047 qpair failed and we were unable to recover it. 00:27:17.047 [2024-11-20 07:23:21.388436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.047 [2024-11-20 07:23:21.388468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.047 qpair failed and we were unable to recover it. 
00:27:17.047 [2024-11-20 07:23:21.388650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.047 [2024-11-20 07:23:21.388681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.047 qpair failed and we were unable to recover it. 00:27:17.047 [2024-11-20 07:23:21.388818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.047 [2024-11-20 07:23:21.388850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.047 qpair failed and we were unable to recover it. 00:27:17.047 [2024-11-20 07:23:21.388973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.047 [2024-11-20 07:23:21.389007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.047 qpair failed and we were unable to recover it. 00:27:17.047 [2024-11-20 07:23:21.389121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.047 [2024-11-20 07:23:21.389154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.047 qpair failed and we were unable to recover it. 00:27:17.047 [2024-11-20 07:23:21.389276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.047 [2024-11-20 07:23:21.389308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.047 qpair failed and we were unable to recover it. 
00:27:17.047 [2024-11-20 07:23:21.389481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.047 [2024-11-20 07:23:21.389512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.047 qpair failed and we were unable to recover it. 00:27:17.047 [2024-11-20 07:23:21.389710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.048 [2024-11-20 07:23:21.389743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.048 qpair failed and we were unable to recover it. 00:27:17.048 [2024-11-20 07:23:21.389930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.048 [2024-11-20 07:23:21.389972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.048 qpair failed and we were unable to recover it. 00:27:17.048 [2024-11-20 07:23:21.390237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.048 [2024-11-20 07:23:21.390270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.048 qpair failed and we were unable to recover it. 00:27:17.048 [2024-11-20 07:23:21.390387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.048 [2024-11-20 07:23:21.390421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.048 qpair failed and we were unable to recover it. 
00:27:17.048 [2024-11-20 07:23:21.390552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.048 [2024-11-20 07:23:21.390583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.048 qpair failed and we were unable to recover it. 00:27:17.048 [2024-11-20 07:23:21.390845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.048 [2024-11-20 07:23:21.390878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.048 qpair failed and we were unable to recover it. 00:27:17.048 [2024-11-20 07:23:21.391070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.048 [2024-11-20 07:23:21.391103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.048 qpair failed and we were unable to recover it. 00:27:17.048 [2024-11-20 07:23:21.391372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.048 [2024-11-20 07:23:21.391404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.048 qpair failed and we were unable to recover it. 00:27:17.048 [2024-11-20 07:23:21.391611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.048 [2024-11-20 07:23:21.391643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.048 qpair failed and we were unable to recover it. 
00:27:17.048 [2024-11-20 07:23:21.391835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.048 [2024-11-20 07:23:21.391868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420
00:27:17.048 qpair failed and we were unable to recover it.
00:27:17.048 [2024-11-20 07:23:21.392066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.048 [2024-11-20 07:23:21.392101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420
00:27:17.048 qpair failed and we were unable to recover it.
00:27:17.048 [2024-11-20 07:23:21.392307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.048 [2024-11-20 07:23:21.392340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420
00:27:17.048 qpair failed and we were unable to recover it.
00:27:17.048 [2024-11-20 07:23:21.392544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.048 [2024-11-20 07:23:21.392577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420
00:27:17.048 qpair failed and we were unable to recover it.
00:27:17.048 [2024-11-20 07:23:21.392763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.048 [2024-11-20 07:23:21.392795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420
00:27:17.048 qpair failed and we were unable to recover it.
00:27:17.048 [2024-11-20 07:23:21.392912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.048 [2024-11-20 07:23:21.392944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420
00:27:17.048 qpair failed and we were unable to recover it.
00:27:17.048 [2024-11-20 07:23:21.393077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.048 [2024-11-20 07:23:21.393109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420
00:27:17.048 qpair failed and we were unable to recover it.
00:27:17.048 [2024-11-20 07:23:21.393297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.048 [2024-11-20 07:23:21.393331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420
00:27:17.048 qpair failed and we were unable to recover it.
00:27:17.048 [2024-11-20 07:23:21.393455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.048 [2024-11-20 07:23:21.393486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420
00:27:17.048 qpair failed and we were unable to recover it.
00:27:17.048 [2024-11-20 07:23:21.393724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.048 [2024-11-20 07:23:21.393757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420
00:27:17.048 qpair failed and we were unable to recover it.
00:27:17.048 [2024-11-20 07:23:21.393869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.048 [2024-11-20 07:23:21.393901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420
00:27:17.048 qpair failed and we were unable to recover it.
00:27:17.048 [2024-11-20 07:23:21.394175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.048 [2024-11-20 07:23:21.394210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420
00:27:17.048 qpair failed and we were unable to recover it.
00:27:17.048 [2024-11-20 07:23:21.394336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.048 [2024-11-20 07:23:21.394367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420
00:27:17.048 qpair failed and we were unable to recover it.
00:27:17.048 [2024-11-20 07:23:21.394605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.048 [2024-11-20 07:23:21.394637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420
00:27:17.048 qpair failed and we were unable to recover it.
00:27:17.048 [2024-11-20 07:23:21.394847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.048 [2024-11-20 07:23:21.394884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420
00:27:17.048 qpair failed and we were unable to recover it.
00:27:17.048 [2024-11-20 07:23:21.395148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.048 [2024-11-20 07:23:21.395182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420
00:27:17.048 qpair failed and we were unable to recover it.
00:27:17.048 [2024-11-20 07:23:21.395367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.048 [2024-11-20 07:23:21.395400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420
00:27:17.048 qpair failed and we were unable to recover it.
00:27:17.048 [2024-11-20 07:23:21.395538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.048 [2024-11-20 07:23:21.395570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420
00:27:17.048 qpair failed and we were unable to recover it.
00:27:17.048 [2024-11-20 07:23:21.395687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.048 [2024-11-20 07:23:21.395719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420
00:27:17.048 qpair failed and we were unable to recover it.
00:27:17.048 [2024-11-20 07:23:21.395892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.048 [2024-11-20 07:23:21.395925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420
00:27:17.048 qpair failed and we were unable to recover it.
00:27:17.048 [2024-11-20 07:23:21.396077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.048 [2024-11-20 07:23:21.396109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420
00:27:17.048 qpair failed and we were unable to recover it.
00:27:17.048 [2024-11-20 07:23:21.396228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.048 [2024-11-20 07:23:21.396261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420
00:27:17.048 qpair failed and we were unable to recover it.
00:27:17.048 [2024-11-20 07:23:21.396451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.048 [2024-11-20 07:23:21.396483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420
00:27:17.048 qpair failed and we were unable to recover it.
00:27:17.049 [2024-11-20 07:23:21.396669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.049 [2024-11-20 07:23:21.396701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420
00:27:17.049 qpair failed and we were unable to recover it.
00:27:17.049 [2024-11-20 07:23:21.396968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.049 [2024-11-20 07:23:21.397003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420
00:27:17.049 qpair failed and we were unable to recover it.
00:27:17.049 [2024-11-20 07:23:21.397196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.049 [2024-11-20 07:23:21.397228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420
00:27:17.049 qpair failed and we were unable to recover it.
00:27:17.049 [2024-11-20 07:23:21.397407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.049 [2024-11-20 07:23:21.397439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420
00:27:17.049 qpair failed and we were unable to recover it.
00:27:17.049 [2024-11-20 07:23:21.397678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.049 [2024-11-20 07:23:21.397710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420
00:27:17.049 qpair failed and we were unable to recover it.
00:27:17.049 [2024-11-20 07:23:21.397982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.049 [2024-11-20 07:23:21.398017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420
00:27:17.049 qpair failed and we were unable to recover it.
00:27:17.049 [2024-11-20 07:23:21.398213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.049 [2024-11-20 07:23:21.398245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420
00:27:17.049 qpair failed and we were unable to recover it.
00:27:17.049 [2024-11-20 07:23:21.398431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.049 [2024-11-20 07:23:21.398463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420
00:27:17.049 qpair failed and we were unable to recover it.
00:27:17.049 [2024-11-20 07:23:21.398594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.049 [2024-11-20 07:23:21.398627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420
00:27:17.049 qpair failed and we were unable to recover it.
00:27:17.049 [2024-11-20 07:23:21.398800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.049 [2024-11-20 07:23:21.398832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420
00:27:17.049 qpair failed and we were unable to recover it.
00:27:17.049 [2024-11-20 07:23:21.398997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.049 [2024-11-20 07:23:21.399030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420
00:27:17.049 qpair failed and we were unable to recover it.
00:27:17.049 [2024-11-20 07:23:21.399272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.049 [2024-11-20 07:23:21.399306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420
00:27:17.049 qpair failed and we were unable to recover it.
00:27:17.049 [2024-11-20 07:23:21.399495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.049 [2024-11-20 07:23:21.399528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420
00:27:17.049 qpair failed and we were unable to recover it.
00:27:17.049 [2024-11-20 07:23:21.399744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.049 [2024-11-20 07:23:21.399777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420
00:27:17.049 qpair failed and we were unable to recover it.
00:27:17.049 [2024-11-20 07:23:21.399978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.049 [2024-11-20 07:23:21.400012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420
00:27:17.049 qpair failed and we were unable to recover it.
00:27:17.049 [2024-11-20 07:23:21.400218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.049 [2024-11-20 07:23:21.400252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420
00:27:17.049 qpair failed and we were unable to recover it.
00:27:17.049 [2024-11-20 07:23:21.400383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.049 [2024-11-20 07:23:21.400415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420
00:27:17.049 qpair failed and we were unable to recover it.
00:27:17.049 [2024-11-20 07:23:21.400535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.049 [2024-11-20 07:23:21.400567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420
00:27:17.049 qpair failed and we were unable to recover it.
00:27:17.049 [2024-11-20 07:23:21.400790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.049 [2024-11-20 07:23:21.400861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:17.049 qpair failed and we were unable to recover it.
00:27:17.049 [2024-11-20 07:23:21.401017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.049 [2024-11-20 07:23:21.401054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:17.049 qpair failed and we were unable to recover it.
00:27:17.049 [2024-11-20 07:23:21.401304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.049 [2024-11-20 07:23:21.401338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:17.049 qpair failed and we were unable to recover it.
00:27:17.049 [2024-11-20 07:23:21.401560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.049 [2024-11-20 07:23:21.401592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:17.049 qpair failed and we were unable to recover it.
00:27:17.049 [2024-11-20 07:23:21.401835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.049 [2024-11-20 07:23:21.401867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:17.049 qpair failed and we were unable to recover it.
00:27:17.049 [2024-11-20 07:23:21.402059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.049 [2024-11-20 07:23:21.402093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:17.049 qpair failed and we were unable to recover it.
00:27:17.049 [2024-11-20 07:23:21.402222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.049 [2024-11-20 07:23:21.402253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:17.049 qpair failed and we were unable to recover it.
00:27:17.049 [2024-11-20 07:23:21.402427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.049 [2024-11-20 07:23:21.402459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:17.049 qpair failed and we were unable to recover it.
00:27:17.049 [2024-11-20 07:23:21.402583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.049 [2024-11-20 07:23:21.402615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:17.049 qpair failed and we were unable to recover it.
00:27:17.049 [2024-11-20 07:23:21.402810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.049 [2024-11-20 07:23:21.402841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:17.049 qpair failed and we were unable to recover it.
00:27:17.049 [2024-11-20 07:23:21.403045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.049 [2024-11-20 07:23:21.403078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:17.049 qpair failed and we were unable to recover it.
00:27:17.049 [2024-11-20 07:23:21.403272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.049 [2024-11-20 07:23:21.403304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:17.049 qpair failed and we were unable to recover it.
00:27:17.049 [2024-11-20 07:23:21.403508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.049 [2024-11-20 07:23:21.403539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:17.049 qpair failed and we were unable to recover it.
00:27:17.049 [2024-11-20 07:23:21.403780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.050 [2024-11-20 07:23:21.403811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:17.050 qpair failed and we were unable to recover it.
00:27:17.050 [2024-11-20 07:23:21.403963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.050 [2024-11-20 07:23:21.403997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:17.050 qpair failed and we were unable to recover it.
00:27:17.050 [2024-11-20 07:23:21.404119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.050 [2024-11-20 07:23:21.404151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:17.050 qpair failed and we were unable to recover it.
00:27:17.050 [2024-11-20 07:23:21.404334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.050 [2024-11-20 07:23:21.404365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:17.050 qpair failed and we were unable to recover it.
00:27:17.050 [2024-11-20 07:23:21.404551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.050 [2024-11-20 07:23:21.404584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:17.050 qpair failed and we were unable to recover it.
00:27:17.050 [2024-11-20 07:23:21.404841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.050 [2024-11-20 07:23:21.404873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:17.050 qpair failed and we were unable to recover it.
00:27:17.050 [2024-11-20 07:23:21.405042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.050 [2024-11-20 07:23:21.405076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:17.050 qpair failed and we were unable to recover it.
00:27:17.050 [2024-11-20 07:23:21.405325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.050 [2024-11-20 07:23:21.405356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:17.050 qpair failed and we were unable to recover it.
00:27:17.050 [2024-11-20 07:23:21.405478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.050 [2024-11-20 07:23:21.405509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:17.050 qpair failed and we were unable to recover it.
00:27:17.050 [2024-11-20 07:23:21.405702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.050 [2024-11-20 07:23:21.405735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:17.050 qpair failed and we were unable to recover it.
00:27:17.050 [2024-11-20 07:23:21.405979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.050 [2024-11-20 07:23:21.406011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:17.050 qpair failed and we were unable to recover it.
00:27:17.050 [2024-11-20 07:23:21.406250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.050 [2024-11-20 07:23:21.406283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:17.050 qpair failed and we were unable to recover it.
00:27:17.050 [2024-11-20 07:23:21.406471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.050 [2024-11-20 07:23:21.406503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:17.050 qpair failed and we were unable to recover it.
00:27:17.050 [2024-11-20 07:23:21.406689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.050 [2024-11-20 07:23:21.406721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:17.050 qpair failed and we were unable to recover it.
00:27:17.050 [2024-11-20 07:23:21.406972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.050 [2024-11-20 07:23:21.407005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:17.050 qpair failed and we were unable to recover it.
00:27:17.050 [2024-11-20 07:23:21.407119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.050 [2024-11-20 07:23:21.407150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:17.050 qpair failed and we were unable to recover it.
00:27:17.050 [2024-11-20 07:23:21.407392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.050 [2024-11-20 07:23:21.407423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:17.050 qpair failed and we were unable to recover it.
00:27:17.050 [2024-11-20 07:23:21.407598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.050 [2024-11-20 07:23:21.407629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:17.050 qpair failed and we were unable to recover it.
00:27:17.050 [2024-11-20 07:23:21.407751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.050 [2024-11-20 07:23:21.407783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:17.050 qpair failed and we were unable to recover it.
00:27:17.050 [2024-11-20 07:23:21.407912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.050 [2024-11-20 07:23:21.407943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:17.050 qpair failed and we were unable to recover it.
00:27:17.050 [2024-11-20 07:23:21.408123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.050 [2024-11-20 07:23:21.408155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:17.050 qpair failed and we were unable to recover it.
00:27:17.050 [2024-11-20 07:23:21.408280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.050 [2024-11-20 07:23:21.408313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:17.050 qpair failed and we were unable to recover it.
00:27:17.050 [2024-11-20 07:23:21.408504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.050 [2024-11-20 07:23:21.408534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:17.050 qpair failed and we were unable to recover it.
00:27:17.050 [2024-11-20 07:23:21.408772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.050 [2024-11-20 07:23:21.408804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:17.050 qpair failed and we were unable to recover it.
00:27:17.050 [2024-11-20 07:23:21.409013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.050 [2024-11-20 07:23:21.409047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:17.050 qpair failed and we were unable to recover it.
00:27:17.050 [2024-11-20 07:23:21.409286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.050 [2024-11-20 07:23:21.409319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:17.050 qpair failed and we were unable to recover it.
00:27:17.050 [2024-11-20 07:23:21.409609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.050 [2024-11-20 07:23:21.409642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:17.050 qpair failed and we were unable to recover it.
00:27:17.050 [2024-11-20 07:23:21.409884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.050 [2024-11-20 07:23:21.409921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:17.050 qpair failed and we were unable to recover it.
00:27:17.051 [2024-11-20 07:23:21.410061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.051 [2024-11-20 07:23:21.410093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:17.051 qpair failed and we were unable to recover it.
00:27:17.051 [2024-11-20 07:23:21.410223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.051 [2024-11-20 07:23:21.410254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:17.051 qpair failed and we were unable to recover it.
00:27:17.051 [2024-11-20 07:23:21.410440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.051 [2024-11-20 07:23:21.410471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:17.051 qpair failed and we were unable to recover it.
00:27:17.051 [2024-11-20 07:23:21.410712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.051 [2024-11-20 07:23:21.410744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:17.051 qpair failed and we were unable to recover it.
00:27:17.051 [2024-11-20 07:23:21.410917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.051 [2024-11-20 07:23:21.410962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:17.051 qpair failed and we were unable to recover it.
00:27:17.051 [2024-11-20 07:23:21.411136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.051 [2024-11-20 07:23:21.411168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:17.051 qpair failed and we were unable to recover it.
00:27:17.051 [2024-11-20 07:23:21.411439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.051 [2024-11-20 07:23:21.411471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:17.051 qpair failed and we were unable to recover it.
00:27:17.051 [2024-11-20 07:23:21.411680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.051 [2024-11-20 07:23:21.411712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:17.051 qpair failed and we were unable to recover it.
00:27:17.051 [2024-11-20 07:23:21.411823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.051 [2024-11-20 07:23:21.411854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:17.051 qpair failed and we were unable to recover it.
00:27:17.051 [2024-11-20 07:23:21.411999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.051 [2024-11-20 07:23:21.412033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:17.051 qpair failed and we were unable to recover it.
00:27:17.051 [2024-11-20 07:23:21.412141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.051 [2024-11-20 07:23:21.412174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:17.051 qpair failed and we were unable to recover it.
00:27:17.051 [2024-11-20 07:23:21.412299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.051 [2024-11-20 07:23:21.412330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:17.051 qpair failed and we were unable to recover it.
00:27:17.051 [2024-11-20 07:23:21.412512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.051 [2024-11-20 07:23:21.412544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:17.051 qpair failed and we were unable to recover it.
00:27:17.051 [2024-11-20 07:23:21.412820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.051 [2024-11-20 07:23:21.412852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.051 qpair failed and we were unable to recover it. 00:27:17.051 [2024-11-20 07:23:21.412971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.051 [2024-11-20 07:23:21.413004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.051 qpair failed and we were unable to recover it. 00:27:17.051 [2024-11-20 07:23:21.413188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.051 [2024-11-20 07:23:21.413219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.051 qpair failed and we were unable to recover it. 00:27:17.051 [2024-11-20 07:23:21.413337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.051 [2024-11-20 07:23:21.413369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.051 qpair failed and we were unable to recover it. 00:27:17.051 [2024-11-20 07:23:21.413617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.051 [2024-11-20 07:23:21.413648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.051 qpair failed and we were unable to recover it. 
00:27:17.051 [2024-11-20 07:23:21.413834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.051 [2024-11-20 07:23:21.413865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.051 qpair failed and we were unable to recover it. 00:27:17.051 [2024-11-20 07:23:21.413972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.051 [2024-11-20 07:23:21.414005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.051 qpair failed and we were unable to recover it. 00:27:17.051 [2024-11-20 07:23:21.414193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.051 [2024-11-20 07:23:21.414225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.051 qpair failed and we were unable to recover it. 00:27:17.051 [2024-11-20 07:23:21.414396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.051 [2024-11-20 07:23:21.414428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.051 qpair failed and we were unable to recover it. 00:27:17.051 [2024-11-20 07:23:21.414601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.051 [2024-11-20 07:23:21.414632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.051 qpair failed and we were unable to recover it. 
00:27:17.051 [2024-11-20 07:23:21.414747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.051 [2024-11-20 07:23:21.414778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.051 qpair failed and we were unable to recover it. 00:27:17.051 [2024-11-20 07:23:21.414912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.051 [2024-11-20 07:23:21.414943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.051 qpair failed and we were unable to recover it. 00:27:17.051 [2024-11-20 07:23:21.415070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.051 [2024-11-20 07:23:21.415103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.051 qpair failed and we were unable to recover it. 00:27:17.051 [2024-11-20 07:23:21.415348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.051 [2024-11-20 07:23:21.415379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.051 qpair failed and we were unable to recover it. 00:27:17.051 [2024-11-20 07:23:21.415560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.051 [2024-11-20 07:23:21.415592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.051 qpair failed and we were unable to recover it. 
00:27:17.051 [2024-11-20 07:23:21.415795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.051 [2024-11-20 07:23:21.415827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.051 qpair failed and we were unable to recover it. 00:27:17.051 [2024-11-20 07:23:21.416010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.051 [2024-11-20 07:23:21.416042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.051 qpair failed and we were unable to recover it. 00:27:17.051 [2024-11-20 07:23:21.416234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.051 [2024-11-20 07:23:21.416266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.051 qpair failed and we were unable to recover it. 00:27:17.051 [2024-11-20 07:23:21.416508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.052 [2024-11-20 07:23:21.416539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.052 qpair failed and we were unable to recover it. 00:27:17.052 [2024-11-20 07:23:21.416656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.052 [2024-11-20 07:23:21.416687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.052 qpair failed and we were unable to recover it. 
00:27:17.052 [2024-11-20 07:23:21.416817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.052 [2024-11-20 07:23:21.416847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.052 qpair failed and we were unable to recover it. 00:27:17.052 [2024-11-20 07:23:21.417088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.052 [2024-11-20 07:23:21.417120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.052 qpair failed and we were unable to recover it. 00:27:17.052 [2024-11-20 07:23:21.417293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.052 [2024-11-20 07:23:21.417331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.052 qpair failed and we were unable to recover it. 00:27:17.052 [2024-11-20 07:23:21.417522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.052 [2024-11-20 07:23:21.417553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.052 qpair failed and we were unable to recover it. 00:27:17.052 [2024-11-20 07:23:21.417676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.052 [2024-11-20 07:23:21.417708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.052 qpair failed and we were unable to recover it. 
00:27:17.052 [2024-11-20 07:23:21.417830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.052 [2024-11-20 07:23:21.417862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.052 qpair failed and we were unable to recover it. 00:27:17.052 [2024-11-20 07:23:21.418043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.052 [2024-11-20 07:23:21.418082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.052 qpair failed and we were unable to recover it. 00:27:17.052 [2024-11-20 07:23:21.418221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.052 [2024-11-20 07:23:21.418254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.052 qpair failed and we were unable to recover it. 00:27:17.052 [2024-11-20 07:23:21.418379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.052 [2024-11-20 07:23:21.418411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.052 qpair failed and we were unable to recover it. 00:27:17.052 [2024-11-20 07:23:21.418533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.052 [2024-11-20 07:23:21.418564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.052 qpair failed and we were unable to recover it. 
00:27:17.052 [2024-11-20 07:23:21.418765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.052 [2024-11-20 07:23:21.418797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.052 qpair failed and we were unable to recover it. 00:27:17.052 [2024-11-20 07:23:21.418985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.052 [2024-11-20 07:23:21.419018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.052 qpair failed and we were unable to recover it. 00:27:17.052 [2024-11-20 07:23:21.419256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.052 [2024-11-20 07:23:21.419288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.052 qpair failed and we were unable to recover it. 00:27:17.052 [2024-11-20 07:23:21.419465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.052 [2024-11-20 07:23:21.419498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.052 qpair failed and we were unable to recover it. 00:27:17.052 [2024-11-20 07:23:21.419677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.052 [2024-11-20 07:23:21.419709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.052 qpair failed and we were unable to recover it. 
00:27:17.052 [2024-11-20 07:23:21.419974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.052 [2024-11-20 07:23:21.420007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.052 qpair failed and we were unable to recover it. 00:27:17.052 [2024-11-20 07:23:21.420267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.052 [2024-11-20 07:23:21.420299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.052 qpair failed and we were unable to recover it. 00:27:17.052 [2024-11-20 07:23:21.420563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.052 [2024-11-20 07:23:21.420594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.052 qpair failed and we were unable to recover it. 00:27:17.052 [2024-11-20 07:23:21.420716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.052 [2024-11-20 07:23:21.420747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.052 qpair failed and we were unable to recover it. 00:27:17.052 [2024-11-20 07:23:21.420939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.052 [2024-11-20 07:23:21.420980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.052 qpair failed and we were unable to recover it. 
00:27:17.052 [2024-11-20 07:23:21.421107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.052 [2024-11-20 07:23:21.421139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.052 qpair failed and we were unable to recover it. 00:27:17.052 [2024-11-20 07:23:21.421400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.052 [2024-11-20 07:23:21.421431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.052 qpair failed and we were unable to recover it. 00:27:17.052 [2024-11-20 07:23:21.421683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.052 [2024-11-20 07:23:21.421715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.052 qpair failed and we were unable to recover it. 00:27:17.052 [2024-11-20 07:23:21.421848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.052 [2024-11-20 07:23:21.421880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.052 qpair failed and we were unable to recover it. 00:27:17.052 [2024-11-20 07:23:21.422088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.052 [2024-11-20 07:23:21.422121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.052 qpair failed and we were unable to recover it. 
00:27:17.052 [2024-11-20 07:23:21.422252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.052 [2024-11-20 07:23:21.422284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.052 qpair failed and we were unable to recover it. 00:27:17.052 [2024-11-20 07:23:21.422420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.052 [2024-11-20 07:23:21.422452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.052 qpair failed and we were unable to recover it. 00:27:17.052 [2024-11-20 07:23:21.422625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.052 [2024-11-20 07:23:21.422656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.052 qpair failed and we were unable to recover it. 00:27:17.052 [2024-11-20 07:23:21.422786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.052 [2024-11-20 07:23:21.422817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.052 qpair failed and we were unable to recover it. 00:27:17.052 [2024-11-20 07:23:21.422966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.052 [2024-11-20 07:23:21.423001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.052 qpair failed and we were unable to recover it. 
00:27:17.053 [2024-11-20 07:23:21.423109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.053 [2024-11-20 07:23:21.423141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.053 qpair failed and we were unable to recover it. 00:27:17.053 [2024-11-20 07:23:21.423319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.053 [2024-11-20 07:23:21.423350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.053 qpair failed and we were unable to recover it. 00:27:17.053 [2024-11-20 07:23:21.423547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.053 [2024-11-20 07:23:21.423579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.053 qpair failed and we were unable to recover it. 00:27:17.053 [2024-11-20 07:23:21.423723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.053 [2024-11-20 07:23:21.423755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.053 qpair failed and we were unable to recover it. 00:27:17.053 [2024-11-20 07:23:21.423877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.053 [2024-11-20 07:23:21.423909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.053 qpair failed and we were unable to recover it. 
00:27:17.053 [2024-11-20 07:23:21.424104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.053 [2024-11-20 07:23:21.424136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.053 qpair failed and we were unable to recover it. 00:27:17.053 [2024-11-20 07:23:21.424375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.053 [2024-11-20 07:23:21.424406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.053 qpair failed and we were unable to recover it. 00:27:17.053 [2024-11-20 07:23:21.424582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.053 [2024-11-20 07:23:21.424614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.053 qpair failed and we were unable to recover it. 00:27:17.053 [2024-11-20 07:23:21.424853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.053 [2024-11-20 07:23:21.424884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.053 qpair failed and we were unable to recover it. 00:27:17.053 [2024-11-20 07:23:21.425056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.053 [2024-11-20 07:23:21.425088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.053 qpair failed and we were unable to recover it. 
00:27:17.053 [2024-11-20 07:23:21.425326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.053 [2024-11-20 07:23:21.425357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.053 qpair failed and we were unable to recover it. 00:27:17.053 [2024-11-20 07:23:21.425549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.053 [2024-11-20 07:23:21.425581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.053 qpair failed and we were unable to recover it. 00:27:17.053 [2024-11-20 07:23:21.425752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.053 [2024-11-20 07:23:21.425784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.053 qpair failed and we were unable to recover it. 00:27:17.053 [2024-11-20 07:23:21.425969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.053 [2024-11-20 07:23:21.426002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.053 qpair failed and we were unable to recover it. 00:27:17.053 [2024-11-20 07:23:21.426194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.053 [2024-11-20 07:23:21.426224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.053 qpair failed and we were unable to recover it. 
00:27:17.053 [2024-11-20 07:23:21.426432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.053 [2024-11-20 07:23:21.426463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.053 qpair failed and we were unable to recover it. 00:27:17.053 [2024-11-20 07:23:21.426650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.053 [2024-11-20 07:23:21.426687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.053 qpair failed and we were unable to recover it. 00:27:17.053 [2024-11-20 07:23:21.426972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.053 [2024-11-20 07:23:21.427004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.053 qpair failed and we were unable to recover it. 00:27:17.053 [2024-11-20 07:23:21.427266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.053 [2024-11-20 07:23:21.427299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.053 qpair failed and we were unable to recover it. 00:27:17.053 [2024-11-20 07:23:21.427490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.053 [2024-11-20 07:23:21.427522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.053 qpair failed and we were unable to recover it. 
00:27:17.053 [2024-11-20 07:23:21.427699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.053 [2024-11-20 07:23:21.427731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.053 qpair failed and we were unable to recover it. 00:27:17.053 [2024-11-20 07:23:21.427990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.053 [2024-11-20 07:23:21.428023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.053 qpair failed and we were unable to recover it. 00:27:17.053 [2024-11-20 07:23:21.428196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.053 [2024-11-20 07:23:21.428228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.053 qpair failed and we were unable to recover it. 00:27:17.053 [2024-11-20 07:23:21.428494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.053 [2024-11-20 07:23:21.428525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.053 qpair failed and we were unable to recover it. 00:27:17.053 [2024-11-20 07:23:21.428819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.053 [2024-11-20 07:23:21.428850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.053 qpair failed and we were unable to recover it. 
00:27:17.053 [2024-11-20 07:23:21.429025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.053 [2024-11-20 07:23:21.429058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.053 qpair failed and we were unable to recover it. 00:27:17.053 [2024-11-20 07:23:21.429191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.053 [2024-11-20 07:23:21.429223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.053 qpair failed and we were unable to recover it. 00:27:17.053 [2024-11-20 07:23:21.429347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.053 [2024-11-20 07:23:21.429379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.053 qpair failed and we were unable to recover it. 00:27:17.053 [2024-11-20 07:23:21.429641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.053 [2024-11-20 07:23:21.429672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.053 qpair failed and we were unable to recover it. 00:27:17.053 [2024-11-20 07:23:21.429916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.053 [2024-11-20 07:23:21.429954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.053 qpair failed and we were unable to recover it. 
00:27:17.053 [2024-11-20 07:23:21.430158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.053 [2024-11-20 07:23:21.430190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.053 qpair failed and we were unable to recover it. 00:27:17.053 [2024-11-20 07:23:21.430307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.054 [2024-11-20 07:23:21.430339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.054 qpair failed and we were unable to recover it. 00:27:17.054 [2024-11-20 07:23:21.430480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.054 [2024-11-20 07:23:21.430512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.054 qpair failed and we were unable to recover it. 00:27:17.054 [2024-11-20 07:23:21.430628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.054 [2024-11-20 07:23:21.430660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.054 qpair failed and we were unable to recover it. 00:27:17.054 [2024-11-20 07:23:21.430901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.054 [2024-11-20 07:23:21.430932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.054 qpair failed and we were unable to recover it. 
00:27:17.054 [2024-11-20 07:23:21.431084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.054 [2024-11-20 07:23:21.431116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.054 qpair failed and we were unable to recover it. 
00:27:17.057 [... identical connect() failures (errno = 111) for tqpair=0x7f0e44000b90 against 10.0.0.2:4420 repeat continuously from 07:23:21.431 through 07:23:21.454; duplicate log entries elided ...]
00:27:17.057 [2024-11-20 07:23:21.454998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.057 [2024-11-20 07:23:21.455031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.057 qpair failed and we were unable to recover it. 00:27:17.057 [2024-11-20 07:23:21.455152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.057 [2024-11-20 07:23:21.455185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.057 qpair failed and we were unable to recover it. 00:27:17.057 [2024-11-20 07:23:21.455355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.057 [2024-11-20 07:23:21.455389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.058 qpair failed and we were unable to recover it. 00:27:17.058 [2024-11-20 07:23:21.455626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.058 [2024-11-20 07:23:21.455659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.058 qpair failed and we were unable to recover it. 00:27:17.058 [2024-11-20 07:23:21.455788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.058 [2024-11-20 07:23:21.455821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.058 qpair failed and we were unable to recover it. 
00:27:17.058 [2024-11-20 07:23:21.456072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.058 [2024-11-20 07:23:21.456106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.058 qpair failed and we were unable to recover it. 00:27:17.058 [2024-11-20 07:23:21.456291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.058 [2024-11-20 07:23:21.456323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.058 qpair failed and we were unable to recover it. 00:27:17.058 [2024-11-20 07:23:21.456444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.058 [2024-11-20 07:23:21.456475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.058 qpair failed and we were unable to recover it. 00:27:17.058 [2024-11-20 07:23:21.456718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.058 [2024-11-20 07:23:21.456750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.058 qpair failed and we were unable to recover it. 00:27:17.058 [2024-11-20 07:23:21.456945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.058 [2024-11-20 07:23:21.456985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.058 qpair failed and we were unable to recover it. 
00:27:17.058 [2024-11-20 07:23:21.457175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.058 [2024-11-20 07:23:21.457207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.058 qpair failed and we were unable to recover it. 00:27:17.058 [2024-11-20 07:23:21.457391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.058 [2024-11-20 07:23:21.457422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.058 qpair failed and we were unable to recover it. 00:27:17.058 [2024-11-20 07:23:21.457613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.058 [2024-11-20 07:23:21.457644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.058 qpair failed and we were unable to recover it. 00:27:17.058 [2024-11-20 07:23:21.457826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.058 [2024-11-20 07:23:21.457858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.058 qpair failed and we were unable to recover it. 00:27:17.058 [2024-11-20 07:23:21.457966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.058 [2024-11-20 07:23:21.457999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.058 qpair failed and we were unable to recover it. 
00:27:17.058 [2024-11-20 07:23:21.458102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.058 [2024-11-20 07:23:21.458133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.058 qpair failed and we were unable to recover it. 00:27:17.058 [2024-11-20 07:23:21.458263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.058 [2024-11-20 07:23:21.458295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.058 qpair failed and we were unable to recover it. 00:27:17.058 [2024-11-20 07:23:21.458397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.058 [2024-11-20 07:23:21.458428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.058 qpair failed and we were unable to recover it. 00:27:17.058 [2024-11-20 07:23:21.458555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.058 [2024-11-20 07:23:21.458588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.058 qpair failed and we were unable to recover it. 00:27:17.058 [2024-11-20 07:23:21.458856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.058 [2024-11-20 07:23:21.458888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.058 qpair failed and we were unable to recover it. 
00:27:17.058 [2024-11-20 07:23:21.459006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.058 [2024-11-20 07:23:21.459038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.058 qpair failed and we were unable to recover it. 00:27:17.058 [2024-11-20 07:23:21.459253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.058 [2024-11-20 07:23:21.459285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.058 qpair failed and we were unable to recover it. 00:27:17.058 [2024-11-20 07:23:21.459473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.058 [2024-11-20 07:23:21.459505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.058 qpair failed and we were unable to recover it. 00:27:17.058 [2024-11-20 07:23:21.459705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.058 [2024-11-20 07:23:21.459736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.058 qpair failed and we were unable to recover it. 00:27:17.058 [2024-11-20 07:23:21.459927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.058 [2024-11-20 07:23:21.459980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.058 qpair failed and we were unable to recover it. 
00:27:17.058 [2024-11-20 07:23:21.460172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.058 [2024-11-20 07:23:21.460204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.058 qpair failed and we were unable to recover it. 00:27:17.058 [2024-11-20 07:23:21.460385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.058 [2024-11-20 07:23:21.460423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.058 qpair failed and we were unable to recover it. 00:27:17.058 [2024-11-20 07:23:21.460614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.058 [2024-11-20 07:23:21.460645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.058 qpair failed and we were unable to recover it. 00:27:17.058 [2024-11-20 07:23:21.460830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.058 [2024-11-20 07:23:21.460862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.058 qpair failed and we were unable to recover it. 00:27:17.058 [2024-11-20 07:23:21.460978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.058 [2024-11-20 07:23:21.461011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.058 qpair failed and we were unable to recover it. 
00:27:17.058 [2024-11-20 07:23:21.461119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.058 [2024-11-20 07:23:21.461150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.058 qpair failed and we were unable to recover it. 00:27:17.058 [2024-11-20 07:23:21.461334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.058 [2024-11-20 07:23:21.461366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.058 qpair failed and we were unable to recover it. 00:27:17.058 [2024-11-20 07:23:21.461487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.058 [2024-11-20 07:23:21.461518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.058 qpair failed and we were unable to recover it. 00:27:17.058 [2024-11-20 07:23:21.461712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.058 [2024-11-20 07:23:21.461745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.058 qpair failed and we were unable to recover it. 00:27:17.058 [2024-11-20 07:23:21.461920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.059 [2024-11-20 07:23:21.461957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.059 qpair failed and we were unable to recover it. 
00:27:17.059 [2024-11-20 07:23:21.462077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.059 [2024-11-20 07:23:21.462108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.059 qpair failed and we were unable to recover it. 00:27:17.059 [2024-11-20 07:23:21.462395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.059 [2024-11-20 07:23:21.462427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.059 qpair failed and we were unable to recover it. 00:27:17.059 [2024-11-20 07:23:21.462672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.059 [2024-11-20 07:23:21.462703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.059 qpair failed and we were unable to recover it. 00:27:17.059 [2024-11-20 07:23:21.462887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.059 [2024-11-20 07:23:21.462919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.059 qpair failed and we were unable to recover it. 00:27:17.059 [2024-11-20 07:23:21.463055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.059 [2024-11-20 07:23:21.463087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.059 qpair failed and we were unable to recover it. 
00:27:17.059 [2024-11-20 07:23:21.463384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.059 [2024-11-20 07:23:21.463416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.059 qpair failed and we were unable to recover it. 00:27:17.059 [2024-11-20 07:23:21.463540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.059 [2024-11-20 07:23:21.463572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.059 qpair failed and we were unable to recover it. 00:27:17.059 [2024-11-20 07:23:21.463684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.059 [2024-11-20 07:23:21.463716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.059 qpair failed and we were unable to recover it. 00:27:17.059 [2024-11-20 07:23:21.463960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.059 [2024-11-20 07:23:21.463993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.059 qpair failed and we were unable to recover it. 00:27:17.059 [2024-11-20 07:23:21.464182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.059 [2024-11-20 07:23:21.464215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.059 qpair failed and we were unable to recover it. 
00:27:17.059 [2024-11-20 07:23:21.464336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.059 [2024-11-20 07:23:21.464368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.059 qpair failed and we were unable to recover it. 00:27:17.059 [2024-11-20 07:23:21.464486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.059 [2024-11-20 07:23:21.464518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.059 qpair failed and we were unable to recover it. 00:27:17.059 [2024-11-20 07:23:21.464694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.059 [2024-11-20 07:23:21.464725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.059 qpair failed and we were unable to recover it. 00:27:17.059 [2024-11-20 07:23:21.464851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.059 [2024-11-20 07:23:21.464883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.059 qpair failed and we were unable to recover it. 00:27:17.059 [2024-11-20 07:23:21.465002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.059 [2024-11-20 07:23:21.465034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.059 qpair failed and we were unable to recover it. 
00:27:17.059 [2024-11-20 07:23:21.465206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.059 [2024-11-20 07:23:21.465238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.059 qpair failed and we were unable to recover it. 00:27:17.059 [2024-11-20 07:23:21.465442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.059 [2024-11-20 07:23:21.465474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.059 qpair failed and we were unable to recover it. 00:27:17.059 [2024-11-20 07:23:21.465587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.059 [2024-11-20 07:23:21.465619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.059 qpair failed and we were unable to recover it. 00:27:17.059 [2024-11-20 07:23:21.465672] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1029af0 (9): Bad file descriptor 00:27:17.059 [2024-11-20 07:23:21.465933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.059 [2024-11-20 07:23:21.466021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.059 qpair failed and we were unable to recover it. 00:27:17.059 [2024-11-20 07:23:21.466332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.059 [2024-11-20 07:23:21.466367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.059 qpair failed and we were unable to recover it. 
00:27:17.059 [2024-11-20 07:23:21.466636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.059 [2024-11-20 07:23:21.466670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.059 qpair failed and we were unable to recover it. 00:27:17.059 [2024-11-20 07:23:21.466857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.059 [2024-11-20 07:23:21.466890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.059 qpair failed and we were unable to recover it. 00:27:17.059 [2024-11-20 07:23:21.467123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.059 [2024-11-20 07:23:21.467157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.059 qpair failed and we were unable to recover it. 00:27:17.059 [2024-11-20 07:23:21.467330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.059 [2024-11-20 07:23:21.467363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.059 qpair failed and we were unable to recover it. 00:27:17.059 [2024-11-20 07:23:21.467560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.059 [2024-11-20 07:23:21.467592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.059 qpair failed and we were unable to recover it. 
00:27:17.059 [2024-11-20 07:23:21.467776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.059 [2024-11-20 07:23:21.467807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.059 qpair failed and we were unable to recover it. 00:27:17.059 [2024-11-20 07:23:21.467923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.059 [2024-11-20 07:23:21.467967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.059 qpair failed and we were unable to recover it. 00:27:17.059 [2024-11-20 07:23:21.468235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.059 [2024-11-20 07:23:21.468266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.059 qpair failed and we were unable to recover it. 00:27:17.059 [2024-11-20 07:23:21.468510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.059 [2024-11-20 07:23:21.468541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.059 qpair failed and we were unable to recover it. 00:27:17.059 [2024-11-20 07:23:21.468721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.059 [2024-11-20 07:23:21.468754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.059 qpair failed and we were unable to recover it. 
00:27:17.059 [2024-11-20 07:23:21.468959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.060 [2024-11-20 07:23:21.468993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.060 qpair failed and we were unable to recover it. 00:27:17.060 [2024-11-20 07:23:21.469174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.060 [2024-11-20 07:23:21.469206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.060 qpair failed and we were unable to recover it. 00:27:17.060 [2024-11-20 07:23:21.469397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.060 [2024-11-20 07:23:21.469428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.060 qpair failed and we were unable to recover it. 00:27:17.060 [2024-11-20 07:23:21.469620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.060 [2024-11-20 07:23:21.469651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.060 qpair failed and we were unable to recover it. 00:27:17.060 [2024-11-20 07:23:21.469871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.060 [2024-11-20 07:23:21.469903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.060 qpair failed and we were unable to recover it. 
00:27:17.060 [2024-11-20 07:23:21.470090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.060 [2024-11-20 07:23:21.470123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.060 qpair failed and we were unable to recover it. 00:27:17.060 [2024-11-20 07:23:21.470277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.060 [2024-11-20 07:23:21.470309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.060 qpair failed and we were unable to recover it. 00:27:17.060 [2024-11-20 07:23:21.470484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.060 [2024-11-20 07:23:21.470515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.060 qpair failed and we were unable to recover it. 00:27:17.060 [2024-11-20 07:23:21.470685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.060 [2024-11-20 07:23:21.470718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.060 qpair failed and we were unable to recover it. 00:27:17.060 [2024-11-20 07:23:21.470963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.060 [2024-11-20 07:23:21.470997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.060 qpair failed and we were unable to recover it. 
00:27:17.060 [2024-11-20 07:23:21.471166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.060 [2024-11-20 07:23:21.471197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:17.060 qpair failed and we were unable to recover it.
00:27:17.063 [last 3 messages repeated 114 more times between 07:23:21.471 and 07:23:21.495, timestamps varying]
00:27:17.063 [2024-11-20 07:23:21.495520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.063 [2024-11-20 07:23:21.495551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.063 qpair failed and we were unable to recover it. 00:27:17.063 [2024-11-20 07:23:21.495663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.064 [2024-11-20 07:23:21.495693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.064 qpair failed and we were unable to recover it. 00:27:17.064 [2024-11-20 07:23:21.495909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.064 [2024-11-20 07:23:21.495940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.064 qpair failed and we were unable to recover it. 00:27:17.064 [2024-11-20 07:23:21.496165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.064 [2024-11-20 07:23:21.496197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.064 qpair failed and we were unable to recover it. 00:27:17.064 [2024-11-20 07:23:21.496371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.064 [2024-11-20 07:23:21.496402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.064 qpair failed and we were unable to recover it. 
00:27:17.064 [2024-11-20 07:23:21.496590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.064 [2024-11-20 07:23:21.496621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.064 qpair failed and we were unable to recover it. 00:27:17.064 [2024-11-20 07:23:21.496747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.064 [2024-11-20 07:23:21.496779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.064 qpair failed and we were unable to recover it. 00:27:17.064 [2024-11-20 07:23:21.497199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.064 [2024-11-20 07:23:21.497235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.064 qpair failed and we were unable to recover it. 00:27:17.064 [2024-11-20 07:23:21.497562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.064 [2024-11-20 07:23:21.497598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.064 qpair failed and we were unable to recover it. 00:27:17.064 [2024-11-20 07:23:21.497828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.064 [2024-11-20 07:23:21.497859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.064 qpair failed and we were unable to recover it. 
00:27:17.064 [2024-11-20 07:23:21.498075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.064 [2024-11-20 07:23:21.498108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.064 qpair failed and we were unable to recover it. 00:27:17.064 [2024-11-20 07:23:21.498251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.064 [2024-11-20 07:23:21.498282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.064 qpair failed and we were unable to recover it. 00:27:17.064 [2024-11-20 07:23:21.498497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.064 [2024-11-20 07:23:21.498530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.064 qpair failed and we were unable to recover it. 00:27:17.064 [2024-11-20 07:23:21.498657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.064 [2024-11-20 07:23:21.498688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.064 qpair failed and we were unable to recover it. 00:27:17.064 [2024-11-20 07:23:21.498862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.064 [2024-11-20 07:23:21.498894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.064 qpair failed and we were unable to recover it. 
00:27:17.064 [2024-11-20 07:23:21.499127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.064 [2024-11-20 07:23:21.499160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.064 qpair failed and we were unable to recover it. 00:27:17.064 [2024-11-20 07:23:21.499371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.064 [2024-11-20 07:23:21.499402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.064 qpair failed and we were unable to recover it. 00:27:17.064 [2024-11-20 07:23:21.499541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.064 [2024-11-20 07:23:21.499573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.064 qpair failed and we were unable to recover it. 00:27:17.064 [2024-11-20 07:23:21.499679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.064 [2024-11-20 07:23:21.499711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.064 qpair failed and we were unable to recover it. 00:27:17.064 [2024-11-20 07:23:21.499960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.064 [2024-11-20 07:23:21.499993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.064 qpair failed and we were unable to recover it. 
00:27:17.064 [2024-11-20 07:23:21.500102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.064 [2024-11-20 07:23:21.500134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.064 qpair failed and we were unable to recover it. 00:27:17.064 [2024-11-20 07:23:21.500309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.064 [2024-11-20 07:23:21.500340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.064 qpair failed and we were unable to recover it. 00:27:17.064 [2024-11-20 07:23:21.500548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.064 [2024-11-20 07:23:21.500579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.064 qpair failed and we were unable to recover it. 00:27:17.064 [2024-11-20 07:23:21.500845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.064 [2024-11-20 07:23:21.500876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.064 qpair failed and we were unable to recover it. 00:27:17.064 [2024-11-20 07:23:21.501063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.064 [2024-11-20 07:23:21.501096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.064 qpair failed and we were unable to recover it. 
00:27:17.064 [2024-11-20 07:23:21.501331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.064 [2024-11-20 07:23:21.501369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.064 qpair failed and we were unable to recover it. 00:27:17.064 [2024-11-20 07:23:21.501495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.064 [2024-11-20 07:23:21.501527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.064 qpair failed and we were unable to recover it. 00:27:17.064 [2024-11-20 07:23:21.501729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.064 [2024-11-20 07:23:21.501760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.064 qpair failed and we were unable to recover it. 00:27:17.064 [2024-11-20 07:23:21.501898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.064 [2024-11-20 07:23:21.501929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.064 qpair failed and we were unable to recover it. 00:27:17.064 [2024-11-20 07:23:21.502138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.064 [2024-11-20 07:23:21.502170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.064 qpair failed and we were unable to recover it. 
00:27:17.064 [2024-11-20 07:23:21.502430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.064 [2024-11-20 07:23:21.502462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.064 qpair failed and we were unable to recover it. 00:27:17.064 [2024-11-20 07:23:21.502662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.064 [2024-11-20 07:23:21.502693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.064 qpair failed and we were unable to recover it. 00:27:17.064 [2024-11-20 07:23:21.502825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.064 [2024-11-20 07:23:21.502856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.064 qpair failed and we were unable to recover it. 00:27:17.064 [2024-11-20 07:23:21.503038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.064 [2024-11-20 07:23:21.503071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.064 qpair failed and we were unable to recover it. 00:27:17.064 [2024-11-20 07:23:21.503310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.064 [2024-11-20 07:23:21.503342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.064 qpair failed and we were unable to recover it. 
00:27:17.064 [2024-11-20 07:23:21.503544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.064 [2024-11-20 07:23:21.503576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.064 qpair failed and we were unable to recover it. 00:27:17.065 [2024-11-20 07:23:21.503797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.065 [2024-11-20 07:23:21.503828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.065 qpair failed and we were unable to recover it. 00:27:17.065 [2024-11-20 07:23:21.503985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.065 [2024-11-20 07:23:21.504020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.065 qpair failed and we were unable to recover it. 00:27:17.065 [2024-11-20 07:23:21.504327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.065 [2024-11-20 07:23:21.504359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.065 qpair failed and we were unable to recover it. 00:27:17.065 [2024-11-20 07:23:21.504539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.065 [2024-11-20 07:23:21.504571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.065 qpair failed and we were unable to recover it. 
00:27:17.065 [2024-11-20 07:23:21.504762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.065 [2024-11-20 07:23:21.504793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.065 qpair failed and we were unable to recover it. 00:27:17.065 [2024-11-20 07:23:21.504975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.065 [2024-11-20 07:23:21.505008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.065 qpair failed and we were unable to recover it. 00:27:17.065 [2024-11-20 07:23:21.505201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.065 [2024-11-20 07:23:21.505233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.065 qpair failed and we were unable to recover it. 00:27:17.065 [2024-11-20 07:23:21.505404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.065 [2024-11-20 07:23:21.505435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.065 qpair failed and we were unable to recover it. 00:27:17.065 [2024-11-20 07:23:21.505618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.065 [2024-11-20 07:23:21.505650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.065 qpair failed and we were unable to recover it. 
00:27:17.065 [2024-11-20 07:23:21.505784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.065 [2024-11-20 07:23:21.505816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.065 qpair failed and we were unable to recover it. 00:27:17.065 [2024-11-20 07:23:21.506009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.065 [2024-11-20 07:23:21.506042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.065 qpair failed and we were unable to recover it. 00:27:17.065 [2024-11-20 07:23:21.506299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.065 [2024-11-20 07:23:21.506330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.065 qpair failed and we were unable to recover it. 00:27:17.065 [2024-11-20 07:23:21.506595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.065 [2024-11-20 07:23:21.506627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.065 qpair failed and we were unable to recover it. 00:27:17.065 [2024-11-20 07:23:21.506813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.065 [2024-11-20 07:23:21.506844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.065 qpair failed and we were unable to recover it. 
00:27:17.065 [2024-11-20 07:23:21.507066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.065 [2024-11-20 07:23:21.507099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.065 qpair failed and we were unable to recover it. 00:27:17.065 [2024-11-20 07:23:21.507234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.065 [2024-11-20 07:23:21.507266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.065 qpair failed and we were unable to recover it. 00:27:17.065 [2024-11-20 07:23:21.507475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.065 [2024-11-20 07:23:21.507512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.065 qpair failed and we were unable to recover it. 00:27:17.065 [2024-11-20 07:23:21.507694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.065 [2024-11-20 07:23:21.507725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.065 qpair failed and we were unable to recover it. 00:27:17.065 [2024-11-20 07:23:21.507861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.065 [2024-11-20 07:23:21.507893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.065 qpair failed and we were unable to recover it. 
00:27:17.065 [2024-11-20 07:23:21.508085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.065 [2024-11-20 07:23:21.508117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.065 qpair failed and we were unable to recover it. 00:27:17.065 [2024-11-20 07:23:21.508382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.065 [2024-11-20 07:23:21.508413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.065 qpair failed and we were unable to recover it. 00:27:17.065 [2024-11-20 07:23:21.508622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.065 [2024-11-20 07:23:21.508653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.065 qpair failed and we were unable to recover it. 00:27:17.065 [2024-11-20 07:23:21.508756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.065 [2024-11-20 07:23:21.508787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.065 qpair failed and we were unable to recover it. 00:27:17.065 [2024-11-20 07:23:21.508907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.065 [2024-11-20 07:23:21.508938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.065 qpair failed and we were unable to recover it. 
00:27:17.065 [2024-11-20 07:23:21.509135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.065 [2024-11-20 07:23:21.509167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.065 qpair failed and we were unable to recover it. 00:27:17.065 [2024-11-20 07:23:21.509289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.065 [2024-11-20 07:23:21.509319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.065 qpair failed and we were unable to recover it. 00:27:17.065 [2024-11-20 07:23:21.509495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.065 [2024-11-20 07:23:21.509526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.065 qpair failed and we were unable to recover it. 00:27:17.065 [2024-11-20 07:23:21.509723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.065 [2024-11-20 07:23:21.509753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.065 qpair failed and we were unable to recover it. 00:27:17.065 [2024-11-20 07:23:21.509973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.065 [2024-11-20 07:23:21.510006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.065 qpair failed and we were unable to recover it. 
00:27:17.065 [2024-11-20 07:23:21.510179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.065 [2024-11-20 07:23:21.510211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.065 qpair failed and we were unable to recover it. 00:27:17.065 [2024-11-20 07:23:21.510387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.065 [2024-11-20 07:23:21.510418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.065 qpair failed and we were unable to recover it. 00:27:17.065 [2024-11-20 07:23:21.510627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.065 [2024-11-20 07:23:21.510658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.065 qpair failed and we were unable to recover it. 00:27:17.065 [2024-11-20 07:23:21.510778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.065 [2024-11-20 07:23:21.510810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.065 qpair failed and we were unable to recover it. 00:27:17.065 [2024-11-20 07:23:21.511072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.065 [2024-11-20 07:23:21.511105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.065 qpair failed and we were unable to recover it. 
00:27:17.065 [2024-11-20 07:23:21.511278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.066 [2024-11-20 07:23:21.511310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.066 qpair failed and we were unable to recover it. 00:27:17.066 [2024-11-20 07:23:21.511419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.066 [2024-11-20 07:23:21.511450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.066 qpair failed and we were unable to recover it. 00:27:17.066 [2024-11-20 07:23:21.511708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.066 [2024-11-20 07:23:21.511739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.066 qpair failed and we were unable to recover it. 00:27:17.066 [2024-11-20 07:23:21.511929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.066 [2024-11-20 07:23:21.511981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.066 qpair failed and we were unable to recover it. 00:27:17.066 [2024-11-20 07:23:21.512116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.066 [2024-11-20 07:23:21.512148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.066 qpair failed and we were unable to recover it. 
00:27:17.066 [2024-11-20 07:23:21.512355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.066 [2024-11-20 07:23:21.512387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.066 qpair failed and we were unable to recover it. 00:27:17.066 [2024-11-20 07:23:21.512561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.066 [2024-11-20 07:23:21.512591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.066 qpair failed and we were unable to recover it. 00:27:17.066 [2024-11-20 07:23:21.512708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.066 [2024-11-20 07:23:21.512739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.066 qpair failed and we were unable to recover it. 00:27:17.066 [2024-11-20 07:23:21.512915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.066 [2024-11-20 07:23:21.512956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.066 qpair failed and we were unable to recover it. 00:27:17.066 [2024-11-20 07:23:21.513148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.066 [2024-11-20 07:23:21.513179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.066 qpair failed and we were unable to recover it. 
00:27:17.069 [2024-11-20 07:23:21.536986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.069 [2024-11-20 07:23:21.537019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.069 qpair failed and we were unable to recover it. 00:27:17.069 [2024-11-20 07:23:21.537142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.069 [2024-11-20 07:23:21.537174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.069 qpair failed and we were unable to recover it. 00:27:17.069 [2024-11-20 07:23:21.537289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.069 [2024-11-20 07:23:21.537321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.069 qpair failed and we were unable to recover it. 00:27:17.069 [2024-11-20 07:23:21.537440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.069 [2024-11-20 07:23:21.537472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.069 qpair failed and we were unable to recover it. 00:27:17.069 [2024-11-20 07:23:21.537580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.069 [2024-11-20 07:23:21.537611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.069 qpair failed and we were unable to recover it. 
00:27:17.069 [2024-11-20 07:23:21.537717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.069 [2024-11-20 07:23:21.537748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.069 qpair failed and we were unable to recover it. 00:27:17.069 [2024-11-20 07:23:21.537989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.069 [2024-11-20 07:23:21.538023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.069 qpair failed and we were unable to recover it. 00:27:17.069 [2024-11-20 07:23:21.538192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.069 [2024-11-20 07:23:21.538223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.069 qpair failed and we were unable to recover it. 00:27:17.069 [2024-11-20 07:23:21.538360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.069 [2024-11-20 07:23:21.538392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.069 qpair failed and we were unable to recover it. 00:27:17.069 [2024-11-20 07:23:21.538564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.069 [2024-11-20 07:23:21.538595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.069 qpair failed and we were unable to recover it. 
00:27:17.069 [2024-11-20 07:23:21.538855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.069 [2024-11-20 07:23:21.538888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.069 qpair failed and we were unable to recover it. 00:27:17.069 [2024-11-20 07:23:21.539083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.069 [2024-11-20 07:23:21.539115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.069 qpair failed and we were unable to recover it. 00:27:17.069 [2024-11-20 07:23:21.539307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.069 [2024-11-20 07:23:21.539339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.069 qpair failed and we were unable to recover it. 00:27:17.069 [2024-11-20 07:23:21.539481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.069 [2024-11-20 07:23:21.539512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.069 qpair failed and we were unable to recover it. 00:27:17.069 [2024-11-20 07:23:21.539683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.069 [2024-11-20 07:23:21.539713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.069 qpair failed and we were unable to recover it. 
00:27:17.069 [2024-11-20 07:23:21.539897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.069 [2024-11-20 07:23:21.539929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.069 qpair failed and we were unable to recover it. 00:27:17.069 [2024-11-20 07:23:21.540130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.069 [2024-11-20 07:23:21.540163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.069 qpair failed and we were unable to recover it. 00:27:17.069 [2024-11-20 07:23:21.540405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.069 [2024-11-20 07:23:21.540437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.069 qpair failed and we were unable to recover it. 00:27:17.069 [2024-11-20 07:23:21.540609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.069 [2024-11-20 07:23:21.540641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.069 qpair failed and we were unable to recover it. 00:27:17.069 [2024-11-20 07:23:21.540879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.069 [2024-11-20 07:23:21.540912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.069 qpair failed and we were unable to recover it. 
00:27:17.069 [2024-11-20 07:23:21.541092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.069 [2024-11-20 07:23:21.541125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.069 qpair failed and we were unable to recover it. 00:27:17.069 [2024-11-20 07:23:21.541227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.069 [2024-11-20 07:23:21.541259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.069 qpair failed and we were unable to recover it. 00:27:17.069 [2024-11-20 07:23:21.541449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.069 [2024-11-20 07:23:21.541481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.069 qpair failed and we were unable to recover it. 00:27:17.069 [2024-11-20 07:23:21.541656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.069 [2024-11-20 07:23:21.541687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.069 qpair failed and we were unable to recover it. 00:27:17.069 [2024-11-20 07:23:21.541863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.069 [2024-11-20 07:23:21.541894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.069 qpair failed and we were unable to recover it. 
00:27:17.070 [2024-11-20 07:23:21.542134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.070 [2024-11-20 07:23:21.542168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.070 qpair failed and we were unable to recover it. 00:27:17.070 [2024-11-20 07:23:21.542457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.070 [2024-11-20 07:23:21.542489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.070 qpair failed and we were unable to recover it. 00:27:17.070 [2024-11-20 07:23:21.542601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.070 [2024-11-20 07:23:21.542632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.070 qpair failed and we were unable to recover it. 00:27:17.070 [2024-11-20 07:23:21.542881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.070 [2024-11-20 07:23:21.542914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.070 qpair failed and we were unable to recover it. 00:27:17.070 [2024-11-20 07:23:21.543191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.070 [2024-11-20 07:23:21.543225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.070 qpair failed and we were unable to recover it. 
00:27:17.070 [2024-11-20 07:23:21.543396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.070 [2024-11-20 07:23:21.543428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.070 qpair failed and we were unable to recover it. 00:27:17.070 [2024-11-20 07:23:21.543592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.070 [2024-11-20 07:23:21.543626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.070 qpair failed and we were unable to recover it. 00:27:17.070 [2024-11-20 07:23:21.543746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.070 [2024-11-20 07:23:21.543778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.070 qpair failed and we were unable to recover it. 00:27:17.070 [2024-11-20 07:23:21.543963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.070 [2024-11-20 07:23:21.543996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.070 qpair failed and we were unable to recover it. 00:27:17.070 [2024-11-20 07:23:21.544115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.070 [2024-11-20 07:23:21.544147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.070 qpair failed and we were unable to recover it. 
00:27:17.070 [2024-11-20 07:23:21.544280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.070 [2024-11-20 07:23:21.544312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.070 qpair failed and we were unable to recover it. 00:27:17.070 [2024-11-20 07:23:21.544486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.070 [2024-11-20 07:23:21.544518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.070 qpair failed and we were unable to recover it. 00:27:17.070 [2024-11-20 07:23:21.544709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.070 [2024-11-20 07:23:21.544742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.070 qpair failed and we were unable to recover it. 00:27:17.070 [2024-11-20 07:23:21.544857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.070 [2024-11-20 07:23:21.544895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.070 qpair failed and we were unable to recover it. 00:27:17.070 [2024-11-20 07:23:21.545133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.070 [2024-11-20 07:23:21.545168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.070 qpair failed and we were unable to recover it. 
00:27:17.070 [2024-11-20 07:23:21.545381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.070 [2024-11-20 07:23:21.545413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.070 qpair failed and we were unable to recover it. 00:27:17.070 [2024-11-20 07:23:21.545603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.070 [2024-11-20 07:23:21.545634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.070 qpair failed and we were unable to recover it. 00:27:17.070 [2024-11-20 07:23:21.545896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.070 [2024-11-20 07:23:21.545930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.070 qpair failed and we were unable to recover it. 00:27:17.070 [2024-11-20 07:23:21.546146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.070 [2024-11-20 07:23:21.546180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.070 qpair failed and we were unable to recover it. 00:27:17.070 [2024-11-20 07:23:21.546421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.070 [2024-11-20 07:23:21.546452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.070 qpair failed and we were unable to recover it. 
00:27:17.070 [2024-11-20 07:23:21.546641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.070 [2024-11-20 07:23:21.546673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.070 qpair failed and we were unable to recover it. 00:27:17.070 [2024-11-20 07:23:21.546812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.070 [2024-11-20 07:23:21.546844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.070 qpair failed and we were unable to recover it. 00:27:17.070 [2024-11-20 07:23:21.547032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.070 [2024-11-20 07:23:21.547067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.070 qpair failed and we were unable to recover it. 00:27:17.070 [2024-11-20 07:23:21.547244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.070 [2024-11-20 07:23:21.547279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.070 qpair failed and we were unable to recover it. 00:27:17.070 [2024-11-20 07:23:21.547460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.070 [2024-11-20 07:23:21.547492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.070 qpair failed and we were unable to recover it. 
00:27:17.070 [2024-11-20 07:23:21.547758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.070 [2024-11-20 07:23:21.547789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.070 qpair failed and we were unable to recover it. 00:27:17.359 [2024-11-20 07:23:21.547908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.359 [2024-11-20 07:23:21.547939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.359 qpair failed and we were unable to recover it. 00:27:17.359 [2024-11-20 07:23:21.548141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.359 [2024-11-20 07:23:21.548180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.359 qpair failed and we were unable to recover it. 00:27:17.359 [2024-11-20 07:23:21.548379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.359 [2024-11-20 07:23:21.548411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.359 qpair failed and we were unable to recover it. 00:27:17.359 [2024-11-20 07:23:21.548649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.359 [2024-11-20 07:23:21.548681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.359 qpair failed and we were unable to recover it. 
00:27:17.359 [2024-11-20 07:23:21.548960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.359 [2024-11-20 07:23:21.548995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.359 qpair failed and we were unable to recover it. 00:27:17.359 [2024-11-20 07:23:21.549256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.359 [2024-11-20 07:23:21.549291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.359 qpair failed and we were unable to recover it. 00:27:17.359 [2024-11-20 07:23:21.549508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.359 [2024-11-20 07:23:21.549546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.359 qpair failed and we were unable to recover it. 00:27:17.359 [2024-11-20 07:23:21.549738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.359 [2024-11-20 07:23:21.549772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.360 qpair failed and we were unable to recover it. 00:27:17.360 [2024-11-20 07:23:21.549995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.360 [2024-11-20 07:23:21.550032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.360 qpair failed and we were unable to recover it. 
00:27:17.360 [2024-11-20 07:23:21.550157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.360 [2024-11-20 07:23:21.550190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.360 qpair failed and we were unable to recover it. 00:27:17.360 [2024-11-20 07:23:21.550448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.360 [2024-11-20 07:23:21.550483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.360 qpair failed and we were unable to recover it. 00:27:17.360 [2024-11-20 07:23:21.550692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.360 [2024-11-20 07:23:21.550727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.360 qpair failed and we were unable to recover it. 00:27:17.360 [2024-11-20 07:23:21.550844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.360 [2024-11-20 07:23:21.550876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.360 qpair failed and we were unable to recover it. 00:27:17.360 [2024-11-20 07:23:21.551117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.360 [2024-11-20 07:23:21.551153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.360 qpair failed and we were unable to recover it. 
00:27:17.360 [2024-11-20 07:23:21.551351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.360 [2024-11-20 07:23:21.551384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.360 qpair failed and we were unable to recover it. 00:27:17.360 [2024-11-20 07:23:21.551568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.360 [2024-11-20 07:23:21.551602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.360 qpair failed and we were unable to recover it. 00:27:17.360 [2024-11-20 07:23:21.551719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.360 [2024-11-20 07:23:21.551753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.360 qpair failed and we were unable to recover it. 00:27:17.360 [2024-11-20 07:23:21.551960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.360 [2024-11-20 07:23:21.551994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.360 qpair failed and we were unable to recover it. 00:27:17.360 [2024-11-20 07:23:21.552240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.360 [2024-11-20 07:23:21.552274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.360 qpair failed and we were unable to recover it. 
00:27:17.360 [2024-11-20 07:23:21.552456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.360 [2024-11-20 07:23:21.552489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.360 qpair failed and we were unable to recover it. 00:27:17.360 [2024-11-20 07:23:21.552675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.360 [2024-11-20 07:23:21.552706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.360 qpair failed and we were unable to recover it. 00:27:17.360 [2024-11-20 07:23:21.552977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.360 [2024-11-20 07:23:21.553012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.360 qpair failed and we were unable to recover it. 00:27:17.360 [2024-11-20 07:23:21.553255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.360 [2024-11-20 07:23:21.553287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.360 qpair failed and we were unable to recover it. 00:27:17.360 [2024-11-20 07:23:21.553480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.360 [2024-11-20 07:23:21.553515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.360 qpair failed and we were unable to recover it. 
00:27:17.360 [2024-11-20 07:23:21.553705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.360 [2024-11-20 07:23:21.553737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:17.360 qpair failed and we were unable to recover it.
[last message repeated for each reconnect attempt from 07:23:21.553940 through 07:23:21.579379: identical connect() failure (errno = 111, ECONNREFUSED) and sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420]
00:27:17.364 [2024-11-20 07:23:21.579618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.364 [2024-11-20 07:23:21.579649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.364 qpair failed and we were unable to recover it. 00:27:17.364 [2024-11-20 07:23:21.579829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.364 [2024-11-20 07:23:21.579861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.364 qpair failed and we were unable to recover it. 00:27:17.364 [2024-11-20 07:23:21.580048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.364 [2024-11-20 07:23:21.580082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.364 qpair failed and we were unable to recover it. 00:27:17.364 [2024-11-20 07:23:21.580258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.364 [2024-11-20 07:23:21.580290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.364 qpair failed and we were unable to recover it. 00:27:17.364 [2024-11-20 07:23:21.580586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.364 [2024-11-20 07:23:21.580618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.364 qpair failed and we were unable to recover it. 
00:27:17.364 [2024-11-20 07:23:21.580803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.364 [2024-11-20 07:23:21.580835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.364 qpair failed and we were unable to recover it. 00:27:17.364 [2024-11-20 07:23:21.581146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.364 [2024-11-20 07:23:21.581178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.364 qpair failed and we were unable to recover it. 00:27:17.364 [2024-11-20 07:23:21.581355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.364 [2024-11-20 07:23:21.581387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.364 qpair failed and we were unable to recover it. 00:27:17.364 [2024-11-20 07:23:21.581568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.364 [2024-11-20 07:23:21.581599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.364 qpair failed and we were unable to recover it. 00:27:17.364 [2024-11-20 07:23:21.581862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.364 [2024-11-20 07:23:21.581893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.364 qpair failed and we were unable to recover it. 
00:27:17.364 [2024-11-20 07:23:21.582171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.364 [2024-11-20 07:23:21.582204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.364 qpair failed and we were unable to recover it. 00:27:17.364 [2024-11-20 07:23:21.582465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.364 [2024-11-20 07:23:21.582503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.364 qpair failed and we were unable to recover it. 00:27:17.364 [2024-11-20 07:23:21.582621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.364 [2024-11-20 07:23:21.582655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.364 qpair failed and we were unable to recover it. 00:27:17.364 [2024-11-20 07:23:21.582895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.364 [2024-11-20 07:23:21.582927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.364 qpair failed and we were unable to recover it. 00:27:17.364 [2024-11-20 07:23:21.583205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.364 [2024-11-20 07:23:21.583236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.364 qpair failed and we were unable to recover it. 
00:27:17.364 [2024-11-20 07:23:21.583414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.364 [2024-11-20 07:23:21.583447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.364 qpair failed and we were unable to recover it. 00:27:17.364 [2024-11-20 07:23:21.583640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.364 [2024-11-20 07:23:21.583671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.364 qpair failed and we were unable to recover it. 00:27:17.364 [2024-11-20 07:23:21.583925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.364 [2024-11-20 07:23:21.583967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.364 qpair failed and we were unable to recover it. 00:27:17.364 [2024-11-20 07:23:21.584236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.364 [2024-11-20 07:23:21.584267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.364 qpair failed and we were unable to recover it. 00:27:17.364 [2024-11-20 07:23:21.584393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.364 [2024-11-20 07:23:21.584424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.364 qpair failed and we were unable to recover it. 
00:27:17.364 [2024-11-20 07:23:21.584540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.364 [2024-11-20 07:23:21.584573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.364 qpair failed and we were unable to recover it. 00:27:17.364 [2024-11-20 07:23:21.584762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.364 [2024-11-20 07:23:21.584794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.364 qpair failed and we were unable to recover it. 00:27:17.364 [2024-11-20 07:23:21.585055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.364 [2024-11-20 07:23:21.585089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.364 qpair failed and we were unable to recover it. 00:27:17.364 [2024-11-20 07:23:21.585207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.364 [2024-11-20 07:23:21.585238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.364 qpair failed and we were unable to recover it. 00:27:17.364 [2024-11-20 07:23:21.585437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.364 [2024-11-20 07:23:21.585469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.364 qpair failed and we were unable to recover it. 
00:27:17.364 [2024-11-20 07:23:21.585592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.364 [2024-11-20 07:23:21.585625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.364 qpair failed and we were unable to recover it. 00:27:17.364 [2024-11-20 07:23:21.585757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.364 [2024-11-20 07:23:21.585787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.364 qpair failed and we were unable to recover it. 00:27:17.364 [2024-11-20 07:23:21.586054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.365 [2024-11-20 07:23:21.586088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.365 qpair failed and we were unable to recover it. 00:27:17.365 [2024-11-20 07:23:21.586275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.365 [2024-11-20 07:23:21.586307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.365 qpair failed and we were unable to recover it. 00:27:17.365 [2024-11-20 07:23:21.586495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.365 [2024-11-20 07:23:21.586527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.365 qpair failed and we were unable to recover it. 
00:27:17.365 [2024-11-20 07:23:21.586765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.365 [2024-11-20 07:23:21.586797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.365 qpair failed and we were unable to recover it. 00:27:17.365 [2024-11-20 07:23:21.586969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.365 [2024-11-20 07:23:21.587001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.365 qpair failed and we were unable to recover it. 00:27:17.365 [2024-11-20 07:23:21.587242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.365 [2024-11-20 07:23:21.587275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.365 qpair failed and we were unable to recover it. 00:27:17.365 [2024-11-20 07:23:21.587457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.365 [2024-11-20 07:23:21.587489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.365 qpair failed and we were unable to recover it. 00:27:17.365 [2024-11-20 07:23:21.587760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.365 [2024-11-20 07:23:21.587791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.365 qpair failed and we were unable to recover it. 
00:27:17.365 [2024-11-20 07:23:21.588054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.365 [2024-11-20 07:23:21.588087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.365 qpair failed and we were unable to recover it. 00:27:17.365 [2024-11-20 07:23:21.588277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.365 [2024-11-20 07:23:21.588308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.365 qpair failed and we were unable to recover it. 00:27:17.365 [2024-11-20 07:23:21.588496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.365 [2024-11-20 07:23:21.588527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.365 qpair failed and we were unable to recover it. 00:27:17.365 [2024-11-20 07:23:21.588713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.365 [2024-11-20 07:23:21.588744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.365 qpair failed and we were unable to recover it. 00:27:17.365 [2024-11-20 07:23:21.588922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.365 [2024-11-20 07:23:21.588973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.365 qpair failed and we were unable to recover it. 
00:27:17.365 [2024-11-20 07:23:21.589183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.365 [2024-11-20 07:23:21.589215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.365 qpair failed and we were unable to recover it. 00:27:17.365 [2024-11-20 07:23:21.589352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.365 [2024-11-20 07:23:21.589383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.365 qpair failed and we were unable to recover it. 00:27:17.365 [2024-11-20 07:23:21.589484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.365 [2024-11-20 07:23:21.589515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.365 qpair failed and we were unable to recover it. 00:27:17.365 [2024-11-20 07:23:21.589702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.365 [2024-11-20 07:23:21.589733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.365 qpair failed and we were unable to recover it. 00:27:17.365 [2024-11-20 07:23:21.589847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.365 [2024-11-20 07:23:21.589878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.365 qpair failed and we were unable to recover it. 
00:27:17.365 [2024-11-20 07:23:21.590145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.365 [2024-11-20 07:23:21.590179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.365 qpair failed and we were unable to recover it. 00:27:17.365 [2024-11-20 07:23:21.590424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.365 [2024-11-20 07:23:21.590455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.365 qpair failed and we were unable to recover it. 00:27:17.365 [2024-11-20 07:23:21.590724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.365 [2024-11-20 07:23:21.590755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.365 qpair failed and we were unable to recover it. 00:27:17.365 [2024-11-20 07:23:21.591006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.365 [2024-11-20 07:23:21.591039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.365 qpair failed and we were unable to recover it. 00:27:17.365 [2024-11-20 07:23:21.591214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.365 [2024-11-20 07:23:21.591245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.365 qpair failed and we were unable to recover it. 
00:27:17.365 [2024-11-20 07:23:21.591358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.365 [2024-11-20 07:23:21.591389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.365 qpair failed and we were unable to recover it. 00:27:17.365 [2024-11-20 07:23:21.591563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.365 [2024-11-20 07:23:21.591594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.365 qpair failed and we were unable to recover it. 00:27:17.365 [2024-11-20 07:23:21.591765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.365 [2024-11-20 07:23:21.591802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.365 qpair failed and we were unable to recover it. 00:27:17.365 [2024-11-20 07:23:21.592048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.365 [2024-11-20 07:23:21.592080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.365 qpair failed and we were unable to recover it. 00:27:17.365 [2024-11-20 07:23:21.592203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.365 [2024-11-20 07:23:21.592234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.366 qpair failed and we were unable to recover it. 
00:27:17.366 [2024-11-20 07:23:21.592474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.366 [2024-11-20 07:23:21.592505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.366 qpair failed and we were unable to recover it. 00:27:17.366 [2024-11-20 07:23:21.592710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.366 [2024-11-20 07:23:21.592741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.366 qpair failed and we were unable to recover it. 00:27:17.366 [2024-11-20 07:23:21.592878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.366 [2024-11-20 07:23:21.592909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.366 qpair failed and we were unable to recover it. 00:27:17.366 [2024-11-20 07:23:21.593020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.366 [2024-11-20 07:23:21.593051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.366 qpair failed and we were unable to recover it. 00:27:17.366 [2024-11-20 07:23:21.593273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.366 [2024-11-20 07:23:21.593305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.366 qpair failed and we were unable to recover it. 
00:27:17.366 [2024-11-20 07:23:21.593508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.366 [2024-11-20 07:23:21.593538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.366 qpair failed and we were unable to recover it. 00:27:17.366 [2024-11-20 07:23:21.593667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.366 [2024-11-20 07:23:21.593697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.366 qpair failed and we were unable to recover it. 00:27:17.366 [2024-11-20 07:23:21.593886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.366 [2024-11-20 07:23:21.593918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.366 qpair failed and we were unable to recover it. 00:27:17.366 [2024-11-20 07:23:21.594145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.366 [2024-11-20 07:23:21.594176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.366 qpair failed and we were unable to recover it. 00:27:17.366 [2024-11-20 07:23:21.594290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.366 [2024-11-20 07:23:21.594322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.366 qpair failed and we were unable to recover it. 
00:27:17.366 [2024-11-20 07:23:21.594504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.366 [2024-11-20 07:23:21.594535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.366 qpair failed and we were unable to recover it. 00:27:17.366 [2024-11-20 07:23:21.594731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.366 [2024-11-20 07:23:21.594763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.366 qpair failed and we were unable to recover it. 00:27:17.366 [2024-11-20 07:23:21.594945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.366 [2024-11-20 07:23:21.594989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.366 qpair failed and we were unable to recover it. 00:27:17.366 [2024-11-20 07:23:21.595224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.366 [2024-11-20 07:23:21.595255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.366 qpair failed and we were unable to recover it. 00:27:17.366 [2024-11-20 07:23:21.595371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.366 [2024-11-20 07:23:21.595402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.366 qpair failed and we were unable to recover it. 
00:27:17.366 [2024-11-20 07:23:21.595644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.366 [2024-11-20 07:23:21.595676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.366 qpair failed and we were unable to recover it. 00:27:17.366 [2024-11-20 07:23:21.595864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.366 [2024-11-20 07:23:21.595895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.366 qpair failed and we were unable to recover it. 00:27:17.366 [2024-11-20 07:23:21.596093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.366 [2024-11-20 07:23:21.596126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.366 qpair failed and we were unable to recover it. 00:27:17.366 [2024-11-20 07:23:21.596298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.366 [2024-11-20 07:23:21.596329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.366 qpair failed and we were unable to recover it. 00:27:17.366 [2024-11-20 07:23:21.596449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.366 [2024-11-20 07:23:21.596481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.366 qpair failed and we were unable to recover it. 
00:27:17.366 [2024-11-20 07:23:21.596663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.366 [2024-11-20 07:23:21.596693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.366 qpair failed and we were unable to recover it. 00:27:17.366 [2024-11-20 07:23:21.596823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.366 [2024-11-20 07:23:21.596855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.366 qpair failed and we were unable to recover it. 00:27:17.366 [2024-11-20 07:23:21.596968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.366 [2024-11-20 07:23:21.597001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.366 qpair failed and we were unable to recover it. 00:27:17.366 [2024-11-20 07:23:21.597127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.366 [2024-11-20 07:23:21.597159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.366 qpair failed and we were unable to recover it. 00:27:17.366 [2024-11-20 07:23:21.597348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.366 [2024-11-20 07:23:21.597386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.366 qpair failed and we were unable to recover it. 
00:27:17.370 [2024-11-20 07:23:21.620408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.370 [2024-11-20 07:23:21.620439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.370 qpair failed and we were unable to recover it. 00:27:17.370 [2024-11-20 07:23:21.620674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.370 [2024-11-20 07:23:21.620705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.370 qpair failed and we were unable to recover it. 00:27:17.370 [2024-11-20 07:23:21.620903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.370 [2024-11-20 07:23:21.620935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.370 qpair failed and we were unable to recover it. 00:27:17.370 [2024-11-20 07:23:21.621157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.370 [2024-11-20 07:23:21.621189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.370 qpair failed and we were unable to recover it. 00:27:17.370 [2024-11-20 07:23:21.621369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.370 [2024-11-20 07:23:21.621408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.370 qpair failed and we were unable to recover it. 
00:27:17.370 [2024-11-20 07:23:21.621526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.370 [2024-11-20 07:23:21.621559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.370 qpair failed and we were unable to recover it. 00:27:17.370 [2024-11-20 07:23:21.621749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.370 [2024-11-20 07:23:21.621780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.370 qpair failed and we were unable to recover it. 00:27:17.370 [2024-11-20 07:23:21.621971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.370 [2024-11-20 07:23:21.622006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.370 qpair failed and we were unable to recover it. 00:27:17.370 [2024-11-20 07:23:21.622265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.370 [2024-11-20 07:23:21.622300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.370 qpair failed and we were unable to recover it. 00:27:17.370 [2024-11-20 07:23:21.622486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.370 [2024-11-20 07:23:21.622520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.370 qpair failed and we were unable to recover it. 
00:27:17.370 [2024-11-20 07:23:21.622713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.370 [2024-11-20 07:23:21.622749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.370 qpair failed and we were unable to recover it. 00:27:17.370 [2024-11-20 07:23:21.622993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.370 [2024-11-20 07:23:21.623032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.370 qpair failed and we were unable to recover it. 00:27:17.370 [2024-11-20 07:23:21.623317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.370 [2024-11-20 07:23:21.623348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.370 qpair failed and we were unable to recover it. 00:27:17.370 [2024-11-20 07:23:21.623589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.370 [2024-11-20 07:23:21.623622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.370 qpair failed and we were unable to recover it. 00:27:17.370 [2024-11-20 07:23:21.623839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.370 [2024-11-20 07:23:21.623872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.370 qpair failed and we were unable to recover it. 
00:27:17.370 [2024-11-20 07:23:21.624064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.370 [2024-11-20 07:23:21.624098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.370 qpair failed and we were unable to recover it. 00:27:17.370 [2024-11-20 07:23:21.624361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.370 [2024-11-20 07:23:21.624394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.370 qpair failed and we were unable to recover it. 00:27:17.370 [2024-11-20 07:23:21.624529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.370 [2024-11-20 07:23:21.624561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.370 qpair failed and we were unable to recover it. 00:27:17.370 [2024-11-20 07:23:21.624743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.370 [2024-11-20 07:23:21.624775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.370 qpair failed and we were unable to recover it. 00:27:17.370 [2024-11-20 07:23:21.624967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.370 [2024-11-20 07:23:21.624999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.370 qpair failed and we were unable to recover it. 
00:27:17.371 [2024-11-20 07:23:21.625202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.371 [2024-11-20 07:23:21.625233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.371 qpair failed and we were unable to recover it. 00:27:17.371 [2024-11-20 07:23:21.625493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.371 [2024-11-20 07:23:21.625525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.371 qpair failed and we were unable to recover it. 00:27:17.371 [2024-11-20 07:23:21.625644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.371 [2024-11-20 07:23:21.625676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.371 qpair failed and we were unable to recover it. 00:27:17.371 [2024-11-20 07:23:21.625847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.371 [2024-11-20 07:23:21.625879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.371 qpair failed and we were unable to recover it. 00:27:17.371 [2024-11-20 07:23:21.625997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.371 [2024-11-20 07:23:21.626030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.371 qpair failed and we were unable to recover it. 
00:27:17.371 [2024-11-20 07:23:21.626215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.371 [2024-11-20 07:23:21.626247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.371 qpair failed and we were unable to recover it. 00:27:17.371 [2024-11-20 07:23:21.626382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.371 [2024-11-20 07:23:21.626412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.371 qpair failed and we were unable to recover it. 00:27:17.371 [2024-11-20 07:23:21.626523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.371 [2024-11-20 07:23:21.626554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.371 qpair failed and we were unable to recover it. 00:27:17.371 [2024-11-20 07:23:21.626742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.371 [2024-11-20 07:23:21.626774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.371 qpair failed and we were unable to recover it. 00:27:17.371 [2024-11-20 07:23:21.627015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.371 [2024-11-20 07:23:21.627048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.371 qpair failed and we were unable to recover it. 
00:27:17.371 [2024-11-20 07:23:21.627232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.371 [2024-11-20 07:23:21.627264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.371 qpair failed and we were unable to recover it. 00:27:17.371 [2024-11-20 07:23:21.627448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.371 [2024-11-20 07:23:21.627480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.371 qpair failed and we were unable to recover it. 00:27:17.371 [2024-11-20 07:23:21.627664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.371 [2024-11-20 07:23:21.627696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.371 qpair failed and we were unable to recover it. 00:27:17.371 [2024-11-20 07:23:21.627882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.371 [2024-11-20 07:23:21.627914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.371 qpair failed and we were unable to recover it. 00:27:17.371 [2024-11-20 07:23:21.628131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.371 [2024-11-20 07:23:21.628164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.371 qpair failed and we were unable to recover it. 
00:27:17.371 [2024-11-20 07:23:21.628404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.371 [2024-11-20 07:23:21.628436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.371 qpair failed and we were unable to recover it. 00:27:17.371 [2024-11-20 07:23:21.628637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.371 [2024-11-20 07:23:21.628679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.371 qpair failed and we were unable to recover it. 00:27:17.371 [2024-11-20 07:23:21.628867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.371 [2024-11-20 07:23:21.628898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.371 qpair failed and we were unable to recover it. 00:27:17.371 [2024-11-20 07:23:21.629092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.371 [2024-11-20 07:23:21.629130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.371 qpair failed and we were unable to recover it. 00:27:17.371 [2024-11-20 07:23:21.629391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.371 [2024-11-20 07:23:21.629423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.371 qpair failed and we were unable to recover it. 
00:27:17.371 [2024-11-20 07:23:21.629545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.371 [2024-11-20 07:23:21.629577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.371 qpair failed and we were unable to recover it. 00:27:17.371 [2024-11-20 07:23:21.629823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.371 [2024-11-20 07:23:21.629854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.371 qpair failed and we were unable to recover it. 00:27:17.371 [2024-11-20 07:23:21.630069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.371 [2024-11-20 07:23:21.630103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.371 qpair failed and we were unable to recover it. 00:27:17.371 [2024-11-20 07:23:21.630285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.371 [2024-11-20 07:23:21.630317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.371 qpair failed and we were unable to recover it. 00:27:17.371 [2024-11-20 07:23:21.630554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.371 [2024-11-20 07:23:21.630586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.371 qpair failed and we were unable to recover it. 
00:27:17.371 [2024-11-20 07:23:21.630845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.371 [2024-11-20 07:23:21.630877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.371 qpair failed and we were unable to recover it. 00:27:17.371 [2024-11-20 07:23:21.631057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.371 [2024-11-20 07:23:21.631090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.371 qpair failed and we were unable to recover it. 00:27:17.371 [2024-11-20 07:23:21.631361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.371 [2024-11-20 07:23:21.631393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.371 qpair failed and we were unable to recover it. 00:27:17.371 [2024-11-20 07:23:21.631593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.371 [2024-11-20 07:23:21.631625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.371 qpair failed and we were unable to recover it. 00:27:17.371 [2024-11-20 07:23:21.631817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.371 [2024-11-20 07:23:21.631849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.371 qpair failed and we were unable to recover it. 
00:27:17.371 [2024-11-20 07:23:21.631982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.371 [2024-11-20 07:23:21.632014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.371 qpair failed and we were unable to recover it. 00:27:17.371 [2024-11-20 07:23:21.632206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.371 [2024-11-20 07:23:21.632238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.371 qpair failed and we were unable to recover it. 00:27:17.372 [2024-11-20 07:23:21.632517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.372 [2024-11-20 07:23:21.632587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.372 qpair failed and we were unable to recover it. 00:27:17.372 [2024-11-20 07:23:21.632886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.372 [2024-11-20 07:23:21.632922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.372 qpair failed and we were unable to recover it. 00:27:17.372 [2024-11-20 07:23:21.633131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.372 [2024-11-20 07:23:21.633164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.372 qpair failed and we were unable to recover it. 
00:27:17.372 [2024-11-20 07:23:21.633359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.372 [2024-11-20 07:23:21.633390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.372 qpair failed and we were unable to recover it. 00:27:17.372 [2024-11-20 07:23:21.633610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.372 [2024-11-20 07:23:21.633642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.372 qpair failed and we were unable to recover it. 00:27:17.372 [2024-11-20 07:23:21.633830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.372 [2024-11-20 07:23:21.633862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.372 qpair failed and we were unable to recover it. 00:27:17.372 [2024-11-20 07:23:21.634077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.372 [2024-11-20 07:23:21.634110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.372 qpair failed and we were unable to recover it. 00:27:17.372 [2024-11-20 07:23:21.634368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.372 [2024-11-20 07:23:21.634399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.372 qpair failed and we were unable to recover it. 
00:27:17.372 [2024-11-20 07:23:21.634661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.372 [2024-11-20 07:23:21.634693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.372 qpair failed and we were unable to recover it. 00:27:17.372 [2024-11-20 07:23:21.634879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.372 [2024-11-20 07:23:21.634909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.372 qpair failed and we were unable to recover it. 00:27:17.372 [2024-11-20 07:23:21.635115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.372 [2024-11-20 07:23:21.635148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.372 qpair failed and we were unable to recover it. 00:27:17.372 [2024-11-20 07:23:21.635337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.372 [2024-11-20 07:23:21.635369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.372 qpair failed and we were unable to recover it. 00:27:17.372 [2024-11-20 07:23:21.635633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.372 [2024-11-20 07:23:21.635665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.372 qpair failed and we were unable to recover it. 
00:27:17.372 [2024-11-20 07:23:21.635855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.372 [2024-11-20 07:23:21.635896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.372 qpair failed and we were unable to recover it. 00:27:17.372 [2024-11-20 07:23:21.636172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.372 [2024-11-20 07:23:21.636205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.372 qpair failed and we were unable to recover it. 00:27:17.372 [2024-11-20 07:23:21.636403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.372 [2024-11-20 07:23:21.636434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.372 qpair failed and we were unable to recover it. 00:27:17.372 [2024-11-20 07:23:21.636573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.372 [2024-11-20 07:23:21.636604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.372 qpair failed and we were unable to recover it. 00:27:17.372 [2024-11-20 07:23:21.636783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.372 [2024-11-20 07:23:21.636814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.372 qpair failed and we were unable to recover it. 
00:27:17.372 [2024-11-20 07:23:21.636997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.372 [2024-11-20 07:23:21.637030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.372 qpair failed and we were unable to recover it. 00:27:17.372 [2024-11-20 07:23:21.637210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.372 [2024-11-20 07:23:21.637242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.372 qpair failed and we were unable to recover it. 00:27:17.372 [2024-11-20 07:23:21.637374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.372 [2024-11-20 07:23:21.637404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.372 qpair failed and we were unable to recover it. 00:27:17.372 [2024-11-20 07:23:21.637642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.372 [2024-11-20 07:23:21.637673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.372 qpair failed and we were unable to recover it. 00:27:17.372 [2024-11-20 07:23:21.637880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.372 [2024-11-20 07:23:21.637911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.372 qpair failed and we were unable to recover it. 
00:27:17.372 [2024-11-20 07:23:21.638044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.372 [2024-11-20 07:23:21.638076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.372 qpair failed and we were unable to recover it. 
[... identical connect() failed (errno = 111) / qpair recovery failure messages for tqpair=0x7f0e44000b90 repeat from 07:23:21.638 through 07:23:21.659 ...]
00:27:17.375 [2024-11-20 07:23:21.659298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.375 [2024-11-20 07:23:21.659370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.375 qpair failed and we were unable to recover it. 
[... identical connect() failed (errno = 111) / qpair recovery failure messages for tqpair=0x101bba0 repeat through 07:23:21.663 ...]
00:27:17.376 [2024-11-20 07:23:21.663262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.376 [2024-11-20 07:23:21.663294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.376 qpair failed and we were unable to recover it. 00:27:17.376 [2024-11-20 07:23:21.663477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.376 [2024-11-20 07:23:21.663508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.376 qpair failed and we were unable to recover it. 00:27:17.376 [2024-11-20 07:23:21.663717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.376 [2024-11-20 07:23:21.663750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.376 qpair failed and we were unable to recover it. 00:27:17.376 [2024-11-20 07:23:21.663869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.376 [2024-11-20 07:23:21.663901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.376 qpair failed and we were unable to recover it. 00:27:17.376 [2024-11-20 07:23:21.664102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.376 [2024-11-20 07:23:21.664135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.376 qpair failed and we were unable to recover it. 
00:27:17.376 [2024-11-20 07:23:21.664319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.376 [2024-11-20 07:23:21.664350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.376 qpair failed and we were unable to recover it. 00:27:17.376 [2024-11-20 07:23:21.664587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.376 [2024-11-20 07:23:21.664617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.376 qpair failed and we were unable to recover it. 00:27:17.376 [2024-11-20 07:23:21.664738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.376 [2024-11-20 07:23:21.664770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.376 qpair failed and we were unable to recover it. 00:27:17.376 [2024-11-20 07:23:21.664968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.376 [2024-11-20 07:23:21.665001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.376 qpair failed and we were unable to recover it. 00:27:17.376 [2024-11-20 07:23:21.665138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.376 [2024-11-20 07:23:21.665176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.376 qpair failed and we were unable to recover it. 
00:27:17.376 [2024-11-20 07:23:21.665301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.376 [2024-11-20 07:23:21.665333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.376 qpair failed and we were unable to recover it. 00:27:17.376 [2024-11-20 07:23:21.665451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.376 [2024-11-20 07:23:21.665483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.376 qpair failed and we were unable to recover it. 00:27:17.376 [2024-11-20 07:23:21.665742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.376 [2024-11-20 07:23:21.665773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.376 qpair failed and we were unable to recover it. 00:27:17.376 [2024-11-20 07:23:21.665969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.376 [2024-11-20 07:23:21.666002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.376 qpair failed and we were unable to recover it. 00:27:17.376 [2024-11-20 07:23:21.666227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.376 [2024-11-20 07:23:21.666258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.376 qpair failed and we were unable to recover it. 
00:27:17.376 [2024-11-20 07:23:21.666376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.376 [2024-11-20 07:23:21.666407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.376 qpair failed and we were unable to recover it. 00:27:17.377 [2024-11-20 07:23:21.666519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.377 [2024-11-20 07:23:21.666551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.377 qpair failed and we were unable to recover it. 00:27:17.377 [2024-11-20 07:23:21.666725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.377 [2024-11-20 07:23:21.666756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.377 qpair failed and we were unable to recover it. 00:27:17.377 [2024-11-20 07:23:21.667014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.377 [2024-11-20 07:23:21.667046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.377 qpair failed and we were unable to recover it. 00:27:17.377 [2024-11-20 07:23:21.667250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.377 [2024-11-20 07:23:21.667281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.377 qpair failed and we were unable to recover it. 
00:27:17.377 [2024-11-20 07:23:21.667487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.377 [2024-11-20 07:23:21.667517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.377 qpair failed and we were unable to recover it. 00:27:17.377 [2024-11-20 07:23:21.667653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.377 [2024-11-20 07:23:21.667684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.377 qpair failed and we were unable to recover it. 00:27:17.377 [2024-11-20 07:23:21.667958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.377 [2024-11-20 07:23:21.667991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.377 qpair failed and we were unable to recover it. 00:27:17.377 [2024-11-20 07:23:21.668236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.377 [2024-11-20 07:23:21.668268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.377 qpair failed and we were unable to recover it. 00:27:17.377 [2024-11-20 07:23:21.668462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.377 [2024-11-20 07:23:21.668494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.377 qpair failed and we were unable to recover it. 
00:27:17.377 [2024-11-20 07:23:21.668613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.377 [2024-11-20 07:23:21.668645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.377 qpair failed and we were unable to recover it. 00:27:17.377 [2024-11-20 07:23:21.668891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.377 [2024-11-20 07:23:21.668923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.377 qpair failed and we were unable to recover it. 00:27:17.377 [2024-11-20 07:23:21.669062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.377 [2024-11-20 07:23:21.669094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.377 qpair failed and we were unable to recover it. 00:27:17.377 [2024-11-20 07:23:21.669331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.377 [2024-11-20 07:23:21.669363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.377 qpair failed and we were unable to recover it. 00:27:17.377 [2024-11-20 07:23:21.669539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.377 [2024-11-20 07:23:21.669570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.377 qpair failed and we were unable to recover it. 
00:27:17.377 [2024-11-20 07:23:21.669758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.377 [2024-11-20 07:23:21.669789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.377 qpair failed and we were unable to recover it. 00:27:17.377 [2024-11-20 07:23:21.669972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.377 [2024-11-20 07:23:21.670005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.377 qpair failed and we were unable to recover it. 00:27:17.377 [2024-11-20 07:23:21.670193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.377 [2024-11-20 07:23:21.670224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.377 qpair failed and we were unable to recover it. 00:27:17.377 [2024-11-20 07:23:21.670335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.377 [2024-11-20 07:23:21.670366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.377 qpair failed and we were unable to recover it. 00:27:17.377 [2024-11-20 07:23:21.670603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.377 [2024-11-20 07:23:21.670635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.377 qpair failed and we were unable to recover it. 
00:27:17.377 [2024-11-20 07:23:21.670818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.377 [2024-11-20 07:23:21.670849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.377 qpair failed and we were unable to recover it. 00:27:17.377 [2024-11-20 07:23:21.670976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.377 [2024-11-20 07:23:21.671015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.377 qpair failed and we were unable to recover it. 00:27:17.377 [2024-11-20 07:23:21.671136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.377 [2024-11-20 07:23:21.671168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.377 qpair failed and we were unable to recover it. 00:27:17.377 [2024-11-20 07:23:21.671408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.377 [2024-11-20 07:23:21.671439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.377 qpair failed and we were unable to recover it. 00:27:17.377 [2024-11-20 07:23:21.671702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.377 [2024-11-20 07:23:21.671734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.377 qpair failed and we were unable to recover it. 
00:27:17.377 [2024-11-20 07:23:21.671839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.377 [2024-11-20 07:23:21.671870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.377 qpair failed and we were unable to recover it. 00:27:17.377 [2024-11-20 07:23:21.672052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.377 [2024-11-20 07:23:21.672085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.377 qpair failed and we were unable to recover it. 00:27:17.377 [2024-11-20 07:23:21.672259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.377 [2024-11-20 07:23:21.672290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.377 qpair failed and we were unable to recover it. 00:27:17.377 [2024-11-20 07:23:21.672459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.377 [2024-11-20 07:23:21.672490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.377 qpair failed and we were unable to recover it. 00:27:17.377 [2024-11-20 07:23:21.672729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.377 [2024-11-20 07:23:21.672761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.377 qpair failed and we were unable to recover it. 
00:27:17.377 [2024-11-20 07:23:21.672998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.377 [2024-11-20 07:23:21.673031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.377 qpair failed and we were unable to recover it. 00:27:17.377 [2024-11-20 07:23:21.673245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.377 [2024-11-20 07:23:21.673275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.377 qpair failed and we were unable to recover it. 00:27:17.377 [2024-11-20 07:23:21.673455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.377 [2024-11-20 07:23:21.673486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.377 qpair failed and we were unable to recover it. 00:27:17.378 [2024-11-20 07:23:21.673610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.378 [2024-11-20 07:23:21.673641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.378 qpair failed and we were unable to recover it. 00:27:17.378 [2024-11-20 07:23:21.673884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.378 [2024-11-20 07:23:21.673915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.378 qpair failed and we were unable to recover it. 
00:27:17.378 [2024-11-20 07:23:21.674115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.378 [2024-11-20 07:23:21.674148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.378 qpair failed and we were unable to recover it. 00:27:17.378 [2024-11-20 07:23:21.674330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.378 [2024-11-20 07:23:21.674361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.378 qpair failed and we were unable to recover it. 00:27:17.378 [2024-11-20 07:23:21.674554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.378 [2024-11-20 07:23:21.674585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.378 qpair failed and we were unable to recover it. 00:27:17.378 [2024-11-20 07:23:21.674713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.378 [2024-11-20 07:23:21.674745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.378 qpair failed and we were unable to recover it. 00:27:17.378 [2024-11-20 07:23:21.674916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.378 [2024-11-20 07:23:21.674956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.378 qpair failed and we were unable to recover it. 
00:27:17.378 [2024-11-20 07:23:21.675144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.378 [2024-11-20 07:23:21.675174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.378 qpair failed and we were unable to recover it. 00:27:17.378 [2024-11-20 07:23:21.675350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.378 [2024-11-20 07:23:21.675382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.378 qpair failed and we were unable to recover it. 00:27:17.378 [2024-11-20 07:23:21.675661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.378 [2024-11-20 07:23:21.675692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.378 qpair failed and we were unable to recover it. 00:27:17.378 [2024-11-20 07:23:21.675981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.378 [2024-11-20 07:23:21.676014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.378 qpair failed and we were unable to recover it. 00:27:17.378 [2024-11-20 07:23:21.676143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.378 [2024-11-20 07:23:21.676175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.378 qpair failed and we were unable to recover it. 
00:27:17.378 [2024-11-20 07:23:21.676410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.378 [2024-11-20 07:23:21.676442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.378 qpair failed and we were unable to recover it. 00:27:17.378 [2024-11-20 07:23:21.676705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.378 [2024-11-20 07:23:21.676736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.378 qpair failed and we were unable to recover it. 00:27:17.378 [2024-11-20 07:23:21.677003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.378 [2024-11-20 07:23:21.677035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.378 qpair failed and we were unable to recover it. 00:27:17.378 [2024-11-20 07:23:21.677160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.378 [2024-11-20 07:23:21.677192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.378 qpair failed and we were unable to recover it. 00:27:17.378 [2024-11-20 07:23:21.677387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.378 [2024-11-20 07:23:21.677418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.378 qpair failed and we were unable to recover it. 
00:27:17.378 [2024-11-20 07:23:21.677698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.378 [2024-11-20 07:23:21.677729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.378 qpair failed and we were unable to recover it. 00:27:17.378 [2024-11-20 07:23:21.677921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.378 [2024-11-20 07:23:21.677961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.378 qpair failed and we were unable to recover it. 00:27:17.378 [2024-11-20 07:23:21.678197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.378 [2024-11-20 07:23:21.678229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.378 qpair failed and we were unable to recover it. 00:27:17.378 [2024-11-20 07:23:21.678400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.378 [2024-11-20 07:23:21.678432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.378 qpair failed and we were unable to recover it. 00:27:17.378 [2024-11-20 07:23:21.678678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.378 [2024-11-20 07:23:21.678708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.378 qpair failed and we were unable to recover it. 
00:27:17.378 [2024-11-20 07:23:21.678883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.378 [2024-11-20 07:23:21.678914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.378 qpair failed and we were unable to recover it. 00:27:17.378 [2024-11-20 07:23:21.679155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.378 [2024-11-20 07:23:21.679188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.378 qpair failed and we were unable to recover it. 00:27:17.378 [2024-11-20 07:23:21.679377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.378 [2024-11-20 07:23:21.679408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.378 qpair failed and we were unable to recover it. 00:27:17.378 [2024-11-20 07:23:21.679624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.378 [2024-11-20 07:23:21.679655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.378 qpair failed and we were unable to recover it. 00:27:17.378 [2024-11-20 07:23:21.679941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.378 [2024-11-20 07:23:21.680001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.378 qpair failed and we were unable to recover it. 
00:27:17.378 [2024-11-20 07:23:21.680194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.378 [2024-11-20 07:23:21.680225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:17.378 qpair failed and we were unable to recover it.
00:27:17.382 [2024-11-20 07:23:21.706055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.382 [2024-11-20 07:23:21.706088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.382 qpair failed and we were unable to recover it. 00:27:17.382 [2024-11-20 07:23:21.706263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.382 [2024-11-20 07:23:21.706294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.382 qpair failed and we were unable to recover it. 00:27:17.382 [2024-11-20 07:23:21.706508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.382 [2024-11-20 07:23:21.706539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.382 qpair failed and we were unable to recover it. 00:27:17.382 [2024-11-20 07:23:21.706724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.382 [2024-11-20 07:23:21.706756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.382 qpair failed and we were unable to recover it. 00:27:17.382 [2024-11-20 07:23:21.707021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.382 [2024-11-20 07:23:21.707053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.382 qpair failed and we were unable to recover it. 
00:27:17.382 [2024-11-20 07:23:21.707237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.382 [2024-11-20 07:23:21.707268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.382 qpair failed and we were unable to recover it. 00:27:17.382 [2024-11-20 07:23:21.707384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.382 [2024-11-20 07:23:21.707415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.382 qpair failed and we were unable to recover it. 00:27:17.382 [2024-11-20 07:23:21.707680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.382 [2024-11-20 07:23:21.707712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.382 qpair failed and we were unable to recover it. 00:27:17.382 [2024-11-20 07:23:21.707901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.382 [2024-11-20 07:23:21.707932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.382 qpair failed and we were unable to recover it. 00:27:17.382 [2024-11-20 07:23:21.708095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.382 [2024-11-20 07:23:21.708128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.382 qpair failed and we were unable to recover it. 
00:27:17.382 [2024-11-20 07:23:21.708307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.382 [2024-11-20 07:23:21.708338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.382 qpair failed and we were unable to recover it. 00:27:17.382 [2024-11-20 07:23:21.708512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.382 [2024-11-20 07:23:21.708543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.382 qpair failed and we were unable to recover it. 00:27:17.382 [2024-11-20 07:23:21.708664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.382 [2024-11-20 07:23:21.708695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.382 qpair failed and we were unable to recover it. 00:27:17.382 [2024-11-20 07:23:21.708904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.382 [2024-11-20 07:23:21.708935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.383 qpair failed and we were unable to recover it. 00:27:17.383 [2024-11-20 07:23:21.709190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.383 [2024-11-20 07:23:21.709222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.383 qpair failed and we were unable to recover it. 
00:27:17.383 [2024-11-20 07:23:21.709343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.383 [2024-11-20 07:23:21.709374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.383 qpair failed and we were unable to recover it. 00:27:17.383 [2024-11-20 07:23:21.709597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.383 [2024-11-20 07:23:21.709628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.383 qpair failed and we were unable to recover it. 00:27:17.383 [2024-11-20 07:23:21.709799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.383 [2024-11-20 07:23:21.709830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.383 qpair failed and we were unable to recover it. 00:27:17.383 [2024-11-20 07:23:21.710066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.383 [2024-11-20 07:23:21.710099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.383 qpair failed and we were unable to recover it. 00:27:17.383 [2024-11-20 07:23:21.710347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.383 [2024-11-20 07:23:21.710379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.383 qpair failed and we were unable to recover it. 
00:27:17.383 [2024-11-20 07:23:21.710616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.383 [2024-11-20 07:23:21.710647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.383 qpair failed and we were unable to recover it. 00:27:17.383 [2024-11-20 07:23:21.710826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.383 [2024-11-20 07:23:21.710857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.383 qpair failed and we were unable to recover it. 00:27:17.383 [2024-11-20 07:23:21.711044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.383 [2024-11-20 07:23:21.711077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.383 qpair failed and we were unable to recover it. 00:27:17.383 [2024-11-20 07:23:21.711188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.383 [2024-11-20 07:23:21.711219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.383 qpair failed and we were unable to recover it. 00:27:17.383 [2024-11-20 07:23:21.711460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.383 [2024-11-20 07:23:21.711492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.383 qpair failed and we were unable to recover it. 
00:27:17.383 [2024-11-20 07:23:21.711679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.383 [2024-11-20 07:23:21.711710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.383 qpair failed and we were unable to recover it. 00:27:17.383 [2024-11-20 07:23:21.711888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.383 [2024-11-20 07:23:21.711919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.383 qpair failed and we were unable to recover it. 00:27:17.383 [2024-11-20 07:23:21.712049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.383 [2024-11-20 07:23:21.712082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.383 qpair failed and we were unable to recover it. 00:27:17.383 [2024-11-20 07:23:21.712207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.383 [2024-11-20 07:23:21.712238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.383 qpair failed and we were unable to recover it. 00:27:17.383 [2024-11-20 07:23:21.712474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.383 [2024-11-20 07:23:21.712505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.383 qpair failed and we were unable to recover it. 
00:27:17.383 [2024-11-20 07:23:21.712688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.383 [2024-11-20 07:23:21.712719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.383 qpair failed and we were unable to recover it. 00:27:17.383 [2024-11-20 07:23:21.712983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.383 [2024-11-20 07:23:21.713016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.383 qpair failed and we were unable to recover it. 00:27:17.383 [2024-11-20 07:23:21.713190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.383 [2024-11-20 07:23:21.713221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.383 qpair failed and we were unable to recover it. 00:27:17.383 [2024-11-20 07:23:21.713483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.383 [2024-11-20 07:23:21.713514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.383 qpair failed and we were unable to recover it. 00:27:17.383 [2024-11-20 07:23:21.713648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.383 [2024-11-20 07:23:21.713680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.383 qpair failed and we were unable to recover it. 
00:27:17.383 [2024-11-20 07:23:21.713940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.383 [2024-11-20 07:23:21.713984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.383 qpair failed and we were unable to recover it. 00:27:17.383 [2024-11-20 07:23:21.714168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.383 [2024-11-20 07:23:21.714205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.383 qpair failed and we were unable to recover it. 00:27:17.383 [2024-11-20 07:23:21.714399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.383 [2024-11-20 07:23:21.714431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.383 qpair failed and we were unable to recover it. 00:27:17.383 [2024-11-20 07:23:21.714555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.383 [2024-11-20 07:23:21.714586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.383 qpair failed and we were unable to recover it. 00:27:17.383 [2024-11-20 07:23:21.714777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.383 [2024-11-20 07:23:21.714809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.383 qpair failed and we were unable to recover it. 
00:27:17.383 [2024-11-20 07:23:21.714992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.383 [2024-11-20 07:23:21.715026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.383 qpair failed and we were unable to recover it. 00:27:17.383 [2024-11-20 07:23:21.715234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.383 [2024-11-20 07:23:21.715266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.383 qpair failed and we were unable to recover it. 00:27:17.383 [2024-11-20 07:23:21.715391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.383 [2024-11-20 07:23:21.715423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.383 qpair failed and we were unable to recover it. 00:27:17.383 [2024-11-20 07:23:21.715710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.383 [2024-11-20 07:23:21.715742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.383 qpair failed and we were unable to recover it. 00:27:17.383 [2024-11-20 07:23:21.715927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.383 [2024-11-20 07:23:21.715978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.383 qpair failed and we were unable to recover it. 
00:27:17.383 [2024-11-20 07:23:21.716095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.384 [2024-11-20 07:23:21.716126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.384 qpair failed and we were unable to recover it. 00:27:17.384 [2024-11-20 07:23:21.716318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.384 [2024-11-20 07:23:21.716350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.384 qpair failed and we were unable to recover it. 00:27:17.384 [2024-11-20 07:23:21.716539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.384 [2024-11-20 07:23:21.716571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.384 qpair failed and we were unable to recover it. 00:27:17.384 [2024-11-20 07:23:21.716674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.384 [2024-11-20 07:23:21.716706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.384 qpair failed and we were unable to recover it. 00:27:17.384 [2024-11-20 07:23:21.716893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.384 [2024-11-20 07:23:21.716926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.384 qpair failed and we were unable to recover it. 
00:27:17.384 [2024-11-20 07:23:21.717179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.384 [2024-11-20 07:23:21.717211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.384 qpair failed and we were unable to recover it. 00:27:17.384 [2024-11-20 07:23:21.717401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.384 [2024-11-20 07:23:21.717433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.384 qpair failed and we were unable to recover it. 00:27:17.384 [2024-11-20 07:23:21.717561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.384 [2024-11-20 07:23:21.717592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.384 qpair failed and we were unable to recover it. 00:27:17.384 [2024-11-20 07:23:21.717777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.384 [2024-11-20 07:23:21.717809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.384 qpair failed and we were unable to recover it. 00:27:17.384 [2024-11-20 07:23:21.717911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.384 [2024-11-20 07:23:21.717944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.384 qpair failed and we were unable to recover it. 
00:27:17.384 [2024-11-20 07:23:21.718074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.384 [2024-11-20 07:23:21.718105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.384 qpair failed and we were unable to recover it. 00:27:17.384 [2024-11-20 07:23:21.718240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.384 [2024-11-20 07:23:21.718271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.384 qpair failed and we were unable to recover it. 00:27:17.384 [2024-11-20 07:23:21.718398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.384 [2024-11-20 07:23:21.718430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.384 qpair failed and we were unable to recover it. 00:27:17.384 [2024-11-20 07:23:21.718627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.384 [2024-11-20 07:23:21.718658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.384 qpair failed and we were unable to recover it. 00:27:17.384 [2024-11-20 07:23:21.718779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.384 [2024-11-20 07:23:21.718810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.384 qpair failed and we were unable to recover it. 
00:27:17.384 [2024-11-20 07:23:21.719034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.384 [2024-11-20 07:23:21.719067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.384 qpair failed and we were unable to recover it. 00:27:17.384 [2024-11-20 07:23:21.719306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.384 [2024-11-20 07:23:21.719337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.384 qpair failed and we were unable to recover it. 00:27:17.384 [2024-11-20 07:23:21.719514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.384 [2024-11-20 07:23:21.719545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.384 qpair failed and we were unable to recover it. 00:27:17.384 [2024-11-20 07:23:21.719727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.384 [2024-11-20 07:23:21.719765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.384 qpair failed and we were unable to recover it. 00:27:17.384 [2024-11-20 07:23:21.719883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.384 [2024-11-20 07:23:21.719914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.384 qpair failed and we were unable to recover it. 
00:27:17.384 [2024-11-20 07:23:21.720180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.384 [2024-11-20 07:23:21.720213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.384 qpair failed and we were unable to recover it. 00:27:17.384 [2024-11-20 07:23:21.720324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.384 [2024-11-20 07:23:21.720356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.384 qpair failed and we were unable to recover it. 00:27:17.384 [2024-11-20 07:23:21.720475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.384 [2024-11-20 07:23:21.720508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.384 qpair failed and we were unable to recover it. 00:27:17.384 [2024-11-20 07:23:21.720743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.384 [2024-11-20 07:23:21.720776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.384 qpair failed and we were unable to recover it. 00:27:17.384 [2024-11-20 07:23:21.720966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.384 [2024-11-20 07:23:21.720999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.384 qpair failed and we were unable to recover it. 
00:27:17.384 [2024-11-20 07:23:21.721124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.384 [2024-11-20 07:23:21.721156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.384 qpair failed and we were unable to recover it. 00:27:17.384 [2024-11-20 07:23:21.721343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.384 [2024-11-20 07:23:21.721375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.384 qpair failed and we were unable to recover it. 00:27:17.384 [2024-11-20 07:23:21.721620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.384 [2024-11-20 07:23:21.721651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.384 qpair failed and we were unable to recover it. 00:27:17.384 [2024-11-20 07:23:21.721774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.384 [2024-11-20 07:23:21.721805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.384 qpair failed and we were unable to recover it. 00:27:17.384 [2024-11-20 07:23:21.722044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.384 [2024-11-20 07:23:21.722077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.384 qpair failed and we were unable to recover it. 
00:27:17.384 [2024-11-20 07:23:21.722288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:27:17.384 [2024-11-20 07:23:21.722319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 
00:27:17.384 qpair failed and we were unable to recover it. 
00:27:17.384 [... the same posix.c:1054 / nvme_tcp.c:2288 error pair for tqpair=0x101bba0 (addr=10.0.0.2, port=4420, errno = 111), each followed by "qpair failed and we were unable to recover it.", repeats identically for every reconnect attempt through 2024-11-20 07:23:21.746885; verbatim repeats elided ...]
00:27:17.388 [2024-11-20 07:23:21.747013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.388 [2024-11-20 07:23:21.747046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.388 qpair failed and we were unable to recover it. 00:27:17.388 [2024-11-20 07:23:21.747158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.388 [2024-11-20 07:23:21.747189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.388 qpair failed and we were unable to recover it. 00:27:17.388 [2024-11-20 07:23:21.747451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.388 [2024-11-20 07:23:21.747482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.388 qpair failed and we were unable to recover it. 00:27:17.388 [2024-11-20 07:23:21.747742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.388 [2024-11-20 07:23:21.747773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.388 qpair failed and we were unable to recover it. 00:27:17.388 [2024-11-20 07:23:21.747967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.388 [2024-11-20 07:23:21.747999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.388 qpair failed and we were unable to recover it. 
00:27:17.388 [2024-11-20 07:23:21.748171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.388 [2024-11-20 07:23:21.748202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.388 qpair failed and we were unable to recover it. 00:27:17.388 [2024-11-20 07:23:21.748333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.388 [2024-11-20 07:23:21.748364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.388 qpair failed and we were unable to recover it. 00:27:17.388 [2024-11-20 07:23:21.748548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.388 [2024-11-20 07:23:21.748578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.388 qpair failed and we were unable to recover it. 00:27:17.388 [2024-11-20 07:23:21.748711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.388 [2024-11-20 07:23:21.748742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.388 qpair failed and we were unable to recover it. 00:27:17.388 [2024-11-20 07:23:21.748921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.388 [2024-11-20 07:23:21.749010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.388 qpair failed and we were unable to recover it. 
00:27:17.388 [2024-11-20 07:23:21.749171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.388 [2024-11-20 07:23:21.749206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.388 qpair failed and we were unable to recover it. 00:27:17.388 [2024-11-20 07:23:21.749476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.389 [2024-11-20 07:23:21.749509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.389 qpair failed and we were unable to recover it. 00:27:17.389 [2024-11-20 07:23:21.749696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.389 [2024-11-20 07:23:21.749728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.389 qpair failed and we were unable to recover it. 00:27:17.389 [2024-11-20 07:23:21.749927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.389 [2024-11-20 07:23:21.749973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.389 qpair failed and we were unable to recover it. 00:27:17.389 [2024-11-20 07:23:21.750176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.389 [2024-11-20 07:23:21.750207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.389 qpair failed and we were unable to recover it. 
00:27:17.389 [2024-11-20 07:23:21.750403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.389 [2024-11-20 07:23:21.750434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.389 qpair failed and we were unable to recover it. 00:27:17.389 [2024-11-20 07:23:21.750715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.389 [2024-11-20 07:23:21.750746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.389 qpair failed and we were unable to recover it. 00:27:17.389 [2024-11-20 07:23:21.750856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.389 [2024-11-20 07:23:21.750887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.389 qpair failed and we were unable to recover it. 00:27:17.389 [2024-11-20 07:23:21.751067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.389 [2024-11-20 07:23:21.751100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.389 qpair failed and we were unable to recover it. 00:27:17.389 [2024-11-20 07:23:21.751272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.389 [2024-11-20 07:23:21.751303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.389 qpair failed and we were unable to recover it. 
00:27:17.389 [2024-11-20 07:23:21.751563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.389 [2024-11-20 07:23:21.751594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.389 qpair failed and we were unable to recover it. 00:27:17.389 [2024-11-20 07:23:21.751772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.389 [2024-11-20 07:23:21.751803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.389 qpair failed and we were unable to recover it. 00:27:17.389 [2024-11-20 07:23:21.751978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.389 [2024-11-20 07:23:21.752021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.389 qpair failed and we were unable to recover it. 00:27:17.389 [2024-11-20 07:23:21.752158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.389 [2024-11-20 07:23:21.752190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.389 qpair failed and we were unable to recover it. 00:27:17.389 [2024-11-20 07:23:21.752385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.389 [2024-11-20 07:23:21.752416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.389 qpair failed and we were unable to recover it. 
00:27:17.389 [2024-11-20 07:23:21.752535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.389 [2024-11-20 07:23:21.752565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.389 qpair failed and we were unable to recover it. 00:27:17.389 [2024-11-20 07:23:21.752745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.389 [2024-11-20 07:23:21.752775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.389 qpair failed and we were unable to recover it. 00:27:17.389 [2024-11-20 07:23:21.752968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.389 [2024-11-20 07:23:21.753001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.389 qpair failed and we were unable to recover it. 00:27:17.389 [2024-11-20 07:23:21.753189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.389 [2024-11-20 07:23:21.753220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.389 qpair failed and we were unable to recover it. 00:27:17.389 [2024-11-20 07:23:21.753524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.389 [2024-11-20 07:23:21.753555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.389 qpair failed and we were unable to recover it. 
00:27:17.389 [2024-11-20 07:23:21.753764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.389 [2024-11-20 07:23:21.753795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.389 qpair failed and we were unable to recover it. 00:27:17.389 [2024-11-20 07:23:21.753899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.389 [2024-11-20 07:23:21.753930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.389 qpair failed and we were unable to recover it. 00:27:17.389 [2024-11-20 07:23:21.754192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.389 [2024-11-20 07:23:21.754223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.389 qpair failed and we were unable to recover it. 00:27:17.389 [2024-11-20 07:23:21.754482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.389 [2024-11-20 07:23:21.754513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.389 qpair failed and we were unable to recover it. 00:27:17.389 [2024-11-20 07:23:21.754789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.389 [2024-11-20 07:23:21.754819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.389 qpair failed and we were unable to recover it. 
00:27:17.389 [2024-11-20 07:23:21.755006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.389 [2024-11-20 07:23:21.755039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.389 qpair failed and we were unable to recover it. 00:27:17.389 [2024-11-20 07:23:21.755317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.389 [2024-11-20 07:23:21.755349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.389 qpair failed and we were unable to recover it. 00:27:17.389 [2024-11-20 07:23:21.755535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.389 [2024-11-20 07:23:21.755567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.389 qpair failed and we were unable to recover it. 00:27:17.389 [2024-11-20 07:23:21.755829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.389 [2024-11-20 07:23:21.755861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.389 qpair failed and we were unable to recover it. 00:27:17.389 [2024-11-20 07:23:21.756046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.389 [2024-11-20 07:23:21.756080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.389 qpair failed and we were unable to recover it. 
00:27:17.389 [2024-11-20 07:23:21.756322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.389 [2024-11-20 07:23:21.756352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.389 qpair failed and we were unable to recover it. 00:27:17.389 [2024-11-20 07:23:21.756469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.389 [2024-11-20 07:23:21.756500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.389 qpair failed and we were unable to recover it. 00:27:17.389 [2024-11-20 07:23:21.756675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.389 [2024-11-20 07:23:21.756705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.389 qpair failed and we were unable to recover it. 00:27:17.390 [2024-11-20 07:23:21.756838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.390 [2024-11-20 07:23:21.756870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.390 qpair failed and we were unable to recover it. 00:27:17.390 [2024-11-20 07:23:21.757001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.390 [2024-11-20 07:23:21.757033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.390 qpair failed and we were unable to recover it. 
00:27:17.390 [2024-11-20 07:23:21.757207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.390 [2024-11-20 07:23:21.757238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.390 qpair failed and we were unable to recover it. 00:27:17.390 [2024-11-20 07:23:21.757432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.390 [2024-11-20 07:23:21.757463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.390 qpair failed and we were unable to recover it. 00:27:17.390 [2024-11-20 07:23:21.757649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.390 [2024-11-20 07:23:21.757680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.390 qpair failed and we were unable to recover it. 00:27:17.390 [2024-11-20 07:23:21.757955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.390 [2024-11-20 07:23:21.757988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.390 qpair failed and we were unable to recover it. 00:27:17.390 [2024-11-20 07:23:21.758186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.390 [2024-11-20 07:23:21.758216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.390 qpair failed and we were unable to recover it. 
00:27:17.390 [2024-11-20 07:23:21.758454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.390 [2024-11-20 07:23:21.758485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.390 qpair failed and we were unable to recover it. 00:27:17.390 [2024-11-20 07:23:21.758655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.390 [2024-11-20 07:23:21.758686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.390 qpair failed and we were unable to recover it. 00:27:17.390 [2024-11-20 07:23:21.758880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.390 [2024-11-20 07:23:21.758911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.390 qpair failed and we were unable to recover it. 00:27:17.390 [2024-11-20 07:23:21.759107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.390 [2024-11-20 07:23:21.759139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.390 qpair failed and we were unable to recover it. 00:27:17.390 [2024-11-20 07:23:21.759262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.390 [2024-11-20 07:23:21.759293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.390 qpair failed and we were unable to recover it. 
00:27:17.390 [2024-11-20 07:23:21.759418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.390 [2024-11-20 07:23:21.759449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.390 qpair failed and we were unable to recover it. 00:27:17.390 [2024-11-20 07:23:21.759634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.390 [2024-11-20 07:23:21.759664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.390 qpair failed and we were unable to recover it. 00:27:17.390 [2024-11-20 07:23:21.759877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.390 [2024-11-20 07:23:21.759907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.390 qpair failed and we were unable to recover it. 00:27:17.390 [2024-11-20 07:23:21.760127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.390 [2024-11-20 07:23:21.760159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.390 qpair failed and we were unable to recover it. 00:27:17.390 [2024-11-20 07:23:21.760347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.390 [2024-11-20 07:23:21.760378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.390 qpair failed and we were unable to recover it. 
00:27:17.390 [2024-11-20 07:23:21.760556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.390 [2024-11-20 07:23:21.760587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.390 qpair failed and we were unable to recover it. 00:27:17.390 [2024-11-20 07:23:21.760698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.390 [2024-11-20 07:23:21.760729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.390 qpair failed and we were unable to recover it. 00:27:17.390 [2024-11-20 07:23:21.760849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.390 [2024-11-20 07:23:21.760886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.390 qpair failed and we were unable to recover it. 00:27:17.390 [2024-11-20 07:23:21.761157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.390 [2024-11-20 07:23:21.761190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.390 qpair failed and we were unable to recover it. 00:27:17.390 [2024-11-20 07:23:21.761361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.390 [2024-11-20 07:23:21.761392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.390 qpair failed and we were unable to recover it. 
00:27:17.390 [2024-11-20 07:23:21.761588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.390 [2024-11-20 07:23:21.761619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.390 qpair failed and we were unable to recover it. 00:27:17.390 [2024-11-20 07:23:21.761830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.390 [2024-11-20 07:23:21.761860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.390 qpair failed and we were unable to recover it. 00:27:17.390 [2024-11-20 07:23:21.762069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.390 [2024-11-20 07:23:21.762102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.390 qpair failed and we were unable to recover it. 00:27:17.390 [2024-11-20 07:23:21.762208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.390 [2024-11-20 07:23:21.762239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.390 qpair failed and we were unable to recover it. 00:27:17.390 [2024-11-20 07:23:21.762359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.390 [2024-11-20 07:23:21.762389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.390 qpair failed and we were unable to recover it. 
00:27:17.390 [2024-11-20 07:23:21.762582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.390 [2024-11-20 07:23:21.762613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.390 qpair failed and we were unable to recover it. 00:27:17.390 [2024-11-20 07:23:21.762790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.390 [2024-11-20 07:23:21.762822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.390 qpair failed and we were unable to recover it. 00:27:17.390 [2024-11-20 07:23:21.762993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.390 [2024-11-20 07:23:21.763025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.390 qpair failed and we were unable to recover it. 00:27:17.390 [2024-11-20 07:23:21.763142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.390 [2024-11-20 07:23:21.763173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.390 qpair failed and we were unable to recover it. 00:27:17.390 [2024-11-20 07:23:21.763345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.390 [2024-11-20 07:23:21.763375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.391 qpair failed and we were unable to recover it. 
00:27:17.391 [2024-11-20 07:23:21.763635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.391 [2024-11-20 07:23:21.763666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:17.391 qpair failed and we were unable to recover it.
(... the same three-line error sequence repeats 113 more times between 07:23:21.763934 and 07:23:21.788240, always for tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420, errno = 111 ...)
00:27:17.394 [2024-11-20 07:23:21.788414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.394 [2024-11-20 07:23:21.788446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:17.394 qpair failed and we were unable to recover it.
00:27:17.394 [2024-11-20 07:23:21.788700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.394 [2024-11-20 07:23:21.788731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.394 qpair failed and we were unable to recover it. 00:27:17.394 [2024-11-20 07:23:21.788968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.394 [2024-11-20 07:23:21.789001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.394 qpair failed and we were unable to recover it. 00:27:17.394 [2024-11-20 07:23:21.789124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.394 [2024-11-20 07:23:21.789156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.394 qpair failed and we were unable to recover it. 00:27:17.394 [2024-11-20 07:23:21.789262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.394 [2024-11-20 07:23:21.789294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.394 qpair failed and we were unable to recover it. 00:27:17.394 [2024-11-20 07:23:21.789479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.394 [2024-11-20 07:23:21.789510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.394 qpair failed and we were unable to recover it. 
00:27:17.394 [2024-11-20 07:23:21.789770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.394 [2024-11-20 07:23:21.789801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.394 qpair failed and we were unable to recover it. 00:27:17.394 [2024-11-20 07:23:21.789978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.394 [2024-11-20 07:23:21.790011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.394 qpair failed and we were unable to recover it. 00:27:17.394 [2024-11-20 07:23:21.790139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.395 [2024-11-20 07:23:21.790171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.395 qpair failed and we were unable to recover it. 00:27:17.395 [2024-11-20 07:23:21.790435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.395 [2024-11-20 07:23:21.790466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.395 qpair failed and we were unable to recover it. 00:27:17.395 [2024-11-20 07:23:21.790669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.395 [2024-11-20 07:23:21.790700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.395 qpair failed and we were unable to recover it. 
00:27:17.395 [2024-11-20 07:23:21.790965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.395 [2024-11-20 07:23:21.790999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.395 qpair failed and we were unable to recover it. 00:27:17.395 [2024-11-20 07:23:21.791121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.395 [2024-11-20 07:23:21.791152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.395 qpair failed and we were unable to recover it. 00:27:17.395 [2024-11-20 07:23:21.791264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.395 [2024-11-20 07:23:21.791294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.395 qpair failed and we were unable to recover it. 00:27:17.395 [2024-11-20 07:23:21.791482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.395 [2024-11-20 07:23:21.791513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.395 qpair failed and we were unable to recover it. 00:27:17.395 [2024-11-20 07:23:21.791697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.395 [2024-11-20 07:23:21.791728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.395 qpair failed and we were unable to recover it. 
00:27:17.395 [2024-11-20 07:23:21.791913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.395 [2024-11-20 07:23:21.791945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.395 qpair failed and we were unable to recover it. 00:27:17.395 [2024-11-20 07:23:21.792149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.395 [2024-11-20 07:23:21.792182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.395 qpair failed and we were unable to recover it. 00:27:17.395 [2024-11-20 07:23:21.792388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.395 [2024-11-20 07:23:21.792420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.395 qpair failed and we were unable to recover it. 00:27:17.395 [2024-11-20 07:23:21.792589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.395 [2024-11-20 07:23:21.792620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.395 qpair failed and we were unable to recover it. 00:27:17.395 [2024-11-20 07:23:21.792820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.395 [2024-11-20 07:23:21.792852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.395 qpair failed and we were unable to recover it. 
00:27:17.395 [2024-11-20 07:23:21.793094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.395 [2024-11-20 07:23:21.793127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.395 qpair failed and we were unable to recover it. 00:27:17.395 [2024-11-20 07:23:21.793331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.395 [2024-11-20 07:23:21.793362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.395 qpair failed and we were unable to recover it. 00:27:17.395 [2024-11-20 07:23:21.793600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.395 [2024-11-20 07:23:21.793631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.395 qpair failed and we were unable to recover it. 00:27:17.395 [2024-11-20 07:23:21.793803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.395 [2024-11-20 07:23:21.793835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.395 qpair failed and we were unable to recover it. 00:27:17.395 [2024-11-20 07:23:21.793961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.395 [2024-11-20 07:23:21.793993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.395 qpair failed and we were unable to recover it. 
00:27:17.395 [2024-11-20 07:23:21.794181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.395 [2024-11-20 07:23:21.794213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.395 qpair failed and we were unable to recover it. 00:27:17.395 [2024-11-20 07:23:21.794397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.395 [2024-11-20 07:23:21.794429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.395 qpair failed and we were unable to recover it. 00:27:17.395 [2024-11-20 07:23:21.794559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.395 [2024-11-20 07:23:21.794590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.395 qpair failed and we were unable to recover it. 00:27:17.395 [2024-11-20 07:23:21.794787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.395 [2024-11-20 07:23:21.794818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.395 qpair failed and we were unable to recover it. 00:27:17.395 [2024-11-20 07:23:21.794936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.395 [2024-11-20 07:23:21.794978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.395 qpair failed and we were unable to recover it. 
00:27:17.395 [2024-11-20 07:23:21.795188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.395 [2024-11-20 07:23:21.795219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.395 qpair failed and we were unable to recover it. 00:27:17.395 [2024-11-20 07:23:21.795395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.395 [2024-11-20 07:23:21.795427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.395 qpair failed and we were unable to recover it. 00:27:17.395 [2024-11-20 07:23:21.795614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.395 [2024-11-20 07:23:21.795646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.395 qpair failed and we were unable to recover it. 00:27:17.395 [2024-11-20 07:23:21.795862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.395 [2024-11-20 07:23:21.795893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.395 qpair failed and we were unable to recover it. 00:27:17.395 [2024-11-20 07:23:21.796021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.395 [2024-11-20 07:23:21.796053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.395 qpair failed and we were unable to recover it. 
00:27:17.395 [2024-11-20 07:23:21.796238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.395 [2024-11-20 07:23:21.796270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.395 qpair failed and we were unable to recover it. 00:27:17.395 [2024-11-20 07:23:21.796440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.395 [2024-11-20 07:23:21.796471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.395 qpair failed and we were unable to recover it. 00:27:17.395 [2024-11-20 07:23:21.796640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.395 [2024-11-20 07:23:21.796671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.395 qpair failed and we were unable to recover it. 00:27:17.395 [2024-11-20 07:23:21.796963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.395 [2024-11-20 07:23:21.796995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.395 qpair failed and we were unable to recover it. 00:27:17.396 [2024-11-20 07:23:21.797262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.396 [2024-11-20 07:23:21.797294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.396 qpair failed and we were unable to recover it. 
00:27:17.396 [2024-11-20 07:23:21.797484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.396 [2024-11-20 07:23:21.797515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.396 qpair failed and we were unable to recover it. 00:27:17.396 [2024-11-20 07:23:21.797633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.396 [2024-11-20 07:23:21.797664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.396 qpair failed and we were unable to recover it. 00:27:17.396 [2024-11-20 07:23:21.797784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.396 [2024-11-20 07:23:21.797816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.396 qpair failed and we were unable to recover it. 00:27:17.396 [2024-11-20 07:23:21.798028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.396 [2024-11-20 07:23:21.798061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.396 qpair failed and we were unable to recover it. 00:27:17.396 [2024-11-20 07:23:21.798346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.396 [2024-11-20 07:23:21.798377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.396 qpair failed and we were unable to recover it. 
00:27:17.396 [2024-11-20 07:23:21.798557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.396 [2024-11-20 07:23:21.798589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.396 qpair failed and we were unable to recover it. 00:27:17.396 [2024-11-20 07:23:21.798867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.396 [2024-11-20 07:23:21.798899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.396 qpair failed and we were unable to recover it. 00:27:17.396 [2024-11-20 07:23:21.799084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.396 [2024-11-20 07:23:21.799116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.396 qpair failed and we were unable to recover it. 00:27:17.396 [2024-11-20 07:23:21.799288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.396 [2024-11-20 07:23:21.799319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.396 qpair failed and we were unable to recover it. 00:27:17.396 [2024-11-20 07:23:21.799511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.396 [2024-11-20 07:23:21.799543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.396 qpair failed and we were unable to recover it. 
00:27:17.396 [2024-11-20 07:23:21.799797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.396 [2024-11-20 07:23:21.799828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.396 qpair failed and we were unable to recover it. 00:27:17.396 [2024-11-20 07:23:21.800107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.396 [2024-11-20 07:23:21.800139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.396 qpair failed and we were unable to recover it. 00:27:17.396 [2024-11-20 07:23:21.800306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.396 [2024-11-20 07:23:21.800338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.396 qpair failed and we were unable to recover it. 00:27:17.396 [2024-11-20 07:23:21.800593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.396 [2024-11-20 07:23:21.800624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.396 qpair failed and we were unable to recover it. 00:27:17.396 [2024-11-20 07:23:21.800818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.396 [2024-11-20 07:23:21.800849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.396 qpair failed and we were unable to recover it. 
00:27:17.396 [2024-11-20 07:23:21.801111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.396 [2024-11-20 07:23:21.801143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.396 qpair failed and we were unable to recover it. 00:27:17.396 [2024-11-20 07:23:21.801427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.396 [2024-11-20 07:23:21.801458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.396 qpair failed and we were unable to recover it. 00:27:17.396 [2024-11-20 07:23:21.801643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.396 [2024-11-20 07:23:21.801675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.396 qpair failed and we were unable to recover it. 00:27:17.396 [2024-11-20 07:23:21.801913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.396 [2024-11-20 07:23:21.801944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.396 qpair failed and we were unable to recover it. 00:27:17.396 [2024-11-20 07:23:21.802160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.396 [2024-11-20 07:23:21.802197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.396 qpair failed and we were unable to recover it. 
00:27:17.396 [2024-11-20 07:23:21.802334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.396 [2024-11-20 07:23:21.802365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.396 qpair failed and we were unable to recover it. 00:27:17.396 [2024-11-20 07:23:21.802544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.396 [2024-11-20 07:23:21.802575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.396 qpair failed and we were unable to recover it. 00:27:17.396 [2024-11-20 07:23:21.802695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.396 [2024-11-20 07:23:21.802727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.396 qpair failed and we were unable to recover it. 00:27:17.396 [2024-11-20 07:23:21.802969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.396 [2024-11-20 07:23:21.803003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.396 qpair failed and we were unable to recover it. 00:27:17.396 [2024-11-20 07:23:21.803176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.396 [2024-11-20 07:23:21.803207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.396 qpair failed and we were unable to recover it. 
00:27:17.396 [2024-11-20 07:23:21.803447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.396 [2024-11-20 07:23:21.803478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.396 qpair failed and we were unable to recover it. 00:27:17.396 [2024-11-20 07:23:21.803672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.396 [2024-11-20 07:23:21.803717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.396 qpair failed and we were unable to recover it. 00:27:17.396 [2024-11-20 07:23:21.804007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.396 [2024-11-20 07:23:21.804052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.396 qpair failed and we were unable to recover it. 00:27:17.396 [2024-11-20 07:23:21.804337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.396 [2024-11-20 07:23:21.804378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.396 qpair failed and we were unable to recover it. 00:27:17.396 [2024-11-20 07:23:21.804604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.396 [2024-11-20 07:23:21.804638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.396 qpair failed and we were unable to recover it. 
00:27:17.396 [2024-11-20 07:23:21.804827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.397 [2024-11-20 07:23:21.804859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.397 qpair failed and we were unable to recover it. 00:27:17.397 [2024-11-20 07:23:21.805087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.397 [2024-11-20 07:23:21.805121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.397 qpair failed and we were unable to recover it. 00:27:17.397 [2024-11-20 07:23:21.805320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.397 [2024-11-20 07:23:21.805352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.397 qpair failed and we were unable to recover it. 00:27:17.397 [2024-11-20 07:23:21.805596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.397 [2024-11-20 07:23:21.805629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.397 qpair failed and we were unable to recover it. 00:27:17.397 [2024-11-20 07:23:21.805841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.397 [2024-11-20 07:23:21.805874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.397 qpair failed and we were unable to recover it. 
00:27:17.397 [2024-11-20 07:23:21.806004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.397 [2024-11-20 07:23:21.806037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:17.397 qpair failed and we were unable to recover it.
00:27:17.397 [... the same three-line error sequence (connect() failed, errno = 111; sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats continuously from 07:23:21.806 through 07:23:21.831 ...]
00:27:17.400 [2024-11-20 07:23:21.831625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.400 [2024-11-20 07:23:21.831656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.400 qpair failed and we were unable to recover it. 00:27:17.400 [2024-11-20 07:23:21.831901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.400 [2024-11-20 07:23:21.831932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.400 qpair failed and we were unable to recover it. 00:27:17.400 [2024-11-20 07:23:21.832151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.400 [2024-11-20 07:23:21.832183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.400 qpair failed and we were unable to recover it. 00:27:17.400 [2024-11-20 07:23:21.832449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.400 [2024-11-20 07:23:21.832482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.400 qpair failed and we were unable to recover it. 00:27:17.400 [2024-11-20 07:23:21.832726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.400 [2024-11-20 07:23:21.832757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.400 qpair failed and we were unable to recover it. 
00:27:17.400 [2024-11-20 07:23:21.832882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.400 [2024-11-20 07:23:21.832916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.400 qpair failed and we were unable to recover it. 00:27:17.400 [2024-11-20 07:23:21.833099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.400 [2024-11-20 07:23:21.833131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.400 qpair failed and we were unable to recover it. 00:27:17.400 [2024-11-20 07:23:21.833378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.400 [2024-11-20 07:23:21.833410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.400 qpair failed and we were unable to recover it. 00:27:17.400 [2024-11-20 07:23:21.833616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.400 [2024-11-20 07:23:21.833649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.400 qpair failed and we were unable to recover it. 00:27:17.400 [2024-11-20 07:23:21.833833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.400 [2024-11-20 07:23:21.833865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.400 qpair failed and we were unable to recover it. 
00:27:17.400 [2024-11-20 07:23:21.834050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.400 [2024-11-20 07:23:21.834083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.400 qpair failed and we were unable to recover it. 00:27:17.400 [2024-11-20 07:23:21.834283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.400 [2024-11-20 07:23:21.834314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.400 qpair failed and we were unable to recover it. 00:27:17.400 [2024-11-20 07:23:21.834499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.401 [2024-11-20 07:23:21.834531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.401 qpair failed and we were unable to recover it. 00:27:17.401 [2024-11-20 07:23:21.834775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.401 [2024-11-20 07:23:21.834805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.401 qpair failed and we were unable to recover it. 00:27:17.401 [2024-11-20 07:23:21.834999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.401 [2024-11-20 07:23:21.835032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.401 qpair failed and we were unable to recover it. 
00:27:17.401 [2024-11-20 07:23:21.835230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.401 [2024-11-20 07:23:21.835262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.401 qpair failed and we were unable to recover it. 00:27:17.401 [2024-11-20 07:23:21.835409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.401 [2024-11-20 07:23:21.835442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.401 qpair failed and we were unable to recover it. 00:27:17.401 [2024-11-20 07:23:21.835648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.401 [2024-11-20 07:23:21.835679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.401 qpair failed and we were unable to recover it. 00:27:17.401 [2024-11-20 07:23:21.835789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.401 [2024-11-20 07:23:21.835821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.401 qpair failed and we were unable to recover it. 00:27:17.401 [2024-11-20 07:23:21.836062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.401 [2024-11-20 07:23:21.836095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.401 qpair failed and we were unable to recover it. 
00:27:17.401 [2024-11-20 07:23:21.836271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.401 [2024-11-20 07:23:21.836303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.401 qpair failed and we were unable to recover it. 00:27:17.401 [2024-11-20 07:23:21.836515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.401 [2024-11-20 07:23:21.836547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.401 qpair failed and we were unable to recover it. 00:27:17.401 [2024-11-20 07:23:21.836729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.401 [2024-11-20 07:23:21.836761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.401 qpair failed and we were unable to recover it. 00:27:17.401 [2024-11-20 07:23:21.836881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.401 [2024-11-20 07:23:21.836914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.401 qpair failed and we were unable to recover it. 00:27:17.401 [2024-11-20 07:23:21.837055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.401 [2024-11-20 07:23:21.837089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.401 qpair failed and we were unable to recover it. 
00:27:17.401 [2024-11-20 07:23:21.837266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.401 [2024-11-20 07:23:21.837298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.401 qpair failed and we were unable to recover it. 00:27:17.401 [2024-11-20 07:23:21.837472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.401 [2024-11-20 07:23:21.837504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.401 qpair failed and we were unable to recover it. 00:27:17.401 [2024-11-20 07:23:21.837767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.401 [2024-11-20 07:23:21.837799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.401 qpair failed and we were unable to recover it. 00:27:17.401 [2024-11-20 07:23:21.837921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.401 [2024-11-20 07:23:21.837962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.401 qpair failed and we were unable to recover it. 00:27:17.401 [2024-11-20 07:23:21.838102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.401 [2024-11-20 07:23:21.838140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.401 qpair failed and we were unable to recover it. 
00:27:17.401 [2024-11-20 07:23:21.838366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.401 [2024-11-20 07:23:21.838398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.401 qpair failed and we were unable to recover it. 00:27:17.401 [2024-11-20 07:23:21.838580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.401 [2024-11-20 07:23:21.838612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.401 qpair failed and we were unable to recover it. 00:27:17.401 [2024-11-20 07:23:21.838748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.401 [2024-11-20 07:23:21.838780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.401 qpair failed and we were unable to recover it. 00:27:17.401 [2024-11-20 07:23:21.838963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.401 [2024-11-20 07:23:21.838996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.401 qpair failed and we were unable to recover it. 00:27:17.401 [2024-11-20 07:23:21.839105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.401 [2024-11-20 07:23:21.839137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.401 qpair failed and we were unable to recover it. 
00:27:17.401 [2024-11-20 07:23:21.839328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.401 [2024-11-20 07:23:21.839362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.401 qpair failed and we were unable to recover it. 00:27:17.401 [2024-11-20 07:23:21.839563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.401 [2024-11-20 07:23:21.839594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.401 qpair failed and we were unable to recover it. 00:27:17.401 [2024-11-20 07:23:21.839727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.401 [2024-11-20 07:23:21.839759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.401 qpair failed and we were unable to recover it. 00:27:17.401 [2024-11-20 07:23:21.839935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.401 [2024-11-20 07:23:21.839979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.401 qpair failed and we were unable to recover it. 00:27:17.401 [2024-11-20 07:23:21.840175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.401 [2024-11-20 07:23:21.840207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.401 qpair failed and we were unable to recover it. 
00:27:17.401 [2024-11-20 07:23:21.840401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.401 [2024-11-20 07:23:21.840433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.401 qpair failed and we were unable to recover it. 00:27:17.401 [2024-11-20 07:23:21.840622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.401 [2024-11-20 07:23:21.840654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.401 qpair failed and we were unable to recover it. 00:27:17.401 [2024-11-20 07:23:21.840776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.401 [2024-11-20 07:23:21.840808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.401 qpair failed and we were unable to recover it. 00:27:17.401 [2024-11-20 07:23:21.841084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.401 [2024-11-20 07:23:21.841117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.401 qpair failed and we were unable to recover it. 00:27:17.401 [2024-11-20 07:23:21.841288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.401 [2024-11-20 07:23:21.841321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.401 qpair failed and we were unable to recover it. 
00:27:17.401 [2024-11-20 07:23:21.841443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.401 [2024-11-20 07:23:21.841475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.401 qpair failed and we were unable to recover it. 00:27:17.401 [2024-11-20 07:23:21.841657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.401 [2024-11-20 07:23:21.841688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.402 qpair failed and we were unable to recover it. 00:27:17.402 [2024-11-20 07:23:21.841813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.402 [2024-11-20 07:23:21.841846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.402 qpair failed and we were unable to recover it. 00:27:17.402 [2024-11-20 07:23:21.842020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.402 [2024-11-20 07:23:21.842053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.402 qpair failed and we were unable to recover it. 00:27:17.402 [2024-11-20 07:23:21.842244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.402 [2024-11-20 07:23:21.842275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.402 qpair failed and we were unable to recover it. 
00:27:17.402 [2024-11-20 07:23:21.842392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.402 [2024-11-20 07:23:21.842423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.402 qpair failed and we were unable to recover it. 00:27:17.402 [2024-11-20 07:23:21.842530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.402 [2024-11-20 07:23:21.842560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.402 qpair failed and we were unable to recover it. 00:27:17.402 [2024-11-20 07:23:21.842701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.402 [2024-11-20 07:23:21.842732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.402 qpair failed and we were unable to recover it. 00:27:17.402 [2024-11-20 07:23:21.842839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.402 [2024-11-20 07:23:21.842870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.402 qpair failed and we were unable to recover it. 00:27:17.402 [2024-11-20 07:23:21.843049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.402 [2024-11-20 07:23:21.843082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.402 qpair failed and we were unable to recover it. 
00:27:17.402 [2024-11-20 07:23:21.843188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.402 [2024-11-20 07:23:21.843218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.402 qpair failed and we were unable to recover it. 00:27:17.402 [2024-11-20 07:23:21.843483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.402 [2024-11-20 07:23:21.843515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.402 qpair failed and we were unable to recover it. 00:27:17.402 [2024-11-20 07:23:21.843692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.402 [2024-11-20 07:23:21.843723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.402 qpair failed and we were unable to recover it. 00:27:17.402 [2024-11-20 07:23:21.843904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.402 [2024-11-20 07:23:21.843935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.402 qpair failed and we were unable to recover it. 00:27:17.402 [2024-11-20 07:23:21.844141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.402 [2024-11-20 07:23:21.844174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.402 qpair failed and we were unable to recover it. 
00:27:17.402 [2024-11-20 07:23:21.844365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.402 [2024-11-20 07:23:21.844396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.402 qpair failed and we were unable to recover it. 00:27:17.402 [2024-11-20 07:23:21.844532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.402 [2024-11-20 07:23:21.844563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.402 qpair failed and we were unable to recover it. 00:27:17.402 [2024-11-20 07:23:21.844771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.402 [2024-11-20 07:23:21.844803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.402 qpair failed and we were unable to recover it. 00:27:17.402 [2024-11-20 07:23:21.844923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.402 [2024-11-20 07:23:21.844966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.402 qpair failed and we were unable to recover it. 00:27:17.402 [2024-11-20 07:23:21.845158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.402 [2024-11-20 07:23:21.845190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.402 qpair failed and we were unable to recover it. 
00:27:17.402 [2024-11-20 07:23:21.845319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.402 [2024-11-20 07:23:21.845350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.402 qpair failed and we were unable to recover it. 00:27:17.402 [2024-11-20 07:23:21.845545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.402 [2024-11-20 07:23:21.845577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.402 qpair failed and we were unable to recover it. 00:27:17.402 [2024-11-20 07:23:21.845711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.402 [2024-11-20 07:23:21.845742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.402 qpair failed and we were unable to recover it. 00:27:17.402 [2024-11-20 07:23:21.845927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.402 [2024-11-20 07:23:21.845983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.402 qpair failed and we were unable to recover it. 00:27:17.402 [2024-11-20 07:23:21.846105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.402 [2024-11-20 07:23:21.846141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.402 qpair failed and we were unable to recover it. 
00:27:17.402 [2024-11-20 07:23:21.846383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.402 [2024-11-20 07:23:21.846415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.402 qpair failed and we were unable to recover it. 00:27:17.402 [2024-11-20 07:23:21.846655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.402 [2024-11-20 07:23:21.846687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.402 qpair failed and we were unable to recover it. 00:27:17.402 [2024-11-20 07:23:21.846803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.402 [2024-11-20 07:23:21.846835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.402 qpair failed and we were unable to recover it. 00:27:17.402 [2024-11-20 07:23:21.846970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.402 [2024-11-20 07:23:21.847003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.402 qpair failed and we were unable to recover it. 00:27:17.402 [2024-11-20 07:23:21.847142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.402 [2024-11-20 07:23:21.847173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.402 qpair failed and we were unable to recover it. 
00:27:17.402 [2024-11-20 07:23:21.847305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.402 [2024-11-20 07:23:21.847336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.402 qpair failed and we were unable to recover it.
00:27:17.402 [... the preceding connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it" sequence repeats for tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 through 2024-11-20 07:23:21.866623 ...]
00:27:17.405 [2024-11-20 07:23:21.866787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.405 [2024-11-20 07:23:21.866858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.405 qpair failed and we were unable to recover it.
00:27:17.406 [... the same sequence repeats for tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 through 2024-11-20 07:23:21.869178 ...]
00:27:17.406 [2024-11-20 07:23:21.869351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.406 [2024-11-20 07:23:21.869382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.406 qpair failed and we were unable to recover it. 00:27:17.406 [2024-11-20 07:23:21.869577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.406 [2024-11-20 07:23:21.869609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.406 qpair failed and we were unable to recover it. 00:27:17.406 [2024-11-20 07:23:21.869813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.406 [2024-11-20 07:23:21.869846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.406 qpair failed and we were unable to recover it. 00:27:17.406 [2024-11-20 07:23:21.870030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.406 [2024-11-20 07:23:21.870062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.406 qpair failed and we were unable to recover it. 00:27:17.406 [2024-11-20 07:23:21.870269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.406 [2024-11-20 07:23:21.870300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.406 qpair failed and we were unable to recover it. 
00:27:17.406 [2024-11-20 07:23:21.870477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.406 [2024-11-20 07:23:21.870510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.406 qpair failed and we were unable to recover it. 00:27:17.406 [2024-11-20 07:23:21.870632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.406 [2024-11-20 07:23:21.870664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.406 qpair failed and we were unable to recover it. 00:27:17.406 [2024-11-20 07:23:21.870846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.406 [2024-11-20 07:23:21.870877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.406 qpair failed and we were unable to recover it. 00:27:17.406 [2024-11-20 07:23:21.871004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.406 [2024-11-20 07:23:21.871037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.406 qpair failed and we were unable to recover it. 00:27:17.406 [2024-11-20 07:23:21.871225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.406 [2024-11-20 07:23:21.871256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.406 qpair failed and we were unable to recover it. 
00:27:17.406 [2024-11-20 07:23:21.871505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.406 [2024-11-20 07:23:21.871536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.406 qpair failed and we were unable to recover it. 00:27:17.406 [2024-11-20 07:23:21.871664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.406 [2024-11-20 07:23:21.871695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.406 qpair failed and we were unable to recover it. 00:27:17.406 [2024-11-20 07:23:21.871800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.406 [2024-11-20 07:23:21.871831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.406 qpair failed and we were unable to recover it. 00:27:17.406 [2024-11-20 07:23:21.872096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.406 [2024-11-20 07:23:21.872130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.406 qpair failed and we were unable to recover it. 00:27:17.406 [2024-11-20 07:23:21.872305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.406 [2024-11-20 07:23:21.872336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.406 qpair failed and we were unable to recover it. 
00:27:17.406 [2024-11-20 07:23:21.872543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.406 [2024-11-20 07:23:21.872574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.406 qpair failed and we were unable to recover it. 00:27:17.406 [2024-11-20 07:23:21.872754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.406 [2024-11-20 07:23:21.872786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.406 qpair failed and we were unable to recover it. 00:27:17.406 [2024-11-20 07:23:21.872907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.406 [2024-11-20 07:23:21.872939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.406 qpair failed and we were unable to recover it. 00:27:17.406 [2024-11-20 07:23:21.873116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.406 [2024-11-20 07:23:21.873147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.406 qpair failed and we were unable to recover it. 00:27:17.406 [2024-11-20 07:23:21.873279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.406 [2024-11-20 07:23:21.873310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.406 qpair failed and we were unable to recover it. 
00:27:17.406 [2024-11-20 07:23:21.873491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.406 [2024-11-20 07:23:21.873521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.406 qpair failed and we were unable to recover it. 00:27:17.406 [2024-11-20 07:23:21.873708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.406 [2024-11-20 07:23:21.873739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.406 qpair failed and we were unable to recover it. 00:27:17.406 [2024-11-20 07:23:21.873925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.406 [2024-11-20 07:23:21.873963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.406 qpair failed and we were unable to recover it. 00:27:17.406 [2024-11-20 07:23:21.874083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.406 [2024-11-20 07:23:21.874114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.406 qpair failed and we were unable to recover it. 00:27:17.406 [2024-11-20 07:23:21.874303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.406 [2024-11-20 07:23:21.874334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.406 qpair failed and we were unable to recover it. 
00:27:17.406 [2024-11-20 07:23:21.874521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.406 [2024-11-20 07:23:21.874553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.406 qpair failed and we were unable to recover it. 00:27:17.406 [2024-11-20 07:23:21.874676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.406 [2024-11-20 07:23:21.874708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.406 qpair failed and we were unable to recover it. 00:27:17.406 [2024-11-20 07:23:21.874891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.406 [2024-11-20 07:23:21.874933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.406 qpair failed and we were unable to recover it. 00:27:17.406 [2024-11-20 07:23:21.875096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.406 [2024-11-20 07:23:21.875141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.406 qpair failed and we were unable to recover it. 00:27:17.406 [2024-11-20 07:23:21.875339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.406 [2024-11-20 07:23:21.875374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.406 qpair failed and we were unable to recover it. 
00:27:17.406 [2024-11-20 07:23:21.875560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.406 [2024-11-20 07:23:21.875593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.406 qpair failed and we were unable to recover it. 00:27:17.406 [2024-11-20 07:23:21.875713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.406 [2024-11-20 07:23:21.875747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.406 qpair failed and we were unable to recover it. 00:27:17.406 [2024-11-20 07:23:21.875876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.406 [2024-11-20 07:23:21.875922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.406 qpair failed and we were unable to recover it. 00:27:17.406 [2024-11-20 07:23:21.876058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.406 [2024-11-20 07:23:21.876091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.406 qpair failed and we were unable to recover it. 00:27:17.406 [2024-11-20 07:23:21.876266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.406 [2024-11-20 07:23:21.876299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.406 qpair failed and we were unable to recover it. 
00:27:17.406 [2024-11-20 07:23:21.876565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.406 [2024-11-20 07:23:21.876598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.406 qpair failed and we were unable to recover it. 00:27:17.406 [2024-11-20 07:23:21.876718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.406 [2024-11-20 07:23:21.876751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.406 qpair failed and we were unable to recover it. 00:27:17.406 [2024-11-20 07:23:21.876925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.406 [2024-11-20 07:23:21.876972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.406 qpair failed and we were unable to recover it. 00:27:17.407 [2024-11-20 07:23:21.877167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.407 [2024-11-20 07:23:21.877204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.407 qpair failed and we were unable to recover it. 00:27:17.407 [2024-11-20 07:23:21.877409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.407 [2024-11-20 07:23:21.877464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.407 qpair failed and we were unable to recover it. 
00:27:17.407 [2024-11-20 07:23:21.877724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.407 [2024-11-20 07:23:21.877758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.407 qpair failed and we were unable to recover it. 00:27:17.407 [2024-11-20 07:23:21.877888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.407 [2024-11-20 07:23:21.877919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.407 qpair failed and we were unable to recover it. 00:27:17.407 [2024-11-20 07:23:21.878052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.407 [2024-11-20 07:23:21.878098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.407 qpair failed and we were unable to recover it. 00:27:17.407 [2024-11-20 07:23:21.878325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.407 [2024-11-20 07:23:21.878357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.407 qpair failed and we were unable to recover it. 00:27:17.407 [2024-11-20 07:23:21.878547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.407 [2024-11-20 07:23:21.878579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.407 qpair failed and we were unable to recover it. 
00:27:17.407 [2024-11-20 07:23:21.878762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.407 [2024-11-20 07:23:21.878794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.407 qpair failed and we were unable to recover it. 00:27:17.407 [2024-11-20 07:23:21.878929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.407 [2024-11-20 07:23:21.878974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.407 qpair failed and we were unable to recover it. 00:27:17.407 [2024-11-20 07:23:21.879159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.407 [2024-11-20 07:23:21.879190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.407 qpair failed and we were unable to recover it. 00:27:17.407 [2024-11-20 07:23:21.879312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.407 [2024-11-20 07:23:21.879348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.407 qpair failed and we were unable to recover it. 00:27:17.407 [2024-11-20 07:23:21.879666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.407 [2024-11-20 07:23:21.879703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.407 qpair failed and we were unable to recover it. 
00:27:17.407 [2024-11-20 07:23:21.879830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.407 [2024-11-20 07:23:21.879862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.407 qpair failed and we were unable to recover it. 00:27:17.407 [2024-11-20 07:23:21.880047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.407 [2024-11-20 07:23:21.880082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.693 qpair failed and we were unable to recover it. 00:27:17.693 [2024-11-20 07:23:21.880212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.693 [2024-11-20 07:23:21.880244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.693 qpair failed and we were unable to recover it. 00:27:17.693 [2024-11-20 07:23:21.880444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.693 [2024-11-20 07:23:21.880476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.693 qpair failed and we were unable to recover it. 00:27:17.693 [2024-11-20 07:23:21.880585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.693 [2024-11-20 07:23:21.880616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.693 qpair failed and we were unable to recover it. 
00:27:17.693 [2024-11-20 07:23:21.880741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.693 [2024-11-20 07:23:21.880772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.693 qpair failed and we were unable to recover it. 00:27:17.693 [2024-11-20 07:23:21.880892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.693 [2024-11-20 07:23:21.880924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.693 qpair failed and we were unable to recover it. 00:27:17.693 [2024-11-20 07:23:21.881154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.693 [2024-11-20 07:23:21.881203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.693 qpair failed and we were unable to recover it. 00:27:17.693 [2024-11-20 07:23:21.881346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.693 [2024-11-20 07:23:21.881389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.693 qpair failed and we were unable to recover it. 00:27:17.693 [2024-11-20 07:23:21.881550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.693 [2024-11-20 07:23:21.881598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.693 qpair failed and we were unable to recover it. 
00:27:17.693 [2024-11-20 07:23:21.881811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.693 [2024-11-20 07:23:21.881853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.693 qpair failed and we were unable to recover it. 00:27:17.693 [2024-11-20 07:23:21.882075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.693 [2024-11-20 07:23:21.882120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.693 qpair failed and we were unable to recover it. 00:27:17.693 [2024-11-20 07:23:21.882282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.693 [2024-11-20 07:23:21.882329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.693 qpair failed and we were unable to recover it. 00:27:17.693 [2024-11-20 07:23:21.882488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.693 [2024-11-20 07:23:21.882530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.693 qpair failed and we were unable to recover it. 00:27:17.693 [2024-11-20 07:23:21.882742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.693 [2024-11-20 07:23:21.882787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.693 qpair failed and we were unable to recover it. 
00:27:17.693 [2024-11-20 07:23:21.882926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.693 [2024-11-20 07:23:21.882993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.693 qpair failed and we were unable to recover it. 00:27:17.694 [2024-11-20 07:23:21.883248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.694 [2024-11-20 07:23:21.883323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.694 qpair failed and we were unable to recover it. 00:27:17.694 [2024-11-20 07:23:21.883479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.694 [2024-11-20 07:23:21.883515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.694 qpair failed and we were unable to recover it. 00:27:17.694 [2024-11-20 07:23:21.883637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.694 [2024-11-20 07:23:21.883673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.694 qpair failed and we were unable to recover it. 00:27:17.694 [2024-11-20 07:23:21.883799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.694 [2024-11-20 07:23:21.883831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.694 qpair failed and we were unable to recover it. 
00:27:17.694 [2024-11-20 07:23:21.884107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.694 [2024-11-20 07:23:21.884146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.694 qpair failed and we were unable to recover it. 00:27:17.694 [2024-11-20 07:23:21.884401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.694 [2024-11-20 07:23:21.884438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.694 qpair failed and we were unable to recover it. 00:27:17.694 [2024-11-20 07:23:21.884569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.694 [2024-11-20 07:23:21.884605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.694 qpair failed and we were unable to recover it. 00:27:17.694 [2024-11-20 07:23:21.884742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.694 [2024-11-20 07:23:21.884775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.694 qpair failed and we were unable to recover it. 00:27:17.694 [2024-11-20 07:23:21.884899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.694 [2024-11-20 07:23:21.884934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.694 qpair failed and we were unable to recover it. 
00:27:17.694 [2024-11-20 07:23:21.885142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.694 [2024-11-20 07:23:21.885174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.694 qpair failed and we were unable to recover it. 00:27:17.694 [2024-11-20 07:23:21.885446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.694 [2024-11-20 07:23:21.885481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.694 qpair failed and we were unable to recover it. 00:27:17.694 [2024-11-20 07:23:21.885612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.694 [2024-11-20 07:23:21.885644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.694 qpair failed and we were unable to recover it. 00:27:17.694 [2024-11-20 07:23:21.885822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.694 [2024-11-20 07:23:21.885857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.694 qpair failed and we were unable to recover it. 00:27:17.694 [2024-11-20 07:23:21.886061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.694 [2024-11-20 07:23:21.886095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.694 qpair failed and we were unable to recover it. 
00:27:17.694 [2024-11-20 07:23:21.886306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.694 [2024-11-20 07:23:21.886340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.694 qpair failed and we were unable to recover it. 00:27:17.694 [2024-11-20 07:23:21.886459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.694 [2024-11-20 07:23:21.886493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.694 qpair failed and we were unable to recover it. 00:27:17.694 [2024-11-20 07:23:21.886614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.694 [2024-11-20 07:23:21.886646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.694 qpair failed and we were unable to recover it. 00:27:17.694 [2024-11-20 07:23:21.886832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.694 [2024-11-20 07:23:21.886864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.694 qpair failed and we were unable to recover it. 00:27:17.694 [2024-11-20 07:23:21.887051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.694 [2024-11-20 07:23:21.887085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.694 qpair failed and we were unable to recover it. 
00:27:17.694 [2024-11-20 07:23:21.887199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.694 [2024-11-20 07:23:21.887231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.694 qpair failed and we were unable to recover it. 00:27:17.694 [2024-11-20 07:23:21.887358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.694 [2024-11-20 07:23:21.887388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.694 qpair failed and we were unable to recover it. 00:27:17.694 [2024-11-20 07:23:21.887508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.694 [2024-11-20 07:23:21.887540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.694 qpair failed and we were unable to recover it. 00:27:17.694 [2024-11-20 07:23:21.887666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.694 [2024-11-20 07:23:21.887697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.694 qpair failed and we were unable to recover it. 00:27:17.694 [2024-11-20 07:23:21.887813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.694 [2024-11-20 07:23:21.887844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.694 qpair failed and we were unable to recover it. 
00:27:17.694 [2024-11-20 07:23:21.887962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.694 [2024-11-20 07:23:21.887995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.694 qpair failed and we were unable to recover it. 00:27:17.694 [2024-11-20 07:23:21.888237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.694 [2024-11-20 07:23:21.888269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.694 qpair failed and we were unable to recover it. 00:27:17.694 [2024-11-20 07:23:21.888451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.694 [2024-11-20 07:23:21.888483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.694 qpair failed and we were unable to recover it. 00:27:17.694 [2024-11-20 07:23:21.888674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.694 [2024-11-20 07:23:21.888714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.694 qpair failed and we were unable to recover it. 00:27:17.694 [2024-11-20 07:23:21.888899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.695 [2024-11-20 07:23:21.888931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.695 qpair failed and we were unable to recover it. 
00:27:17.695 [2024-11-20 07:23:21.889179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.695 [2024-11-20 07:23:21.889211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.695 qpair failed and we were unable to recover it. 00:27:17.695 [2024-11-20 07:23:21.889344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.695 [2024-11-20 07:23:21.889377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.695 qpair failed and we were unable to recover it. 00:27:17.695 [2024-11-20 07:23:21.889557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.695 [2024-11-20 07:23:21.889590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.695 qpair failed and we were unable to recover it. 00:27:17.695 [2024-11-20 07:23:21.889771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.695 [2024-11-20 07:23:21.889802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.695 qpair failed and we were unable to recover it. 00:27:17.695 [2024-11-20 07:23:21.889924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.695 [2024-11-20 07:23:21.889968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.695 qpair failed and we were unable to recover it. 
00:27:17.695 [2024-11-20 07:23:21.890088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.695 [2024-11-20 07:23:21.890120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.695 qpair failed and we were unable to recover it. 00:27:17.695 [2024-11-20 07:23:21.890237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.695 [2024-11-20 07:23:21.890268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.695 qpair failed and we were unable to recover it. 00:27:17.695 [2024-11-20 07:23:21.890459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.695 [2024-11-20 07:23:21.890491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.695 qpair failed and we were unable to recover it. 00:27:17.695 [2024-11-20 07:23:21.890696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.695 [2024-11-20 07:23:21.890728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.695 qpair failed and we were unable to recover it. 00:27:17.695 [2024-11-20 07:23:21.890919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.695 [2024-11-20 07:23:21.890972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.695 qpair failed and we were unable to recover it. 
00:27:17.695 [2024-11-20 07:23:21.891155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.695 [2024-11-20 07:23:21.891186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.695 qpair failed and we were unable to recover it. 00:27:17.695 [2024-11-20 07:23:21.891454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.695 [2024-11-20 07:23:21.891485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.695 qpair failed and we were unable to recover it. 00:27:17.695 [2024-11-20 07:23:21.891608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.695 [2024-11-20 07:23:21.891640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.695 qpair failed and we were unable to recover it. 00:27:17.695 [2024-11-20 07:23:21.891776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.695 [2024-11-20 07:23:21.891807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.695 qpair failed and we were unable to recover it. 00:27:17.695 [2024-11-20 07:23:21.891915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.695 [2024-11-20 07:23:21.891946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.695 qpair failed and we were unable to recover it. 
00:27:17.695 [2024-11-20 07:23:21.892105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.695 [2024-11-20 07:23:21.892137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.695 qpair failed and we were unable to recover it. 00:27:17.695 [2024-11-20 07:23:21.892329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.695 [2024-11-20 07:23:21.892360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.695 qpair failed and we were unable to recover it. 00:27:17.695 [2024-11-20 07:23:21.892544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.695 [2024-11-20 07:23:21.892574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.695 qpair failed and we were unable to recover it. 00:27:17.695 [2024-11-20 07:23:21.892758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.695 [2024-11-20 07:23:21.892789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.695 qpair failed and we were unable to recover it. 00:27:17.695 [2024-11-20 07:23:21.892907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.695 [2024-11-20 07:23:21.892938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.695 qpair failed and we were unable to recover it. 
00:27:17.695 [2024-11-20 07:23:21.893071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.695 [2024-11-20 07:23:21.893103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.695 qpair failed and we were unable to recover it. 00:27:17.695 [2024-11-20 07:23:21.893247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.695 [2024-11-20 07:23:21.893278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.695 qpair failed and we were unable to recover it. 00:27:17.695 [2024-11-20 07:23:21.893416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.695 [2024-11-20 07:23:21.893449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.695 qpair failed and we were unable to recover it. 00:27:17.695 [2024-11-20 07:23:21.893710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.695 [2024-11-20 07:23:21.893781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.695 qpair failed and we were unable to recover it. 00:27:17.695 [2024-11-20 07:23:21.893986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.695 [2024-11-20 07:23:21.894024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.695 qpair failed and we were unable to recover it. 
00:27:17.695 [2024-11-20 07:23:21.894143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.695 [2024-11-20 07:23:21.894187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.695 qpair failed and we were unable to recover it. 00:27:17.695 [2024-11-20 07:23:21.894303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.695 [2024-11-20 07:23:21.894335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.695 qpair failed and we were unable to recover it. 00:27:17.695 [2024-11-20 07:23:21.894511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.695 [2024-11-20 07:23:21.894543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.695 qpair failed and we were unable to recover it. 00:27:17.695 [2024-11-20 07:23:21.894715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.695 [2024-11-20 07:23:21.894746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.696 qpair failed and we were unable to recover it. 00:27:17.696 [2024-11-20 07:23:21.895032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.696 [2024-11-20 07:23:21.895071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.696 qpair failed and we were unable to recover it. 
00:27:17.696 [2024-11-20 07:23:21.895263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.696 [2024-11-20 07:23:21.895295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.696 qpair failed and we were unable to recover it. 00:27:17.696 [2024-11-20 07:23:21.895421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.696 [2024-11-20 07:23:21.895453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.696 qpair failed and we were unable to recover it. 00:27:17.696 [2024-11-20 07:23:21.895626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.696 [2024-11-20 07:23:21.895658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.696 qpair failed and we were unable to recover it. 00:27:17.696 [2024-11-20 07:23:21.895837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.696 [2024-11-20 07:23:21.895869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.696 qpair failed and we were unable to recover it. 00:27:17.696 [2024-11-20 07:23:21.896126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.696 [2024-11-20 07:23:21.896160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.696 qpair failed and we were unable to recover it. 
00:27:17.696 [2024-11-20 07:23:21.896347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.696 [2024-11-20 07:23:21.896379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.696 qpair failed and we were unable to recover it. 00:27:17.696 [2024-11-20 07:23:21.896554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.696 [2024-11-20 07:23:21.896586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.696 qpair failed and we were unable to recover it. 00:27:17.696 [2024-11-20 07:23:21.896768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.696 [2024-11-20 07:23:21.896799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.696 qpair failed and we were unable to recover it. 00:27:17.696 [2024-11-20 07:23:21.897005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.696 [2024-11-20 07:23:21.897039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.696 qpair failed and we were unable to recover it. 00:27:17.696 [2024-11-20 07:23:21.897208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.696 [2024-11-20 07:23:21.897240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.696 qpair failed and we were unable to recover it. 
00:27:17.696 [2024-11-20 07:23:21.897416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.696 [2024-11-20 07:23:21.897448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.696 qpair failed and we were unable to recover it. 00:27:17.696 [2024-11-20 07:23:21.897564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.696 [2024-11-20 07:23:21.897596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.696 qpair failed and we were unable to recover it. 00:27:17.696 [2024-11-20 07:23:21.897779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.696 [2024-11-20 07:23:21.897810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.696 qpair failed and we were unable to recover it. 00:27:17.696 [2024-11-20 07:23:21.897930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.696 [2024-11-20 07:23:21.897971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.696 qpair failed and we were unable to recover it. 00:27:17.696 [2024-11-20 07:23:21.898168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.696 [2024-11-20 07:23:21.898200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.696 qpair failed and we were unable to recover it. 
00:27:17.696 [2024-11-20 07:23:21.900089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.696 [2024-11-20 07:23:21.900146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.696 qpair failed and we were unable to recover it. 00:27:17.696 [2024-11-20 07:23:21.900447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.696 [2024-11-20 07:23:21.900481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.696 qpair failed and we were unable to recover it. 00:27:17.696 [2024-11-20 07:23:21.900663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.696 [2024-11-20 07:23:21.900696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.696 qpair failed and we were unable to recover it. 00:27:17.696 [2024-11-20 07:23:21.900873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.696 [2024-11-20 07:23:21.900905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.696 qpair failed and we were unable to recover it. 00:27:17.696 [2024-11-20 07:23:21.901112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.696 [2024-11-20 07:23:21.901144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.696 qpair failed and we were unable to recover it. 
00:27:17.696 [2024-11-20 07:23:21.901406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.696 [2024-11-20 07:23:21.901438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.696 qpair failed and we were unable to recover it. 00:27:17.696 [2024-11-20 07:23:21.901685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.696 [2024-11-20 07:23:21.901716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.696 qpair failed and we were unable to recover it. 00:27:17.696 [2024-11-20 07:23:21.901916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.696 [2024-11-20 07:23:21.901957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.696 qpair failed and we were unable to recover it. 00:27:17.696 [2024-11-20 07:23:21.902065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.696 [2024-11-20 07:23:21.902096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.696 qpair failed and we were unable to recover it. 00:27:17.696 [2024-11-20 07:23:21.902208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.696 [2024-11-20 07:23:21.902239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.696 qpair failed and we were unable to recover it. 
00:27:17.696 [2024-11-20 07:23:21.902432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.696 [2024-11-20 07:23:21.902464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.696 qpair failed and we were unable to recover it. 00:27:17.696 [2024-11-20 07:23:21.902647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.696 [2024-11-20 07:23:21.902678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.696 qpair failed and we were unable to recover it. 00:27:17.696 [2024-11-20 07:23:21.902880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.697 [2024-11-20 07:23:21.902911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.697 qpair failed and we were unable to recover it. 00:27:17.697 [2024-11-20 07:23:21.903094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.697 [2024-11-20 07:23:21.903127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.697 qpair failed and we were unable to recover it. 00:27:17.697 [2024-11-20 07:23:21.903319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.697 [2024-11-20 07:23:21.903350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.697 qpair failed and we were unable to recover it. 
00:27:17.697 [2024-11-20 07:23:21.903592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.697 [2024-11-20 07:23:21.903623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.697 qpair failed and we were unable to recover it. 00:27:17.697 [2024-11-20 07:23:21.903805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.697 [2024-11-20 07:23:21.903836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.697 qpair failed and we were unable to recover it. 00:27:17.697 [2024-11-20 07:23:21.903957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.697 [2024-11-20 07:23:21.903990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.697 qpair failed and we were unable to recover it. 00:27:17.697 [2024-11-20 07:23:21.904196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.697 [2024-11-20 07:23:21.904226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.697 qpair failed and we were unable to recover it. 00:27:17.697 [2024-11-20 07:23:21.904409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.697 [2024-11-20 07:23:21.904441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.697 qpair failed and we were unable to recover it. 
00:27:17.697 [2024-11-20 07:23:21.904614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.697 [2024-11-20 07:23:21.904652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.697 qpair failed and we were unable to recover it. 00:27:17.697 [2024-11-20 07:23:21.904845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.697 [2024-11-20 07:23:21.904876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.697 qpair failed and we were unable to recover it. 00:27:17.697 [2024-11-20 07:23:21.905047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.697 [2024-11-20 07:23:21.905081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.697 qpair failed and we were unable to recover it. 00:27:17.697 [2024-11-20 07:23:21.905261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.697 [2024-11-20 07:23:21.905293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.697 qpair failed and we were unable to recover it. 00:27:17.697 [2024-11-20 07:23:21.905494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.697 [2024-11-20 07:23:21.905525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.697 qpair failed and we were unable to recover it. 
00:27:17.697 [2024-11-20 07:23:21.905716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.697 [2024-11-20 07:23:21.905748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.697 qpair failed and we were unable to recover it. 00:27:17.697 [2024-11-20 07:23:21.905877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.697 [2024-11-20 07:23:21.905909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.697 qpair failed and we were unable to recover it. 00:27:17.697 [2024-11-20 07:23:21.906048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.697 [2024-11-20 07:23:21.906081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.697 qpair failed and we were unable to recover it. 00:27:17.697 [2024-11-20 07:23:21.906200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.697 [2024-11-20 07:23:21.906231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.697 qpair failed and we were unable to recover it. 00:27:17.697 [2024-11-20 07:23:21.906366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.697 [2024-11-20 07:23:21.906399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.697 qpair failed and we were unable to recover it. 
00:27:17.697 [2024-11-20 07:23:21.906504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.697 [2024-11-20 07:23:21.906536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:17.697 qpair failed and we were unable to recover it.
00:27:17.697 [2024-11-20 07:23:21.906730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.697 [2024-11-20 07:23:21.906761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:17.697 qpair failed and we were unable to recover it.
00:27:17.697 [2024-11-20 07:23:21.906889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.697 [2024-11-20 07:23:21.906921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:17.697 qpair failed and we were unable to recover it.
00:27:17.697 [2024-11-20 07:23:21.907040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.697 [2024-11-20 07:23:21.907071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:17.697 qpair failed and we were unable to recover it.
00:27:17.697 [2024-11-20 07:23:21.907337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.697 [2024-11-20 07:23:21.907369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:17.697 qpair failed and we were unable to recover it.
00:27:17.697 [2024-11-20 07:23:21.907497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.697 [2024-11-20 07:23:21.907528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:17.697 qpair failed and we were unable to recover it.
00:27:17.697 [2024-11-20 07:23:21.907636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.697 [2024-11-20 07:23:21.907667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:17.697 qpair failed and we were unable to recover it.
00:27:17.697 [2024-11-20 07:23:21.907849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.697 [2024-11-20 07:23:21.907881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:17.697 qpair failed and we were unable to recover it.
00:27:17.697 [2024-11-20 07:23:21.907989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.697 [2024-11-20 07:23:21.908022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:17.697 qpair failed and we were unable to recover it.
00:27:17.697 [2024-11-20 07:23:21.908131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.697 [2024-11-20 07:23:21.908162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:17.697 qpair failed and we were unable to recover it.
00:27:17.697 [2024-11-20 07:23:21.908345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.697 [2024-11-20 07:23:21.908376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:17.697 qpair failed and we were unable to recover it.
00:27:17.697 [2024-11-20 07:23:21.908642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.697 [2024-11-20 07:23:21.908672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:17.697 qpair failed and we were unable to recover it.
00:27:17.697 [2024-11-20 07:23:21.908912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.697 [2024-11-20 07:23:21.908944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:17.698 qpair failed and we were unable to recover it.
00:27:17.698 [2024-11-20 07:23:21.909137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.698 [2024-11-20 07:23:21.909169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:17.698 qpair failed and we were unable to recover it.
00:27:17.698 [2024-11-20 07:23:21.909304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.698 [2024-11-20 07:23:21.909336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:17.698 qpair failed and we were unable to recover it.
00:27:17.698 [2024-11-20 07:23:21.909451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.698 [2024-11-20 07:23:21.909483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:17.698 qpair failed and we were unable to recover it.
00:27:17.698 [2024-11-20 07:23:21.909612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.698 [2024-11-20 07:23:21.909643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:17.698 qpair failed and we were unable to recover it.
00:27:17.698 [2024-11-20 07:23:21.909812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.698 [2024-11-20 07:23:21.909882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:17.698 qpair failed and we were unable to recover it.
00:27:17.698 [2024-11-20 07:23:21.910041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.698 [2024-11-20 07:23:21.910077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:17.698 qpair failed and we were unable to recover it.
00:27:17.698 [2024-11-20 07:23:21.910352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.698 [2024-11-20 07:23:21.910386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:17.698 qpair failed and we were unable to recover it.
00:27:17.698 [2024-11-20 07:23:21.910567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.698 [2024-11-20 07:23:21.910599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:17.698 qpair failed and we were unable to recover it.
00:27:17.698 [2024-11-20 07:23:21.910846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.698 [2024-11-20 07:23:21.910877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:17.698 qpair failed and we were unable to recover it.
00:27:17.698 [2024-11-20 07:23:21.911003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.698 [2024-11-20 07:23:21.911037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:17.698 qpair failed and we were unable to recover it.
00:27:17.698 [2024-11-20 07:23:21.911210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.698 [2024-11-20 07:23:21.911241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:17.698 qpair failed and we were unable to recover it.
00:27:17.698 [2024-11-20 07:23:21.911367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.698 [2024-11-20 07:23:21.911399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:17.698 qpair failed and we were unable to recover it.
00:27:17.698 [2024-11-20 07:23:21.911576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.698 [2024-11-20 07:23:21.911609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:17.698 qpair failed and we were unable to recover it.
00:27:17.698 [2024-11-20 07:23:21.911817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.698 [2024-11-20 07:23:21.911848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:17.698 qpair failed and we were unable to recover it.
00:27:17.698 [2024-11-20 07:23:21.912038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.698 [2024-11-20 07:23:21.912070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:17.698 qpair failed and we were unable to recover it.
00:27:17.698 [2024-11-20 07:23:21.912201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.698 [2024-11-20 07:23:21.912232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:17.698 qpair failed and we were unable to recover it.
00:27:17.698 [2024-11-20 07:23:21.912407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.698 [2024-11-20 07:23:21.912439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:17.698 qpair failed and we were unable to recover it.
00:27:17.698 [2024-11-20 07:23:21.912555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.698 [2024-11-20 07:23:21.912586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:17.698 qpair failed and we were unable to recover it.
00:27:17.698 [2024-11-20 07:23:21.912769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.698 [2024-11-20 07:23:21.912800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:17.698 qpair failed and we were unable to recover it.
00:27:17.698 [2024-11-20 07:23:21.912983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.698 [2024-11-20 07:23:21.913016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:17.698 qpair failed and we were unable to recover it.
00:27:17.698 [2024-11-20 07:23:21.913120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.698 [2024-11-20 07:23:21.913151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:17.698 qpair failed and we were unable to recover it.
00:27:17.698 [2024-11-20 07:23:21.913426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.698 [2024-11-20 07:23:21.913457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:17.698 qpair failed and we were unable to recover it.
00:27:17.698 [2024-11-20 07:23:21.913642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.698 [2024-11-20 07:23:21.913674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:17.698 qpair failed and we were unable to recover it.
00:27:17.698 [2024-11-20 07:23:21.913793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.698 [2024-11-20 07:23:21.913825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:17.698 qpair failed and we were unable to recover it.
00:27:17.698 [2024-11-20 07:23:21.914016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.698 [2024-11-20 07:23:21.914049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:17.698 qpair failed and we were unable to recover it.
00:27:17.698 [2024-11-20 07:23:21.914160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.698 [2024-11-20 07:23:21.914192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:17.698 qpair failed and we were unable to recover it.
00:27:17.698 [2024-11-20 07:23:21.914380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.698 [2024-11-20 07:23:21.914411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:17.698 qpair failed and we were unable to recover it.
00:27:17.698 [2024-11-20 07:23:21.914545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.698 [2024-11-20 07:23:21.914577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:17.698 qpair failed and we were unable to recover it.
00:27:17.698 [2024-11-20 07:23:21.914697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.698 [2024-11-20 07:23:21.914728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:17.698 qpair failed and we were unable to recover it.
00:27:17.698 [2024-11-20 07:23:21.914922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.698 [2024-11-20 07:23:21.914966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:17.698 qpair failed and we were unable to recover it.
00:27:17.699 [2024-11-20 07:23:21.915225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.699 [2024-11-20 07:23:21.915256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:17.699 qpair failed and we were unable to recover it.
00:27:17.699 [2024-11-20 07:23:21.915371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.699 [2024-11-20 07:23:21.915406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:17.699 qpair failed and we were unable to recover it.
00:27:17.699 [2024-11-20 07:23:21.915596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.699 [2024-11-20 07:23:21.915628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:17.699 qpair failed and we were unable to recover it.
00:27:17.699 [2024-11-20 07:23:21.915815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.699 [2024-11-20 07:23:21.915846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:17.699 qpair failed and we were unable to recover it.
00:27:17.699 [2024-11-20 07:23:21.915994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.699 [2024-11-20 07:23:21.916030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:17.699 qpair failed and we were unable to recover it.
00:27:17.699 [2024-11-20 07:23:21.916217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.699 [2024-11-20 07:23:21.916248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:17.699 qpair failed and we were unable to recover it.
00:27:17.699 [2024-11-20 07:23:21.916370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.699 [2024-11-20 07:23:21.916403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:17.699 qpair failed and we were unable to recover it.
00:27:17.699 [2024-11-20 07:23:21.916590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.699 [2024-11-20 07:23:21.916623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:17.699 qpair failed and we were unable to recover it.
00:27:17.699 [2024-11-20 07:23:21.916741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.699 [2024-11-20 07:23:21.916772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:17.699 qpair failed and we were unable to recover it.
00:27:17.699 [2024-11-20 07:23:21.916942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.699 [2024-11-20 07:23:21.916982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:17.699 qpair failed and we were unable to recover it.
00:27:17.699 [2024-11-20 07:23:21.917103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.699 [2024-11-20 07:23:21.917136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:17.699 qpair failed and we were unable to recover it.
00:27:17.699 [2024-11-20 07:23:21.917315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.699 [2024-11-20 07:23:21.917347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:17.699 qpair failed and we were unable to recover it.
00:27:17.699 [2024-11-20 07:23:21.917464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.699 [2024-11-20 07:23:21.917496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:17.699 qpair failed and we were unable to recover it.
00:27:17.699 [2024-11-20 07:23:21.917681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.699 [2024-11-20 07:23:21.917714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:17.699 qpair failed and we were unable to recover it.
00:27:17.699 [2024-11-20 07:23:21.917829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.699 [2024-11-20 07:23:21.917866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:17.699 qpair failed and we were unable to recover it.
00:27:17.699 [2024-11-20 07:23:21.918043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.699 [2024-11-20 07:23:21.918077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:17.699 qpair failed and we were unable to recover it.
00:27:17.699 [2024-11-20 07:23:21.918203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.699 [2024-11-20 07:23:21.918236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:17.699 qpair failed and we were unable to recover it.
00:27:17.699 [2024-11-20 07:23:21.918457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.699 [2024-11-20 07:23:21.918488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:17.699 qpair failed and we were unable to recover it.
00:27:17.699 [2024-11-20 07:23:21.918608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.699 [2024-11-20 07:23:21.918640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:17.699 qpair failed and we were unable to recover it.
00:27:17.699 [2024-11-20 07:23:21.918819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.699 [2024-11-20 07:23:21.918850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:17.699 qpair failed and we were unable to recover it.
00:27:17.699 [2024-11-20 07:23:21.919027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.699 [2024-11-20 07:23:21.919060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:17.699 qpair failed and we were unable to recover it.
00:27:17.699 [2024-11-20 07:23:21.919260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.699 [2024-11-20 07:23:21.919291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:17.699 qpair failed and we were unable to recover it.
00:27:17.699 [2024-11-20 07:23:21.919416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.700 [2024-11-20 07:23:21.919447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:17.700 qpair failed and we were unable to recover it.
00:27:17.700 [2024-11-20 07:23:21.919563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.700 [2024-11-20 07:23:21.919595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:17.700 qpair failed and we were unable to recover it.
00:27:17.700 [2024-11-20 07:23:21.919724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.700 [2024-11-20 07:23:21.919756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:17.700 qpair failed and we were unable to recover it.
00:27:17.700 [2024-11-20 07:23:21.919957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.700 [2024-11-20 07:23:21.919988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:17.700 qpair failed and we were unable to recover it.
00:27:17.700 [2024-11-20 07:23:21.920117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.700 [2024-11-20 07:23:21.920149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:17.700 qpair failed and we were unable to recover it.
00:27:17.700 [2024-11-20 07:23:21.920264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.700 [2024-11-20 07:23:21.920296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:17.700 qpair failed and we were unable to recover it.
00:27:17.700 [2024-11-20 07:23:21.920420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.700 [2024-11-20 07:23:21.920452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:17.700 qpair failed and we were unable to recover it.
00:27:17.700 [2024-11-20 07:23:21.920640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.700 [2024-11-20 07:23:21.920671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:17.700 qpair failed and we were unable to recover it.
00:27:17.700 [2024-11-20 07:23:21.920777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.700 [2024-11-20 07:23:21.920809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:17.700 qpair failed and we were unable to recover it.
00:27:17.700 [2024-11-20 07:23:21.920984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.700 [2024-11-20 07:23:21.921016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:17.700 qpair failed and we were unable to recover it.
00:27:17.700 [2024-11-20 07:23:21.921186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.700 [2024-11-20 07:23:21.921217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:17.700 qpair failed and we were unable to recover it.
00:27:17.700 [2024-11-20 07:23:21.921320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.700 [2024-11-20 07:23:21.921352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:17.700 qpair failed and we were unable to recover it.
00:27:17.700 [2024-11-20 07:23:21.921525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.700 [2024-11-20 07:23:21.921556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:17.700 qpair failed and we were unable to recover it.
00:27:17.700 [2024-11-20 07:23:21.921734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.700 [2024-11-20 07:23:21.921764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:17.700 qpair failed and we were unable to recover it.
00:27:17.700 [2024-11-20 07:23:21.922053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.700 [2024-11-20 07:23:21.922087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:17.700 qpair failed and we were unable to recover it.
00:27:17.700 [2024-11-20 07:23:21.922306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.700 [2024-11-20 07:23:21.922337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:17.700 qpair failed and we were unable to recover it.
00:27:17.700 [2024-11-20 07:23:21.922436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.700 [2024-11-20 07:23:21.922468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:17.700 qpair failed and we were unable to recover it.
00:27:17.700 [2024-11-20 07:23:21.922661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.700 [2024-11-20 07:23:21.922693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:17.700 qpair failed and we were unable to recover it.
00:27:17.700 [2024-11-20 07:23:21.922874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.700 [2024-11-20 07:23:21.922904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:17.700 qpair failed and we were unable to recover it.
00:27:17.700 [2024-11-20 07:23:21.923087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.700 [2024-11-20 07:23:21.923126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:17.700 qpair failed and we were unable to recover it.
00:27:17.700 [2024-11-20 07:23:21.923234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.700 [2024-11-20 07:23:21.923266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:17.700 qpair failed and we were unable to recover it.
00:27:17.700 [2024-11-20 07:23:21.923386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.700 [2024-11-20 07:23:21.923418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:17.700 qpair failed and we were unable to recover it.
00:27:17.700 [2024-11-20 07:23:21.923537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.700 [2024-11-20 07:23:21.923569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:17.700 qpair failed and we were unable to recover it.
00:27:17.700 [2024-11-20 07:23:21.923759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.700 [2024-11-20 07:23:21.923790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:17.700 qpair failed and we were unable to recover it.
00:27:17.700 [2024-11-20 07:23:21.923896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.700 [2024-11-20 07:23:21.923928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:17.700 qpair failed and we were unable to recover it.
00:27:17.700 [2024-11-20 07:23:21.924094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.700 [2024-11-20 07:23:21.924126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:17.700 qpair failed and we were unable to recover it.
00:27:17.700 [2024-11-20 07:23:21.924316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.700 [2024-11-20 07:23:21.924348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:17.700 qpair failed and we were unable to recover it.
00:27:17.700 [2024-11-20 07:23:21.924561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.700 [2024-11-20 07:23:21.924592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:17.700 qpair failed and we were unable to recover it.
00:27:17.700 [2024-11-20 07:23:21.924736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.700 [2024-11-20 07:23:21.924768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:17.700 qpair failed and we were unable to recover it.
00:27:17.700 [2024-11-20 07:23:21.924957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.700 [2024-11-20 07:23:21.924990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:17.700 qpair failed and we were unable to recover it.
00:27:17.701 [2024-11-20 07:23:21.925108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.701 [2024-11-20 07:23:21.925140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:17.701 qpair failed and we were unable to recover it.
00:27:17.701 [2024-11-20 07:23:21.925257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.701 [2024-11-20 07:23:21.925288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:17.701 qpair failed and we were unable to recover it.
00:27:17.701 [2024-11-20 07:23:21.925414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.701 [2024-11-20 07:23:21.925445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:17.701 qpair failed and we were unable to recover it.
00:27:17.701 [2024-11-20 07:23:21.925635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.701 [2024-11-20 07:23:21.925667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:17.701 qpair failed and we were unable to recover it.
00:27:17.701 [2024-11-20 07:23:21.925785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.701 [2024-11-20 07:23:21.925816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:17.701 qpair failed and we were unable to recover it.
00:27:17.701 [2024-11-20 07:23:21.925927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.701 [2024-11-20 07:23:21.925967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:17.701 qpair failed and we were unable to recover it.
00:27:17.701 [2024-11-20 07:23:21.926140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.701 [2024-11-20 07:23:21.926171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:17.701 qpair failed and we were unable to recover it.
00:27:17.701 [2024-11-20 07:23:21.926286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.701 [2024-11-20 07:23:21.926317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:17.701 qpair failed and we were unable to recover it.
00:27:17.701 [2024-11-20 07:23:21.926445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.701 [2024-11-20 07:23:21.926476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:17.701 qpair failed and we were unable to recover it.
00:27:17.701 [2024-11-20 07:23:21.926598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.701 [2024-11-20 07:23:21.926629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:17.701 qpair failed and we were unable to recover it.
00:27:17.701 [2024-11-20 07:23:21.926747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.701 [2024-11-20 07:23:21.926778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:17.701 qpair failed and we were unable to recover it.
00:27:17.701 [2024-11-20 07:23:21.927021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.701 [2024-11-20 07:23:21.927055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:17.701 qpair failed and we were unable to recover it.
00:27:17.701 [2024-11-20 07:23:21.927176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.701 [2024-11-20 07:23:21.927207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:17.701 qpair failed and we were unable to recover it.
00:27:17.701 [2024-11-20 07:23:21.927332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.701 [2024-11-20 07:23:21.927363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:17.701 qpair failed and we were unable to recover it.
00:27:17.701 [2024-11-20 07:23:21.927481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.701 [2024-11-20 07:23:21.927512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:17.701 qpair failed and we were unable to recover it.
00:27:17.701 [2024-11-20 07:23:21.927700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.701 [2024-11-20 07:23:21.927731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:17.701 qpair failed and we were unable to recover it.
00:27:17.701 [2024-11-20 07:23:21.927840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.701 [2024-11-20 07:23:21.927871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:17.701 qpair failed and we were unable to recover it.
00:27:17.701 [2024-11-20 07:23:21.927992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.701 [2024-11-20 07:23:21.928026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:17.701 qpair failed and we were unable to recover it.
00:27:17.701 [2024-11-20 07:23:21.928193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.701 [2024-11-20 07:23:21.928224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:17.701 qpair failed and we were unable to recover it.
00:27:17.701 [2024-11-20 07:23:21.928415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.701 [2024-11-20 07:23:21.928446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:17.701 qpair failed and we were unable to recover it.
00:27:17.701 [2024-11-20 07:23:21.928635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.701 [2024-11-20 07:23:21.928666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.701 qpair failed and we were unable to recover it. 00:27:17.701 [2024-11-20 07:23:21.928778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.701 [2024-11-20 07:23:21.928809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.701 qpair failed and we were unable to recover it. 00:27:17.701 [2024-11-20 07:23:21.928990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.701 [2024-11-20 07:23:21.929023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.701 qpair failed and we were unable to recover it. 00:27:17.701 [2024-11-20 07:23:21.929132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.701 [2024-11-20 07:23:21.929163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.701 qpair failed and we were unable to recover it. 00:27:17.701 [2024-11-20 07:23:21.929336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.701 [2024-11-20 07:23:21.929368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.701 qpair failed and we were unable to recover it. 
00:27:17.701 [2024-11-20 07:23:21.929480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.701 [2024-11-20 07:23:21.929512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.701 qpair failed and we were unable to recover it. 00:27:17.701 [2024-11-20 07:23:21.929697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.701 [2024-11-20 07:23:21.929728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.701 qpair failed and we were unable to recover it. 00:27:17.701 [2024-11-20 07:23:21.929897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.701 [2024-11-20 07:23:21.929929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.701 qpair failed and we were unable to recover it. 00:27:17.701 [2024-11-20 07:23:21.930064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.701 [2024-11-20 07:23:21.930096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.701 qpair failed and we were unable to recover it. 00:27:17.701 [2024-11-20 07:23:21.930225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.702 [2024-11-20 07:23:21.930262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.702 qpair failed and we were unable to recover it. 
00:27:17.702 [2024-11-20 07:23:21.930432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.702 [2024-11-20 07:23:21.930463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.702 qpair failed and we were unable to recover it. 00:27:17.702 [2024-11-20 07:23:21.930578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.702 [2024-11-20 07:23:21.930609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.702 qpair failed and we were unable to recover it. 00:27:17.702 [2024-11-20 07:23:21.930730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.702 [2024-11-20 07:23:21.930762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.702 qpair failed and we were unable to recover it. 00:27:17.702 [2024-11-20 07:23:21.930888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.702 [2024-11-20 07:23:21.930919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.702 qpair failed and we were unable to recover it. 00:27:17.702 [2024-11-20 07:23:21.931128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.702 [2024-11-20 07:23:21.931159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.702 qpair failed and we were unable to recover it. 
00:27:17.702 [2024-11-20 07:23:21.931372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.702 [2024-11-20 07:23:21.931404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.702 qpair failed and we were unable to recover it. 00:27:17.702 [2024-11-20 07:23:21.931526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.702 [2024-11-20 07:23:21.931557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.702 qpair failed and we were unable to recover it. 00:27:17.702 [2024-11-20 07:23:21.931666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.702 [2024-11-20 07:23:21.931697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.702 qpair failed and we were unable to recover it. 00:27:17.702 [2024-11-20 07:23:21.931934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.702 [2024-11-20 07:23:21.931980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.702 qpair failed and we were unable to recover it. 00:27:17.702 [2024-11-20 07:23:21.932157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.702 [2024-11-20 07:23:21.932189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.702 qpair failed and we were unable to recover it. 
00:27:17.702 [2024-11-20 07:23:21.932367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.702 [2024-11-20 07:23:21.932400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.702 qpair failed and we were unable to recover it. 00:27:17.702 [2024-11-20 07:23:21.932588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.702 [2024-11-20 07:23:21.932619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.702 qpair failed and we were unable to recover it. 00:27:17.702 [2024-11-20 07:23:21.932735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.702 [2024-11-20 07:23:21.932767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.702 qpair failed and we were unable to recover it. 00:27:17.702 [2024-11-20 07:23:21.932882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.702 [2024-11-20 07:23:21.932913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.702 qpair failed and we were unable to recover it. 00:27:17.702 [2024-11-20 07:23:21.933117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.702 [2024-11-20 07:23:21.933150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.702 qpair failed and we were unable to recover it. 
00:27:17.702 [2024-11-20 07:23:21.933396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.702 [2024-11-20 07:23:21.933427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.702 qpair failed and we were unable to recover it. 00:27:17.702 [2024-11-20 07:23:21.933538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.702 [2024-11-20 07:23:21.933569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.702 qpair failed and we were unable to recover it. 00:27:17.702 [2024-11-20 07:23:21.933751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.702 [2024-11-20 07:23:21.933782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.702 qpair failed and we were unable to recover it. 00:27:17.702 [2024-11-20 07:23:21.933898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.702 [2024-11-20 07:23:21.933929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.702 qpair failed and we were unable to recover it. 00:27:17.702 [2024-11-20 07:23:21.934109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.702 [2024-11-20 07:23:21.934142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.702 qpair failed and we were unable to recover it. 
00:27:17.702 [2024-11-20 07:23:21.934270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.702 [2024-11-20 07:23:21.934300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.702 qpair failed and we were unable to recover it. 00:27:17.702 [2024-11-20 07:23:21.934499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.702 [2024-11-20 07:23:21.934530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.702 qpair failed and we were unable to recover it. 00:27:17.702 [2024-11-20 07:23:21.934642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.702 [2024-11-20 07:23:21.934674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.702 qpair failed and we were unable to recover it. 00:27:17.702 [2024-11-20 07:23:21.934797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.702 [2024-11-20 07:23:21.934828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.702 qpair failed and we were unable to recover it. 00:27:17.702 [2024-11-20 07:23:21.935025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.702 [2024-11-20 07:23:21.935058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.702 qpair failed and we were unable to recover it. 
00:27:17.702 [2024-11-20 07:23:21.935168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.702 [2024-11-20 07:23:21.935200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.702 qpair failed and we were unable to recover it. 00:27:17.702 [2024-11-20 07:23:21.935391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.702 [2024-11-20 07:23:21.935423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.702 qpair failed and we were unable to recover it. 00:27:17.702 [2024-11-20 07:23:21.935597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.702 [2024-11-20 07:23:21.935629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.702 qpair failed and we were unable to recover it. 00:27:17.702 [2024-11-20 07:23:21.935755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.702 [2024-11-20 07:23:21.935786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.702 qpair failed and we were unable to recover it. 00:27:17.702 [2024-11-20 07:23:21.935889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.702 [2024-11-20 07:23:21.935920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.702 qpair failed and we were unable to recover it. 
00:27:17.702 [2024-11-20 07:23:21.936146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.703 [2024-11-20 07:23:21.936179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.703 qpair failed and we were unable to recover it. 00:27:17.703 [2024-11-20 07:23:21.936351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.703 [2024-11-20 07:23:21.936382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.703 qpair failed and we were unable to recover it. 00:27:17.703 [2024-11-20 07:23:21.936644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.703 [2024-11-20 07:23:21.936675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.703 qpair failed and we were unable to recover it. 00:27:17.703 [2024-11-20 07:23:21.936781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.703 [2024-11-20 07:23:21.936812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.703 qpair failed and we were unable to recover it. 00:27:17.703 [2024-11-20 07:23:21.936924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.703 [2024-11-20 07:23:21.936961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.703 qpair failed and we were unable to recover it. 
00:27:17.703 [2024-11-20 07:23:21.937079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.703 [2024-11-20 07:23:21.937110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.703 qpair failed and we were unable to recover it. 00:27:17.703 [2024-11-20 07:23:21.937234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.703 [2024-11-20 07:23:21.937265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.703 qpair failed and we were unable to recover it. 00:27:17.703 [2024-11-20 07:23:21.937442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.703 [2024-11-20 07:23:21.937473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.703 qpair failed and we were unable to recover it. 00:27:17.703 [2024-11-20 07:23:21.937590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.703 [2024-11-20 07:23:21.937621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.703 qpair failed and we were unable to recover it. 00:27:17.703 [2024-11-20 07:23:21.937805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.703 [2024-11-20 07:23:21.937843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.703 qpair failed and we were unable to recover it. 
00:27:17.703 [2024-11-20 07:23:21.937973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.703 [2024-11-20 07:23:21.938006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.703 qpair failed and we were unable to recover it. 00:27:17.703 [2024-11-20 07:23:21.938208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.703 [2024-11-20 07:23:21.938240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.703 qpair failed and we were unable to recover it. 00:27:17.703 [2024-11-20 07:23:21.938429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.703 [2024-11-20 07:23:21.938461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.703 qpair failed and we were unable to recover it. 00:27:17.703 [2024-11-20 07:23:21.938700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.703 [2024-11-20 07:23:21.938731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.703 qpair failed and we were unable to recover it. 00:27:17.703 [2024-11-20 07:23:21.938867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.703 [2024-11-20 07:23:21.938897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.703 qpair failed and we were unable to recover it. 
00:27:17.703 [2024-11-20 07:23:21.939087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.703 [2024-11-20 07:23:21.939120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.703 qpair failed and we were unable to recover it. 00:27:17.703 [2024-11-20 07:23:21.939240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.703 [2024-11-20 07:23:21.939272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.703 qpair failed and we were unable to recover it. 00:27:17.703 [2024-11-20 07:23:21.939486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.703 [2024-11-20 07:23:21.939518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.703 qpair failed and we were unable to recover it. 00:27:17.703 [2024-11-20 07:23:21.939703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.703 [2024-11-20 07:23:21.939736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.703 qpair failed and we were unable to recover it. 00:27:17.703 [2024-11-20 07:23:21.939855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.703 [2024-11-20 07:23:21.939886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.703 qpair failed and we were unable to recover it. 
00:27:17.703 [2024-11-20 07:23:21.940066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.703 [2024-11-20 07:23:21.940099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.703 qpair failed and we were unable to recover it. 00:27:17.703 [2024-11-20 07:23:21.940224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.703 [2024-11-20 07:23:21.940255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.703 qpair failed and we were unable to recover it. 00:27:17.703 [2024-11-20 07:23:21.940432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.703 [2024-11-20 07:23:21.940464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.703 qpair failed and we were unable to recover it. 00:27:17.703 [2024-11-20 07:23:21.940596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.703 [2024-11-20 07:23:21.940627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.703 qpair failed and we were unable to recover it. 00:27:17.703 [2024-11-20 07:23:21.940800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.703 [2024-11-20 07:23:21.940832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.703 qpair failed and we were unable to recover it. 
00:27:17.703 [2024-11-20 07:23:21.940940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.703 [2024-11-20 07:23:21.940986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.703 qpair failed and we were unable to recover it. 00:27:17.703 [2024-11-20 07:23:21.941189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.703 [2024-11-20 07:23:21.941219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.703 qpair failed and we were unable to recover it. 00:27:17.703 [2024-11-20 07:23:21.941331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.703 [2024-11-20 07:23:21.941363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.703 qpair failed and we were unable to recover it. 00:27:17.703 [2024-11-20 07:23:21.941478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.703 [2024-11-20 07:23:21.941510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.703 qpair failed and we were unable to recover it. 00:27:17.703 [2024-11-20 07:23:21.941635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.703 [2024-11-20 07:23:21.941666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.703 qpair failed and we were unable to recover it. 
00:27:17.703 [2024-11-20 07:23:21.941849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.703 [2024-11-20 07:23:21.941880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.703 qpair failed and we were unable to recover it. 00:27:17.703 [2024-11-20 07:23:21.942051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.703 [2024-11-20 07:23:21.942084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.703 qpair failed and we were unable to recover it. 00:27:17.704 [2024-11-20 07:23:21.942300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.704 [2024-11-20 07:23:21.942331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.704 qpair failed and we were unable to recover it. 00:27:17.704 [2024-11-20 07:23:21.942541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.704 [2024-11-20 07:23:21.942572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.704 qpair failed and we were unable to recover it. 00:27:17.704 [2024-11-20 07:23:21.942678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.704 [2024-11-20 07:23:21.942709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.704 qpair failed and we were unable to recover it. 
00:27:17.704 [2024-11-20 07:23:21.942886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.704 [2024-11-20 07:23:21.942920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.704 qpair failed and we were unable to recover it. 00:27:17.704 [2024-11-20 07:23:21.943128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.704 [2024-11-20 07:23:21.943161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.704 qpair failed and we were unable to recover it. 00:27:17.704 [2024-11-20 07:23:21.943265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.704 [2024-11-20 07:23:21.943297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.704 qpair failed and we were unable to recover it. 00:27:17.704 [2024-11-20 07:23:21.943400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.704 [2024-11-20 07:23:21.943430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.704 qpair failed and we were unable to recover it. 00:27:17.704 [2024-11-20 07:23:21.943531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.704 [2024-11-20 07:23:21.943564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.704 qpair failed and we were unable to recover it. 
00:27:17.704 [2024-11-20 07:23:21.943753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.704 [2024-11-20 07:23:21.943784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.704 qpair failed and we were unable to recover it. 00:27:17.704 [2024-11-20 07:23:21.943911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.704 [2024-11-20 07:23:21.943942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.704 qpair failed and we were unable to recover it. 00:27:17.704 [2024-11-20 07:23:21.944092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.704 [2024-11-20 07:23:21.944124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.704 qpair failed and we were unable to recover it. 00:27:17.704 [2024-11-20 07:23:21.944365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.704 [2024-11-20 07:23:21.944395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.704 qpair failed and we were unable to recover it. 00:27:17.704 [2024-11-20 07:23:21.944518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.704 [2024-11-20 07:23:21.944549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.704 qpair failed and we were unable to recover it. 
00:27:17.704 [2024-11-20 07:23:21.944675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.704 [2024-11-20 07:23:21.944705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.704 qpair failed and we were unable to recover it. 00:27:17.704 [2024-11-20 07:23:21.944829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.704 [2024-11-20 07:23:21.944860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.704 qpair failed and we were unable to recover it. 00:27:17.704 [2024-11-20 07:23:21.945050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.704 [2024-11-20 07:23:21.945083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.704 qpair failed and we were unable to recover it. 00:27:17.704 [2024-11-20 07:23:21.945270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.704 [2024-11-20 07:23:21.945301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.704 qpair failed and we were unable to recover it. 00:27:17.704 [2024-11-20 07:23:21.945485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.704 [2024-11-20 07:23:21.945521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.704 qpair failed and we were unable to recover it. 
00:27:17.704 [2024-11-20 07:23:21.945736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.704 [2024-11-20 07:23:21.945767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.704 qpair failed and we were unable to recover it. 00:27:17.704 [2024-11-20 07:23:21.945894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.704 [2024-11-20 07:23:21.945924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.704 qpair failed and we were unable to recover it. 00:27:17.704 [2024-11-20 07:23:21.946113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.704 [2024-11-20 07:23:21.946144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.704 qpair failed and we were unable to recover it. 00:27:17.704 [2024-11-20 07:23:21.946276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.704 [2024-11-20 07:23:21.946307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.704 qpair failed and we were unable to recover it. 00:27:17.704 [2024-11-20 07:23:21.946425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.704 [2024-11-20 07:23:21.946456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.704 qpair failed and we were unable to recover it. 
00:27:17.704 [2024-11-20 07:23:21.946579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.704 [2024-11-20 07:23:21.946611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.704 qpair failed and we were unable to recover it. 00:27:17.704 [2024-11-20 07:23:21.946811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.704 [2024-11-20 07:23:21.946841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.704 qpair failed and we were unable to recover it. 00:27:17.704 [2024-11-20 07:23:21.946971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.704 [2024-11-20 07:23:21.947005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.704 qpair failed and we were unable to recover it. 00:27:17.704 [2024-11-20 07:23:21.947135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.704 [2024-11-20 07:23:21.947166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.704 qpair failed and we were unable to recover it. 00:27:17.704 [2024-11-20 07:23:21.947349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.704 [2024-11-20 07:23:21.947380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.704 qpair failed and we were unable to recover it. 
00:27:17.704 [2024-11-20 07:23:21.947553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.704 [2024-11-20 07:23:21.947583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.704 qpair failed and we were unable to recover it. 00:27:17.704 [2024-11-20 07:23:21.947688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.704 [2024-11-20 07:23:21.947719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.704 qpair failed and we were unable to recover it. 00:27:17.704 [2024-11-20 07:23:21.947841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.704 [2024-11-20 07:23:21.947873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.704 qpair failed and we were unable to recover it. 00:27:17.704 [2024-11-20 07:23:21.948007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.705 [2024-11-20 07:23:21.948040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.705 qpair failed and we were unable to recover it. 00:27:17.705 [2024-11-20 07:23:21.948247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.705 [2024-11-20 07:23:21.948279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.705 qpair failed and we were unable to recover it. 
00:27:17.705 [2024-11-20 07:23:21.948459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.705 [2024-11-20 07:23:21.948491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.705 qpair failed and we were unable to recover it. 00:27:17.705 [2024-11-20 07:23:21.948672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.705 [2024-11-20 07:23:21.948703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.705 qpair failed and we were unable to recover it. 00:27:17.705 [2024-11-20 07:23:21.948889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.705 [2024-11-20 07:23:21.948920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.705 qpair failed and we were unable to recover it. 00:27:17.705 [2024-11-20 07:23:21.949036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.705 [2024-11-20 07:23:21.949066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.705 qpair failed and we were unable to recover it. 00:27:17.705 [2024-11-20 07:23:21.949323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.705 [2024-11-20 07:23:21.949355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.705 qpair failed and we were unable to recover it. 
00:27:17.705 [2024-11-20 07:23:21.949472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.705 [2024-11-20 07:23:21.949503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.705 qpair failed and we were unable to recover it. 00:27:17.705 [2024-11-20 07:23:21.949611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.705 [2024-11-20 07:23:21.949640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.705 qpair failed and we were unable to recover it. 00:27:17.705 [2024-11-20 07:23:21.949761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.705 [2024-11-20 07:23:21.949793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.705 qpair failed and we were unable to recover it. 00:27:17.705 [2024-11-20 07:23:21.949993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.705 [2024-11-20 07:23:21.950026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.705 qpair failed and we were unable to recover it. 00:27:17.705 [2024-11-20 07:23:21.950203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.705 [2024-11-20 07:23:21.950235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.705 qpair failed and we were unable to recover it. 
00:27:17.705 [2024-11-20 07:23:21.950360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.705 [2024-11-20 07:23:21.950391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.705 qpair failed and we were unable to recover it. 00:27:17.705 [2024-11-20 07:23:21.950572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.705 [2024-11-20 07:23:21.950604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.705 qpair failed and we were unable to recover it. 00:27:17.705 [2024-11-20 07:23:21.950723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.705 [2024-11-20 07:23:21.950753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.705 qpair failed and we were unable to recover it. 00:27:17.705 [2024-11-20 07:23:21.950960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.705 [2024-11-20 07:23:21.950993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.705 qpair failed and we were unable to recover it. 00:27:17.705 [2024-11-20 07:23:21.951164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.705 [2024-11-20 07:23:21.951195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.705 qpair failed and we were unable to recover it. 
00:27:17.705 [2024-11-20 07:23:21.951309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.705 [2024-11-20 07:23:21.951340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.705 qpair failed and we were unable to recover it. 00:27:17.705 [2024-11-20 07:23:21.951579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.705 [2024-11-20 07:23:21.951610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.705 qpair failed and we were unable to recover it. 00:27:17.705 [2024-11-20 07:23:21.951783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.705 [2024-11-20 07:23:21.951814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.705 qpair failed and we were unable to recover it. 00:27:17.705 [2024-11-20 07:23:21.951942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.705 [2024-11-20 07:23:21.951985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.705 qpair failed and we were unable to recover it. 00:27:17.705 [2024-11-20 07:23:21.952108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.705 [2024-11-20 07:23:21.952140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.705 qpair failed and we were unable to recover it. 
00:27:17.705 [2024-11-20 07:23:21.952313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.705 [2024-11-20 07:23:21.952344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.705 qpair failed and we were unable to recover it. 00:27:17.705 [2024-11-20 07:23:21.952530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.705 [2024-11-20 07:23:21.952561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.705 qpair failed and we were unable to recover it. 00:27:17.705 [2024-11-20 07:23:21.952824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.705 [2024-11-20 07:23:21.952855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.705 qpair failed and we were unable to recover it. 00:27:17.705 [2024-11-20 07:23:21.952982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.705 [2024-11-20 07:23:21.953016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.705 qpair failed and we were unable to recover it. 00:27:17.706 [2024-11-20 07:23:21.953148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.706 [2024-11-20 07:23:21.953187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.706 qpair failed and we were unable to recover it. 
00:27:17.706 [2024-11-20 07:23:21.953311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.706 [2024-11-20 07:23:21.953342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.706 qpair failed and we were unable to recover it. 00:27:17.706 [2024-11-20 07:23:21.953519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.706 [2024-11-20 07:23:21.953550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.706 qpair failed and we were unable to recover it. 00:27:17.706 [2024-11-20 07:23:21.953680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.706 [2024-11-20 07:23:21.953711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.706 qpair failed and we were unable to recover it. 00:27:17.706 [2024-11-20 07:23:21.953903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.706 [2024-11-20 07:23:21.953935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.706 qpair failed and we were unable to recover it. 00:27:17.706 [2024-11-20 07:23:21.954056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.706 [2024-11-20 07:23:21.954088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.706 qpair failed and we were unable to recover it. 
00:27:17.706 [2024-11-20 07:23:21.954332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.706 [2024-11-20 07:23:21.954363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.706 qpair failed and we were unable to recover it. 00:27:17.706 [2024-11-20 07:23:21.954485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.706 [2024-11-20 07:23:21.954517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.706 qpair failed and we were unable to recover it. 00:27:17.706 [2024-11-20 07:23:21.954710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.706 [2024-11-20 07:23:21.954741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.706 qpair failed and we were unable to recover it. 00:27:17.706 [2024-11-20 07:23:21.954844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.706 [2024-11-20 07:23:21.954875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.706 qpair failed and we were unable to recover it. 00:27:17.706 [2024-11-20 07:23:21.954992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.706 [2024-11-20 07:23:21.955026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.706 qpair failed and we were unable to recover it. 
00:27:17.706 [2024-11-20 07:23:21.955143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.706 [2024-11-20 07:23:21.955175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.706 qpair failed and we were unable to recover it. 00:27:17.706 [2024-11-20 07:23:21.955301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.706 [2024-11-20 07:23:21.955332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.706 qpair failed and we were unable to recover it. 00:27:17.706 [2024-11-20 07:23:21.955519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.706 [2024-11-20 07:23:21.955550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.706 qpair failed and we were unable to recover it. 00:27:17.706 [2024-11-20 07:23:21.955737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.706 [2024-11-20 07:23:21.955768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.706 qpair failed and we were unable to recover it. 00:27:17.706 [2024-11-20 07:23:21.956014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.706 [2024-11-20 07:23:21.956047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.706 qpair failed and we were unable to recover it. 
00:27:17.706 [2024-11-20 07:23:21.956181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.706 [2024-11-20 07:23:21.956211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.706 qpair failed and we were unable to recover it. 00:27:17.706 [2024-11-20 07:23:21.956337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.706 [2024-11-20 07:23:21.956368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.706 qpair failed and we were unable to recover it. 00:27:17.706 [2024-11-20 07:23:21.956547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.706 [2024-11-20 07:23:21.956578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.706 qpair failed and we were unable to recover it. 00:27:17.706 [2024-11-20 07:23:21.956688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.706 [2024-11-20 07:23:21.956719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.706 qpair failed and we were unable to recover it. 00:27:17.706 [2024-11-20 07:23:21.956845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.706 [2024-11-20 07:23:21.956876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.706 qpair failed and we were unable to recover it. 
00:27:17.706 [2024-11-20 07:23:21.957117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.706 [2024-11-20 07:23:21.957152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.706 qpair failed and we were unable to recover it. 00:27:17.706 [2024-11-20 07:23:21.957327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.706 [2024-11-20 07:23:21.957358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.706 qpair failed and we were unable to recover it. 00:27:17.706 [2024-11-20 07:23:21.957547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.706 [2024-11-20 07:23:21.957578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.706 qpair failed and we were unable to recover it. 00:27:17.706 [2024-11-20 07:23:21.957746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.706 [2024-11-20 07:23:21.957778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.706 qpair failed and we were unable to recover it. 00:27:17.706 [2024-11-20 07:23:21.957893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.706 [2024-11-20 07:23:21.957925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.706 qpair failed and we were unable to recover it. 
00:27:17.706 [2024-11-20 07:23:21.958144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.706 [2024-11-20 07:23:21.958193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.706 qpair failed and we were unable to recover it. 00:27:17.706 [2024-11-20 07:23:21.958325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.706 [2024-11-20 07:23:21.958357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.706 qpair failed and we were unable to recover it. 00:27:17.706 [2024-11-20 07:23:21.958545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.706 [2024-11-20 07:23:21.958577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.706 qpair failed and we were unable to recover it. 00:27:17.706 [2024-11-20 07:23:21.958697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.706 [2024-11-20 07:23:21.958736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.706 qpair failed and we were unable to recover it. 00:27:17.706 [2024-11-20 07:23:21.958847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.706 [2024-11-20 07:23:21.958879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.706 qpair failed and we were unable to recover it. 
00:27:17.706 [2024-11-20 07:23:21.959124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.706 [2024-11-20 07:23:21.959173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.707 qpair failed and we were unable to recover it. 00:27:17.707 [2024-11-20 07:23:21.959393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.707 [2024-11-20 07:23:21.959435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.707 qpair failed and we were unable to recover it. 00:27:17.707 [2024-11-20 07:23:21.959657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.707 [2024-11-20 07:23:21.959690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.707 qpair failed and we were unable to recover it. 00:27:17.707 [2024-11-20 07:23:21.959896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.707 [2024-11-20 07:23:21.959928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.707 qpair failed and we were unable to recover it. 00:27:17.707 [2024-11-20 07:23:21.960071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.707 [2024-11-20 07:23:21.960103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.707 qpair failed and we were unable to recover it. 
00:27:17.707 [2024-11-20 07:23:21.960242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.707 [2024-11-20 07:23:21.960273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.707 qpair failed and we were unable to recover it. 00:27:17.707 [2024-11-20 07:23:21.960454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.707 [2024-11-20 07:23:21.960487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.707 qpair failed and we were unable to recover it. 00:27:17.707 [2024-11-20 07:23:21.960665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.707 [2024-11-20 07:23:21.960697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.707 qpair failed and we were unable to recover it. 00:27:17.707 [2024-11-20 07:23:21.960814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.707 [2024-11-20 07:23:21.960846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.707 qpair failed and we were unable to recover it. 00:27:17.707 [2024-11-20 07:23:21.961030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.707 [2024-11-20 07:23:21.961072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.707 qpair failed and we were unable to recover it. 
00:27:17.707 [2024-11-20 07:23:21.961201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.707 [2024-11-20 07:23:21.961232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.707 qpair failed and we were unable to recover it. 00:27:17.707 [2024-11-20 07:23:21.961354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.707 [2024-11-20 07:23:21.961386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.707 qpair failed and we were unable to recover it. 00:27:17.707 [2024-11-20 07:23:21.961510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.707 [2024-11-20 07:23:21.961542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.707 qpair failed and we were unable to recover it. 00:27:17.707 [2024-11-20 07:23:21.961681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.707 [2024-11-20 07:23:21.961712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.707 qpair failed and we were unable to recover it. 00:27:17.707 [2024-11-20 07:23:21.961834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.707 [2024-11-20 07:23:21.961865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.707 qpair failed and we were unable to recover it. 
00:27:17.711 [2024-11-20 07:23:21.981296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.711 [2024-11-20 07:23:21.981330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:17.711 qpair failed and we were unable to recover it.
00:27:17.711 [2024-11-20 07:23:21.981520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.711 [2024-11-20 07:23:21.981597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:17.711 qpair failed and we were unable to recover it.
00:27:17.711 [2024-11-20 07:23:21.981793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.711 [2024-11-20 07:23:21.981828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:17.711 qpair failed and we were unable to recover it.
00:27:17.711 [2024-11-20 07:23:21.981941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.711 [2024-11-20 07:23:21.981988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:17.711 qpair failed and we were unable to recover it.
00:27:17.711 [2024-11-20 07:23:21.982114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.711 [2024-11-20 07:23:21.982147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:17.711 qpair failed and we were unable to recover it.
00:27:17.713 [2024-11-20 07:23:21.995563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.713 [2024-11-20 07:23:21.995594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:17.713 qpair failed and we were unable to recover it.
00:27:17.713 [2024-11-20 07:23:21.995773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.713 [2024-11-20 07:23:21.995849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420
00:27:17.713 qpair failed and we were unable to recover it.
00:27:17.713 [2024-11-20 07:23:21.995998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.713 [2024-11-20 07:23:21.996037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420
00:27:17.713 qpair failed and we were unable to recover it.
00:27:17.713 [2024-11-20 07:23:21.996156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.713 [2024-11-20 07:23:21.996190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420
00:27:17.713 qpair failed and we were unable to recover it.
00:27:17.714 [2024-11-20 07:23:21.996395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.714 [2024-11-20 07:23:21.996428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420
00:27:17.714 qpair failed and we were unable to recover it.
00:27:17.714 [2024-11-20 07:23:22.000792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.714 [2024-11-20 07:23:22.000824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.714 qpair failed and we were unable to recover it. 00:27:17.714 [2024-11-20 07:23:22.000941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.714 [2024-11-20 07:23:22.000984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.714 qpair failed and we were unable to recover it. 00:27:17.714 [2024-11-20 07:23:22.001259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.714 [2024-11-20 07:23:22.001292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.714 qpair failed and we were unable to recover it. 00:27:17.714 [2024-11-20 07:23:22.001414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.714 [2024-11-20 07:23:22.001446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.714 qpair failed and we were unable to recover it. 00:27:17.714 [2024-11-20 07:23:22.001621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.714 [2024-11-20 07:23:22.001653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.714 qpair failed and we were unable to recover it. 
00:27:17.714 [2024-11-20 07:23:22.001840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.714 [2024-11-20 07:23:22.001873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.714 qpair failed and we were unable to recover it. 00:27:17.714 [2024-11-20 07:23:22.001990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.714 [2024-11-20 07:23:22.002023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.714 qpair failed and we were unable to recover it. 00:27:17.714 [2024-11-20 07:23:22.002133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.715 [2024-11-20 07:23:22.002164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.715 qpair failed and we were unable to recover it. 00:27:17.715 [2024-11-20 07:23:22.002360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.715 [2024-11-20 07:23:22.002392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.715 qpair failed and we were unable to recover it. 00:27:17.715 [2024-11-20 07:23:22.002564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.715 [2024-11-20 07:23:22.002595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.715 qpair failed and we were unable to recover it. 
00:27:17.715 [2024-11-20 07:23:22.002785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.715 [2024-11-20 07:23:22.002817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.715 qpair failed and we were unable to recover it. 00:27:17.715 [2024-11-20 07:23:22.003000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.715 [2024-11-20 07:23:22.003034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.715 qpair failed and we were unable to recover it. 00:27:17.715 [2024-11-20 07:23:22.003221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.715 [2024-11-20 07:23:22.003252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.715 qpair failed and we were unable to recover it. 00:27:17.715 [2024-11-20 07:23:22.003408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.715 [2024-11-20 07:23:22.003440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.715 qpair failed and we were unable to recover it. 00:27:17.715 [2024-11-20 07:23:22.003615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.715 [2024-11-20 07:23:22.003648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.715 qpair failed and we were unable to recover it. 
00:27:17.715 [2024-11-20 07:23:22.003754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.715 [2024-11-20 07:23:22.003786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.715 qpair failed and we were unable to recover it. 00:27:17.715 [2024-11-20 07:23:22.004044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.715 [2024-11-20 07:23:22.004116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.715 qpair failed and we were unable to recover it. 00:27:17.715 [2024-11-20 07:23:22.004275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.715 [2024-11-20 07:23:22.004309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.715 qpair failed and we were unable to recover it. 00:27:17.715 [2024-11-20 07:23:22.004491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.715 [2024-11-20 07:23:22.004523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.715 qpair failed and we were unable to recover it. 00:27:17.715 [2024-11-20 07:23:22.004644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.715 [2024-11-20 07:23:22.004675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.715 qpair failed and we were unable to recover it. 
00:27:17.715 [2024-11-20 07:23:22.004863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.715 [2024-11-20 07:23:22.004894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.715 qpair failed and we were unable to recover it. 00:27:17.715 [2024-11-20 07:23:22.005082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.715 [2024-11-20 07:23:22.005115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.715 qpair failed and we were unable to recover it. 00:27:17.715 [2024-11-20 07:23:22.005296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.715 [2024-11-20 07:23:22.005327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.715 qpair failed and we were unable to recover it. 00:27:17.715 [2024-11-20 07:23:22.005532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.715 [2024-11-20 07:23:22.005564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.715 qpair failed and we were unable to recover it. 00:27:17.715 [2024-11-20 07:23:22.005674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.715 [2024-11-20 07:23:22.005705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.715 qpair failed and we were unable to recover it. 
00:27:17.715 [2024-11-20 07:23:22.005890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.715 [2024-11-20 07:23:22.005921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.715 qpair failed and we were unable to recover it. 00:27:17.715 [2024-11-20 07:23:22.006055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.715 [2024-11-20 07:23:22.006086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.715 qpair failed and we were unable to recover it. 00:27:17.715 [2024-11-20 07:23:22.006271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.715 [2024-11-20 07:23:22.006301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.715 qpair failed and we were unable to recover it. 00:27:17.715 [2024-11-20 07:23:22.006470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.715 [2024-11-20 07:23:22.006500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.715 qpair failed and we were unable to recover it. 00:27:17.715 [2024-11-20 07:23:22.006636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.715 [2024-11-20 07:23:22.006682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.715 qpair failed and we were unable to recover it. 
00:27:17.715 [2024-11-20 07:23:22.006900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.715 [2024-11-20 07:23:22.006931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.715 qpair failed and we were unable to recover it. 00:27:17.715 [2024-11-20 07:23:22.007057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.716 [2024-11-20 07:23:22.007089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.716 qpair failed and we were unable to recover it. 00:27:17.716 [2024-11-20 07:23:22.007208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.716 [2024-11-20 07:23:22.007238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.716 qpair failed and we were unable to recover it. 00:27:17.716 [2024-11-20 07:23:22.007427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.716 [2024-11-20 07:23:22.007458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.716 qpair failed and we were unable to recover it. 00:27:17.716 [2024-11-20 07:23:22.007581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.716 [2024-11-20 07:23:22.007612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.716 qpair failed and we were unable to recover it. 
00:27:17.716 [2024-11-20 07:23:22.007743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.716 [2024-11-20 07:23:22.007774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.716 qpair failed and we were unable to recover it. 00:27:17.716 [2024-11-20 07:23:22.007899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.716 [2024-11-20 07:23:22.007930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.716 qpair failed and we were unable to recover it. 00:27:17.716 [2024-11-20 07:23:22.008113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.716 [2024-11-20 07:23:22.008145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.716 qpair failed and we were unable to recover it. 00:27:17.716 [2024-11-20 07:23:22.008249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.716 [2024-11-20 07:23:22.008282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.716 qpair failed and we were unable to recover it. 00:27:17.716 [2024-11-20 07:23:22.008409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.716 [2024-11-20 07:23:22.008439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.716 qpair failed and we were unable to recover it. 
00:27:17.716 [2024-11-20 07:23:22.008571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.716 [2024-11-20 07:23:22.008602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.716 qpair failed and we were unable to recover it. 00:27:17.716 [2024-11-20 07:23:22.008713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.716 [2024-11-20 07:23:22.008745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.716 qpair failed and we were unable to recover it. 00:27:17.716 [2024-11-20 07:23:22.008863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.716 [2024-11-20 07:23:22.008893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.716 qpair failed and we were unable to recover it. 00:27:17.716 [2024-11-20 07:23:22.009090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.716 [2024-11-20 07:23:22.009124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.716 qpair failed and we were unable to recover it. 00:27:17.716 [2024-11-20 07:23:22.009364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.716 [2024-11-20 07:23:22.009396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.716 qpair failed and we were unable to recover it. 
00:27:17.716 [2024-11-20 07:23:22.009531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.716 [2024-11-20 07:23:22.009562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.716 qpair failed and we were unable to recover it. 00:27:17.716 [2024-11-20 07:23:22.009747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.716 [2024-11-20 07:23:22.009779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.716 qpair failed and we were unable to recover it. 00:27:17.716 [2024-11-20 07:23:22.009904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.716 [2024-11-20 07:23:22.009936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.716 qpair failed and we were unable to recover it. 00:27:17.716 [2024-11-20 07:23:22.010117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.716 [2024-11-20 07:23:22.010149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.716 qpair failed and we were unable to recover it. 00:27:17.716 [2024-11-20 07:23:22.010270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.716 [2024-11-20 07:23:22.010301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.716 qpair failed and we were unable to recover it. 
00:27:17.716 [2024-11-20 07:23:22.010480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.716 [2024-11-20 07:23:22.010511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.716 qpair failed and we were unable to recover it. 00:27:17.716 [2024-11-20 07:23:22.010699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.716 [2024-11-20 07:23:22.010730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.716 qpair failed and we were unable to recover it. 00:27:17.716 [2024-11-20 07:23:22.010844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.716 [2024-11-20 07:23:22.010875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.716 qpair failed and we were unable to recover it. 00:27:17.716 [2024-11-20 07:23:22.011069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.716 [2024-11-20 07:23:22.011101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.716 qpair failed and we were unable to recover it. 00:27:17.716 [2024-11-20 07:23:22.011214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.716 [2024-11-20 07:23:22.011245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.716 qpair failed and we were unable to recover it. 
00:27:17.716 [2024-11-20 07:23:22.011365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.716 [2024-11-20 07:23:22.011396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.716 qpair failed and we were unable to recover it. 00:27:17.716 [2024-11-20 07:23:22.011581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.716 [2024-11-20 07:23:22.011654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.716 qpair failed and we were unable to recover it. 00:27:17.716 [2024-11-20 07:23:22.011860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.716 [2024-11-20 07:23:22.011897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.716 qpair failed and we were unable to recover it. 00:27:17.716 [2024-11-20 07:23:22.012046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.716 [2024-11-20 07:23:22.012080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.716 qpair failed and we were unable to recover it. 00:27:17.716 [2024-11-20 07:23:22.012287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.716 [2024-11-20 07:23:22.012319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.716 qpair failed and we were unable to recover it. 
00:27:17.716 [2024-11-20 07:23:22.012426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.716 [2024-11-20 07:23:22.012457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.716 qpair failed and we were unable to recover it. 00:27:17.716 [2024-11-20 07:23:22.012725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.717 [2024-11-20 07:23:22.012756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.717 qpair failed and we were unable to recover it. 00:27:17.717 [2024-11-20 07:23:22.012886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.717 [2024-11-20 07:23:22.012917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.717 qpair failed and we were unable to recover it. 00:27:17.717 [2024-11-20 07:23:22.013113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.717 [2024-11-20 07:23:22.013146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.717 qpair failed and we were unable to recover it. 00:27:17.717 [2024-11-20 07:23:22.013322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.717 [2024-11-20 07:23:22.013353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.717 qpair failed and we were unable to recover it. 
00:27:17.717 [2024-11-20 07:23:22.013538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.717 [2024-11-20 07:23:22.013569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.717 qpair failed and we were unable to recover it. 00:27:17.717 [2024-11-20 07:23:22.013693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.717 [2024-11-20 07:23:22.013725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.717 qpair failed and we were unable to recover it. 00:27:17.717 [2024-11-20 07:23:22.013908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.717 [2024-11-20 07:23:22.013939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.717 qpair failed and we were unable to recover it. 00:27:17.717 [2024-11-20 07:23:22.014165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.717 [2024-11-20 07:23:22.014197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.717 qpair failed and we were unable to recover it. 00:27:17.717 [2024-11-20 07:23:22.014301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.717 [2024-11-20 07:23:22.014331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.717 qpair failed and we were unable to recover it. 
00:27:17.717 [2024-11-20 07:23:22.014478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.717 [2024-11-20 07:23:22.014510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.717 qpair failed and we were unable to recover it. 00:27:17.717 [2024-11-20 07:23:22.014680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.717 [2024-11-20 07:23:22.014712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.717 qpair failed and we were unable to recover it. 00:27:17.717 [2024-11-20 07:23:22.014904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.717 [2024-11-20 07:23:22.014935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.717 qpair failed and we were unable to recover it. 00:27:17.717 [2024-11-20 07:23:22.015134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.717 [2024-11-20 07:23:22.015166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.717 qpair failed and we were unable to recover it. 00:27:17.717 [2024-11-20 07:23:22.015346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.717 [2024-11-20 07:23:22.015378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.717 qpair failed and we were unable to recover it. 
00:27:17.717 [2024-11-20 07:23:22.015493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.717 [2024-11-20 07:23:22.015523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.717 qpair failed and we were unable to recover it. 00:27:17.717 [2024-11-20 07:23:22.015643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.717 [2024-11-20 07:23:22.015674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.717 qpair failed and we were unable to recover it. 00:27:17.717 [2024-11-20 07:23:22.015863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.717 [2024-11-20 07:23:22.015893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.717 qpair failed and we were unable to recover it. 00:27:17.717 [2024-11-20 07:23:22.016087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.717 [2024-11-20 07:23:22.016119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.717 qpair failed and we were unable to recover it. 00:27:17.717 [2024-11-20 07:23:22.016304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.717 [2024-11-20 07:23:22.016335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.717 qpair failed and we were unable to recover it. 
00:27:17.721 [2024-11-20 07:23:22.038256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.721 [2024-11-20 07:23:22.038288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.721 qpair failed and we were unable to recover it. 00:27:17.721 [2024-11-20 07:23:22.038419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.721 [2024-11-20 07:23:22.038451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.721 qpair failed and we were unable to recover it. 00:27:17.721 [2024-11-20 07:23:22.038632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.721 [2024-11-20 07:23:22.038664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.721 qpair failed and we were unable to recover it. 00:27:17.721 [2024-11-20 07:23:22.038795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.721 [2024-11-20 07:23:22.038827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.721 qpair failed and we were unable to recover it. 00:27:17.721 [2024-11-20 07:23:22.039014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.721 [2024-11-20 07:23:22.039046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.721 qpair failed and we were unable to recover it. 
00:27:17.721 [2024-11-20 07:23:22.039172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.721 [2024-11-20 07:23:22.039203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.721 qpair failed and we were unable to recover it. 00:27:17.721 [2024-11-20 07:23:22.039313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.721 [2024-11-20 07:23:22.039345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.721 qpair failed and we were unable to recover it. 00:27:17.721 [2024-11-20 07:23:22.039520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.722 [2024-11-20 07:23:22.039551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.722 qpair failed and we were unable to recover it. 00:27:17.722 [2024-11-20 07:23:22.039668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.722 [2024-11-20 07:23:22.039701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.722 qpair failed and we were unable to recover it. 00:27:17.722 [2024-11-20 07:23:22.039810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.722 [2024-11-20 07:23:22.039842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.722 qpair failed and we were unable to recover it. 
00:27:17.722 [2024-11-20 07:23:22.039959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.722 [2024-11-20 07:23:22.039992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.722 qpair failed and we were unable to recover it. 00:27:17.722 [2024-11-20 07:23:22.040232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.722 [2024-11-20 07:23:22.040265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.722 qpair failed and we were unable to recover it. 00:27:17.722 [2024-11-20 07:23:22.040392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.722 [2024-11-20 07:23:22.040423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.722 qpair failed and we were unable to recover it. 00:27:17.722 [2024-11-20 07:23:22.040529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.722 [2024-11-20 07:23:22.040560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.722 qpair failed and we were unable to recover it. 00:27:17.722 [2024-11-20 07:23:22.040674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.722 [2024-11-20 07:23:22.040705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.722 qpair failed and we were unable to recover it. 
00:27:17.722 [2024-11-20 07:23:22.040903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.722 [2024-11-20 07:23:22.040941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.722 qpair failed and we were unable to recover it. 00:27:17.722 [2024-11-20 07:23:22.041142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.722 [2024-11-20 07:23:22.041175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.722 qpair failed and we were unable to recover it. 00:27:17.722 [2024-11-20 07:23:22.041361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.722 [2024-11-20 07:23:22.041393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.722 qpair failed and we were unable to recover it. 00:27:17.722 [2024-11-20 07:23:22.041496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.722 [2024-11-20 07:23:22.041529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.722 qpair failed and we were unable to recover it. 00:27:17.722 [2024-11-20 07:23:22.041653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.722 [2024-11-20 07:23:22.041684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.722 qpair failed and we were unable to recover it. 
00:27:17.722 [2024-11-20 07:23:22.041811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.722 [2024-11-20 07:23:22.041843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.722 qpair failed and we were unable to recover it. 00:27:17.722 [2024-11-20 07:23:22.042083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.722 [2024-11-20 07:23:22.042116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.722 qpair failed and we were unable to recover it. 00:27:17.722 [2024-11-20 07:23:22.042301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.722 [2024-11-20 07:23:22.042333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.722 qpair failed and we were unable to recover it. 00:27:17.722 [2024-11-20 07:23:22.042505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.722 [2024-11-20 07:23:22.042537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.722 qpair failed and we were unable to recover it. 00:27:17.722 [2024-11-20 07:23:22.042713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.722 [2024-11-20 07:23:22.042745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.722 qpair failed and we were unable to recover it. 
00:27:17.722 [2024-11-20 07:23:22.042858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.722 [2024-11-20 07:23:22.042888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.722 qpair failed and we were unable to recover it. 00:27:17.722 [2024-11-20 07:23:22.043008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.722 [2024-11-20 07:23:22.043041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.722 qpair failed and we were unable to recover it. 00:27:17.722 [2024-11-20 07:23:22.043239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.722 [2024-11-20 07:23:22.043271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.722 qpair failed and we were unable to recover it. 00:27:17.722 [2024-11-20 07:23:22.043385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.722 [2024-11-20 07:23:22.043417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.722 qpair failed and we were unable to recover it. 00:27:17.722 [2024-11-20 07:23:22.043598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.722 [2024-11-20 07:23:22.043630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.722 qpair failed and we were unable to recover it. 
00:27:17.722 [2024-11-20 07:23:22.043803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.722 [2024-11-20 07:23:22.043835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.722 qpair failed and we were unable to recover it. 00:27:17.722 [2024-11-20 07:23:22.044016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.722 [2024-11-20 07:23:22.044049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.722 qpair failed and we were unable to recover it. 00:27:17.722 [2024-11-20 07:23:22.044158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.722 [2024-11-20 07:23:22.044190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.722 qpair failed and we were unable to recover it. 00:27:17.722 [2024-11-20 07:23:22.044302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.722 [2024-11-20 07:23:22.044333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.722 qpair failed and we were unable to recover it. 00:27:17.722 [2024-11-20 07:23:22.044520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.722 [2024-11-20 07:23:22.044551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.722 qpair failed and we were unable to recover it. 
00:27:17.722 [2024-11-20 07:23:22.044658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.722 [2024-11-20 07:23:22.044689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.722 qpair failed and we were unable to recover it. 00:27:17.722 [2024-11-20 07:23:22.044804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.722 [2024-11-20 07:23:22.044835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.722 qpair failed and we were unable to recover it. 00:27:17.722 [2024-11-20 07:23:22.044944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.722 [2024-11-20 07:23:22.044987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.722 qpair failed and we were unable to recover it. 00:27:17.722 [2024-11-20 07:23:22.045262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.723 [2024-11-20 07:23:22.045294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.723 qpair failed and we were unable to recover it. 00:27:17.723 [2024-11-20 07:23:22.045433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.723 [2024-11-20 07:23:22.045464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.723 qpair failed and we were unable to recover it. 
00:27:17.723 [2024-11-20 07:23:22.045576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.723 [2024-11-20 07:23:22.045607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.723 qpair failed and we were unable to recover it. 00:27:17.723 [2024-11-20 07:23:22.045727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.723 [2024-11-20 07:23:22.045759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.723 qpair failed and we were unable to recover it. 00:27:17.723 [2024-11-20 07:23:22.045867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.723 [2024-11-20 07:23:22.045904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.723 qpair failed and we were unable to recover it. 00:27:17.723 [2024-11-20 07:23:22.046048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.723 [2024-11-20 07:23:22.046081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.723 qpair failed and we were unable to recover it. 00:27:17.723 [2024-11-20 07:23:22.046324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.723 [2024-11-20 07:23:22.046356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.723 qpair failed and we were unable to recover it. 
00:27:17.723 [2024-11-20 07:23:22.046475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.723 [2024-11-20 07:23:22.046507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.723 qpair failed and we were unable to recover it. 00:27:17.723 [2024-11-20 07:23:22.046641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.723 [2024-11-20 07:23:22.046673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.723 qpair failed and we were unable to recover it. 00:27:17.723 [2024-11-20 07:23:22.046806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.723 [2024-11-20 07:23:22.046839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.723 qpair failed and we were unable to recover it. 00:27:17.723 [2024-11-20 07:23:22.047032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.723 [2024-11-20 07:23:22.047066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.723 qpair failed and we were unable to recover it. 00:27:17.723 [2024-11-20 07:23:22.047260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.723 [2024-11-20 07:23:22.047293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.723 qpair failed and we were unable to recover it. 
00:27:17.723 [2024-11-20 07:23:22.047511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.723 [2024-11-20 07:23:22.047543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.723 qpair failed and we were unable to recover it. 00:27:17.723 [2024-11-20 07:23:22.047665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.723 [2024-11-20 07:23:22.047697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.723 qpair failed and we were unable to recover it. 00:27:17.723 [2024-11-20 07:23:22.047882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.723 [2024-11-20 07:23:22.047914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.723 qpair failed and we were unable to recover it. 00:27:17.723 [2024-11-20 07:23:22.048123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.723 [2024-11-20 07:23:22.048156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.723 qpair failed and we were unable to recover it. 00:27:17.723 [2024-11-20 07:23:22.048279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.723 [2024-11-20 07:23:22.048311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.723 qpair failed and we were unable to recover it. 
00:27:17.723 [2024-11-20 07:23:22.048446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.723 [2024-11-20 07:23:22.048478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.723 qpair failed and we were unable to recover it. 00:27:17.723 [2024-11-20 07:23:22.048604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.723 [2024-11-20 07:23:22.048636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.723 qpair failed and we were unable to recover it. 00:27:17.723 [2024-11-20 07:23:22.048758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.723 [2024-11-20 07:23:22.048790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.723 qpair failed and we were unable to recover it. 00:27:17.723 [2024-11-20 07:23:22.048918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.723 [2024-11-20 07:23:22.048957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.723 qpair failed and we were unable to recover it. 00:27:17.723 [2024-11-20 07:23:22.049075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.723 [2024-11-20 07:23:22.049106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.723 qpair failed and we were unable to recover it. 
00:27:17.723 [2024-11-20 07:23:22.049284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.723 [2024-11-20 07:23:22.049316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.723 qpair failed and we were unable to recover it. 00:27:17.723 [2024-11-20 07:23:22.049423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.723 [2024-11-20 07:23:22.049454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.723 qpair failed and we were unable to recover it. 00:27:17.723 [2024-11-20 07:23:22.049637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.723 [2024-11-20 07:23:22.049669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.723 qpair failed and we were unable to recover it. 00:27:17.723 [2024-11-20 07:23:22.049772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.723 [2024-11-20 07:23:22.049804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.723 qpair failed and we were unable to recover it. 00:27:17.723 [2024-11-20 07:23:22.050001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.723 [2024-11-20 07:23:22.050034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.723 qpair failed and we were unable to recover it. 
00:27:17.724 [2024-11-20 07:23:22.050220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.724 [2024-11-20 07:23:22.050252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.724 qpair failed and we were unable to recover it. 00:27:17.724 [2024-11-20 07:23:22.050442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.724 [2024-11-20 07:23:22.050473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.724 qpair failed and we were unable to recover it. 00:27:17.724 [2024-11-20 07:23:22.050593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.724 [2024-11-20 07:23:22.050625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.724 qpair failed and we were unable to recover it. 00:27:17.724 [2024-11-20 07:23:22.050809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.724 [2024-11-20 07:23:22.050840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.724 qpair failed and we were unable to recover it. 00:27:17.724 [2024-11-20 07:23:22.050969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.724 [2024-11-20 07:23:22.051002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.724 qpair failed and we were unable to recover it. 
00:27:17.724 [2024-11-20 07:23:22.051149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.724 [2024-11-20 07:23:22.051181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.724 qpair failed and we were unable to recover it. 00:27:17.724 [2024-11-20 07:23:22.051353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.724 [2024-11-20 07:23:22.051385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.724 qpair failed and we were unable to recover it. 00:27:17.724 [2024-11-20 07:23:22.051510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.724 [2024-11-20 07:23:22.051542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.724 qpair failed and we were unable to recover it. 00:27:17.724 [2024-11-20 07:23:22.051723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.724 [2024-11-20 07:23:22.051755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.724 qpair failed and we were unable to recover it. 00:27:17.724 [2024-11-20 07:23:22.051864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.724 [2024-11-20 07:23:22.051896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.724 qpair failed and we were unable to recover it. 
00:27:17.724 [2024-11-20 07:23:22.052047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.724 [2024-11-20 07:23:22.052080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.724 qpair failed and we were unable to recover it. 00:27:17.724 [2024-11-20 07:23:22.052254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.724 [2024-11-20 07:23:22.052285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.724 qpair failed and we were unable to recover it. 00:27:17.724 [2024-11-20 07:23:22.052404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.724 [2024-11-20 07:23:22.052437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.724 qpair failed and we were unable to recover it. 00:27:17.724 [2024-11-20 07:23:22.052550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.724 [2024-11-20 07:23:22.052581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.724 qpair failed and we were unable to recover it. 00:27:17.724 [2024-11-20 07:23:22.052712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.724 [2024-11-20 07:23:22.052745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.724 qpair failed and we were unable to recover it. 
00:27:17.724 [2024-11-20 07:23:22.052855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.724 [2024-11-20 07:23:22.052886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.724 qpair failed and we were unable to recover it. 00:27:17.724 [2024-11-20 07:23:22.053009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.724 [2024-11-20 07:23:22.053042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.724 qpair failed and we were unable to recover it. 00:27:17.724 [2024-11-20 07:23:22.053220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.724 [2024-11-20 07:23:22.053251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.724 qpair failed and we were unable to recover it. 00:27:17.724 [2024-11-20 07:23:22.053425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.724 [2024-11-20 07:23:22.053496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.724 qpair failed and we were unable to recover it. 00:27:17.724 [2024-11-20 07:23:22.053703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.724 [2024-11-20 07:23:22.053738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.724 qpair failed and we were unable to recover it. 
00:27:17.724 [2024-11-20 07:23:22.053858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.724 [2024-11-20 07:23:22.053891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:17.724 qpair failed and we were unable to recover it.
00:27:17.724 [2024-11-20 07:23:22.054088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.724 [2024-11-20 07:23:22.054123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:17.724 qpair failed and we were unable to recover it.
00:27:17.724 [2024-11-20 07:23:22.054243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.724 [2024-11-20 07:23:22.054275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:17.724 qpair failed and we were unable to recover it.
00:27:17.724 [2024-11-20 07:23:22.054409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.724 [2024-11-20 07:23:22.054441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:17.724 qpair failed and we were unable to recover it.
00:27:17.724 [2024-11-20 07:23:22.054724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.724 [2024-11-20 07:23:22.054755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:17.724 qpair failed and we were unable to recover it.
00:27:17.724 [2024-11-20 07:23:22.054932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.724 [2024-11-20 07:23:22.054976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:17.724 qpair failed and we were unable to recover it.
00:27:17.724 [2024-11-20 07:23:22.055084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.724 [2024-11-20 07:23:22.055115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:17.724 qpair failed and we were unable to recover it.
00:27:17.724 [2024-11-20 07:23:22.055240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.724 [2024-11-20 07:23:22.055272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:17.724 qpair failed and we were unable to recover it.
00:27:17.724 [2024-11-20 07:23:22.055401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.724 [2024-11-20 07:23:22.055431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:17.724 qpair failed and we were unable to recover it.
00:27:17.724 [2024-11-20 07:23:22.055611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.724 [2024-11-20 07:23:22.055643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:17.724 qpair failed and we were unable to recover it.
00:27:17.724 [2024-11-20 07:23:22.055817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.724 [2024-11-20 07:23:22.055848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:17.724 qpair failed and we were unable to recover it.
00:27:17.724 [2024-11-20 07:23:22.055960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.725 [2024-11-20 07:23:22.056003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:17.725 qpair failed and we were unable to recover it.
00:27:17.725 [2024-11-20 07:23:22.056144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.725 [2024-11-20 07:23:22.056175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:17.725 qpair failed and we were unable to recover it.
00:27:17.725 [2024-11-20 07:23:22.056290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.725 [2024-11-20 07:23:22.056321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:17.725 qpair failed and we were unable to recover it.
00:27:17.725 [2024-11-20 07:23:22.056441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.725 [2024-11-20 07:23:22.056472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:17.725 qpair failed and we were unable to recover it.
00:27:17.725 [2024-11-20 07:23:22.056577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.725 [2024-11-20 07:23:22.056609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:17.725 qpair failed and we were unable to recover it.
00:27:17.725 [2024-11-20 07:23:22.056733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.725 [2024-11-20 07:23:22.056764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:17.725 qpair failed and we were unable to recover it.
00:27:17.725 [2024-11-20 07:23:22.056895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.725 [2024-11-20 07:23:22.056927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:17.725 qpair failed and we were unable to recover it.
00:27:17.725 [2024-11-20 07:23:22.057045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.725 [2024-11-20 07:23:22.057076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:17.725 qpair failed and we were unable to recover it.
00:27:17.725 [2024-11-20 07:23:22.057193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.725 [2024-11-20 07:23:22.057225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:17.725 qpair failed and we were unable to recover it.
00:27:17.725 [2024-11-20 07:23:22.057335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.725 [2024-11-20 07:23:22.057366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:17.725 qpair failed and we were unable to recover it.
00:27:17.725 [2024-11-20 07:23:22.057486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.725 [2024-11-20 07:23:22.057518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:17.725 qpair failed and we were unable to recover it.
00:27:17.725 [2024-11-20 07:23:22.057715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.725 [2024-11-20 07:23:22.057746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:17.725 qpair failed and we were unable to recover it.
00:27:17.725 [2024-11-20 07:23:22.057934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.725 [2024-11-20 07:23:22.057981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:17.725 qpair failed and we were unable to recover it.
00:27:17.725 [2024-11-20 07:23:22.058085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.725 [2024-11-20 07:23:22.058116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:17.725 qpair failed and we were unable to recover it.
00:27:17.725 [2024-11-20 07:23:22.058249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.725 [2024-11-20 07:23:22.058280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:17.725 qpair failed and we were unable to recover it.
00:27:17.725 [2024-11-20 07:23:22.058401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.725 [2024-11-20 07:23:22.058432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:17.725 qpair failed and we were unable to recover it.
00:27:17.725 [2024-11-20 07:23:22.058617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.725 [2024-11-20 07:23:22.058649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:17.725 qpair failed and we were unable to recover it.
00:27:17.725 [2024-11-20 07:23:22.058885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.725 [2024-11-20 07:23:22.058916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:17.725 qpair failed and we were unable to recover it.
00:27:17.725 [2024-11-20 07:23:22.059108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.725 [2024-11-20 07:23:22.059141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:17.725 qpair failed and we were unable to recover it.
00:27:17.725 [2024-11-20 07:23:22.059329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.725 [2024-11-20 07:23:22.059360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:17.725 qpair failed and we were unable to recover it.
00:27:17.725 [2024-11-20 07:23:22.059487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.725 [2024-11-20 07:23:22.059518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:17.725 qpair failed and we were unable to recover it.
00:27:17.725 [2024-11-20 07:23:22.059612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.725 [2024-11-20 07:23:22.059643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:17.725 qpair failed and we were unable to recover it.
00:27:17.725 [2024-11-20 07:23:22.059780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.725 [2024-11-20 07:23:22.059811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:17.725 qpair failed and we were unable to recover it.
00:27:17.725 [2024-11-20 07:23:22.059934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.725 [2024-11-20 07:23:22.059976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:17.725 qpair failed and we were unable to recover it.
00:27:17.725 [2024-11-20 07:23:22.060220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.725 [2024-11-20 07:23:22.060251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:17.725 qpair failed and we were unable to recover it.
00:27:17.725 [2024-11-20 07:23:22.060368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.725 [2024-11-20 07:23:22.060399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:17.725 qpair failed and we were unable to recover it.
00:27:17.725 [2024-11-20 07:23:22.060522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.725 [2024-11-20 07:23:22.060553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:17.725 qpair failed and we were unable to recover it.
00:27:17.725 [2024-11-20 07:23:22.060744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.725 [2024-11-20 07:23:22.060780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:17.725 qpair failed and we were unable to recover it.
00:27:17.725 [2024-11-20 07:23:22.060891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.725 [2024-11-20 07:23:22.060922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:17.725 qpair failed and we were unable to recover it.
00:27:17.725 [2024-11-20 07:23:22.061121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.725 [2024-11-20 07:23:22.061154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:17.725 qpair failed and we were unable to recover it.
00:27:17.725 [2024-11-20 07:23:22.061281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.725 [2024-11-20 07:23:22.061313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:17.725 qpair failed and we were unable to recover it.
00:27:17.725 [2024-11-20 07:23:22.061429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.725 [2024-11-20 07:23:22.061460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:17.725 qpair failed and we were unable to recover it.
00:27:17.726 [2024-11-20 07:23:22.061649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.726 [2024-11-20 07:23:22.061680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:17.726 qpair failed and we were unable to recover it.
00:27:17.726 [2024-11-20 07:23:22.061855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.726 [2024-11-20 07:23:22.061886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:17.726 qpair failed and we were unable to recover it.
00:27:17.726 [2024-11-20 07:23:22.062025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.726 [2024-11-20 07:23:22.062058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:17.726 qpair failed and we were unable to recover it.
00:27:17.726 [2024-11-20 07:23:22.062240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.726 [2024-11-20 07:23:22.062271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:17.726 qpair failed and we were unable to recover it.
00:27:17.726 [2024-11-20 07:23:22.062421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.726 [2024-11-20 07:23:22.062453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:17.726 qpair failed and we were unable to recover it.
00:27:17.726 [2024-11-20 07:23:22.062570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.726 [2024-11-20 07:23:22.062600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:17.726 qpair failed and we were unable to recover it.
00:27:17.726 [2024-11-20 07:23:22.062712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.726 [2024-11-20 07:23:22.062744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:17.726 qpair failed and we were unable to recover it.
00:27:17.726 [2024-11-20 07:23:22.062849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.726 [2024-11-20 07:23:22.062881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:17.726 qpair failed and we were unable to recover it.
00:27:17.726 [2024-11-20 07:23:22.063072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.726 [2024-11-20 07:23:22.063106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:17.726 qpair failed and we were unable to recover it.
00:27:17.726 [2024-11-20 07:23:22.063237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.726 [2024-11-20 07:23:22.063268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:17.726 qpair failed and we were unable to recover it.
00:27:17.726 [2024-11-20 07:23:22.063387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.726 [2024-11-20 07:23:22.063419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:17.726 qpair failed and we were unable to recover it.
00:27:17.726 [2024-11-20 07:23:22.063541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.726 [2024-11-20 07:23:22.063572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:17.726 qpair failed and we were unable to recover it.
00:27:17.726 [2024-11-20 07:23:22.063679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.726 [2024-11-20 07:23:22.063712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:17.726 qpair failed and we were unable to recover it.
00:27:17.726 [2024-11-20 07:23:22.063889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.726 [2024-11-20 07:23:22.063922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:17.726 qpair failed and we were unable to recover it.
00:27:17.726 [2024-11-20 07:23:22.064104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.726 [2024-11-20 07:23:22.064135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:17.726 qpair failed and we were unable to recover it.
00:27:17.726 [2024-11-20 07:23:22.064376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.726 [2024-11-20 07:23:22.064408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:17.726 qpair failed and we were unable to recover it.
00:27:17.726 [2024-11-20 07:23:22.064578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.726 [2024-11-20 07:23:22.064609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:17.726 qpair failed and we were unable to recover it.
00:27:17.726 [2024-11-20 07:23:22.064800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.726 [2024-11-20 07:23:22.064832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:17.726 qpair failed and we were unable to recover it.
00:27:17.726 [2024-11-20 07:23:22.064938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.726 [2024-11-20 07:23:22.064977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:17.726 qpair failed and we were unable to recover it.
00:27:17.726 [2024-11-20 07:23:22.065164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.726 [2024-11-20 07:23:22.065195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:17.726 qpair failed and we were unable to recover it.
00:27:17.726 [2024-11-20 07:23:22.065378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.726 [2024-11-20 07:23:22.065408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:17.726 qpair failed and we were unable to recover it.
00:27:17.726 [2024-11-20 07:23:22.065531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.726 [2024-11-20 07:23:22.065562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:17.726 qpair failed and we were unable to recover it.
00:27:17.726 [2024-11-20 07:23:22.065777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.726 [2024-11-20 07:23:22.065808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:17.726 qpair failed and we were unable to recover it.
00:27:17.726 [2024-11-20 07:23:22.065930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.726 [2024-11-20 07:23:22.065968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:17.726 qpair failed and we were unable to recover it.
00:27:17.726 [2024-11-20 07:23:22.066074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.726 [2024-11-20 07:23:22.066105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:17.726 qpair failed and we were unable to recover it.
00:27:17.726 [2024-11-20 07:23:22.066223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.726 [2024-11-20 07:23:22.066254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:17.726 qpair failed and we were unable to recover it.
00:27:17.726 [2024-11-20 07:23:22.066433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.726 [2024-11-20 07:23:22.066464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:17.726 qpair failed and we were unable to recover it.
00:27:17.726 [2024-11-20 07:23:22.066636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.726 [2024-11-20 07:23:22.066668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:17.726 qpair failed and we were unable to recover it.
00:27:17.726 [2024-11-20 07:23:22.066781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.726 [2024-11-20 07:23:22.066813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:17.726 qpair failed and we were unable to recover it.
00:27:17.726 [2024-11-20 07:23:22.067053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.726 [2024-11-20 07:23:22.067087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:17.726 qpair failed and we were unable to recover it.
00:27:17.726 [2024-11-20 07:23:22.067203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.726 [2024-11-20 07:23:22.067234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:17.726 qpair failed and we were unable to recover it.
00:27:17.726 [2024-11-20 07:23:22.067350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.727 [2024-11-20 07:23:22.067381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:17.727 qpair failed and we were unable to recover it.
00:27:17.727 [2024-11-20 07:23:22.067561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.727 [2024-11-20 07:23:22.067591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:17.727 qpair failed and we were unable to recover it.
00:27:17.727 [2024-11-20 07:23:22.067720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.727 [2024-11-20 07:23:22.067751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:17.727 qpair failed and we were unable to recover it.
00:27:17.727 [2024-11-20 07:23:22.067963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.727 [2024-11-20 07:23:22.067995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:17.727 qpair failed and we were unable to recover it.
00:27:17.727 [2024-11-20 07:23:22.068108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.727 [2024-11-20 07:23:22.068146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:17.727 qpair failed and we were unable to recover it.
00:27:17.727 [2024-11-20 07:23:22.068253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.727 [2024-11-20 07:23:22.068283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:17.727 qpair failed and we were unable to recover it.
00:27:17.727 [2024-11-20 07:23:22.068397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.727 [2024-11-20 07:23:22.068429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:17.727 qpair failed and we were unable to recover it.
00:27:17.727 [2024-11-20 07:23:22.068547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.727 [2024-11-20 07:23:22.068579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:17.727 qpair failed and we were unable to recover it.
00:27:17.727 [2024-11-20 07:23:22.068710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.727 [2024-11-20 07:23:22.068741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:17.727 qpair failed and we were unable to recover it.
00:27:17.727 [2024-11-20 07:23:22.068912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.727 [2024-11-20 07:23:22.068943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:17.727 qpair failed and we were unable to recover it.
00:27:17.727 [2024-11-20 07:23:22.069060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.727 [2024-11-20 07:23:22.069091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:17.727 qpair failed and we were unable to recover it.
00:27:17.727 [2024-11-20 07:23:22.069289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.727 [2024-11-20 07:23:22.069320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:17.727 qpair failed and we were unable to recover it.
00:27:17.727 [2024-11-20 07:23:22.069560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.727 [2024-11-20 07:23:22.069591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:17.727 qpair failed and we were unable to recover it.
00:27:17.727 [2024-11-20 07:23:22.069781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.727 [2024-11-20 07:23:22.069812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:17.727 qpair failed and we were unable to recover it.
00:27:17.727 [2024-11-20 07:23:22.070025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.727 [2024-11-20 07:23:22.070057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:17.727 qpair failed and we were unable to recover it.
00:27:17.727 [2024-11-20 07:23:22.070172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.727 [2024-11-20 07:23:22.070204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:17.727 qpair failed and we were unable to recover it.
00:27:17.727 [2024-11-20 07:23:22.070322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.727 [2024-11-20 07:23:22.070353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:17.727 qpair failed and we were unable to recover it.
00:27:17.727 [2024-11-20 07:23:22.070479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.727 [2024-11-20 07:23:22.070511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:17.727 qpair failed and we were unable to recover it.
00:27:17.727 [2024-11-20 07:23:22.070716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.727 [2024-11-20 07:23:22.070749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:17.727 qpair failed and we were unable to recover it.
00:27:17.727 [2024-11-20 07:23:22.070939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.727 [2024-11-20 07:23:22.070980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:17.727 qpair failed and we were unable to recover it.
00:27:17.727 [2024-11-20 07:23:22.071088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.727 [2024-11-20 07:23:22.071119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:17.727 qpair failed and we were unable to recover it.
00:27:17.727 [2024-11-20 07:23:22.071252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.727 [2024-11-20 07:23:22.071284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.727 qpair failed and we were unable to recover it. 00:27:17.727 [2024-11-20 07:23:22.071410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.727 [2024-11-20 07:23:22.071441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.727 qpair failed and we were unable to recover it. 00:27:17.727 [2024-11-20 07:23:22.071559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.727 [2024-11-20 07:23:22.071589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.727 qpair failed and we were unable to recover it. 00:27:17.727 [2024-11-20 07:23:22.071705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.727 [2024-11-20 07:23:22.071737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.727 qpair failed and we were unable to recover it. 00:27:17.727 [2024-11-20 07:23:22.071868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.727 [2024-11-20 07:23:22.071900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.727 qpair failed and we were unable to recover it. 
00:27:17.727 [2024-11-20 07:23:22.072012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.727 [2024-11-20 07:23:22.072045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.727 qpair failed and we were unable to recover it. 00:27:17.727 [2024-11-20 07:23:22.072178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.727 [2024-11-20 07:23:22.072209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.727 qpair failed and we were unable to recover it. 00:27:17.727 [2024-11-20 07:23:22.072326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.727 [2024-11-20 07:23:22.072355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.727 qpair failed and we were unable to recover it. 00:27:17.727 [2024-11-20 07:23:22.072566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.727 [2024-11-20 07:23:22.072598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.727 qpair failed and we were unable to recover it. 00:27:17.727 [2024-11-20 07:23:22.072782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.727 [2024-11-20 07:23:22.072812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.727 qpair failed and we were unable to recover it. 
00:27:17.727 [2024-11-20 07:23:22.073015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.728 [2024-11-20 07:23:22.073048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.728 qpair failed and we were unable to recover it. 00:27:17.728 [2024-11-20 07:23:22.073287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.728 [2024-11-20 07:23:22.073319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.728 qpair failed and we were unable to recover it. 00:27:17.728 [2024-11-20 07:23:22.073498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.728 [2024-11-20 07:23:22.073530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.728 qpair failed and we were unable to recover it. 00:27:17.728 [2024-11-20 07:23:22.073633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.728 [2024-11-20 07:23:22.073663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.728 qpair failed and we were unable to recover it. 00:27:17.728 [2024-11-20 07:23:22.073784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.728 [2024-11-20 07:23:22.073816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.728 qpair failed and we were unable to recover it. 
00:27:17.728 [2024-11-20 07:23:22.073994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.728 [2024-11-20 07:23:22.074028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.728 qpair failed and we were unable to recover it. 00:27:17.728 [2024-11-20 07:23:22.074201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.728 [2024-11-20 07:23:22.074235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.728 qpair failed and we were unable to recover it. 00:27:17.728 [2024-11-20 07:23:22.074351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.728 [2024-11-20 07:23:22.074383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.728 qpair failed and we were unable to recover it. 00:27:17.728 [2024-11-20 07:23:22.074487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.728 [2024-11-20 07:23:22.074518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.728 qpair failed and we were unable to recover it. 00:27:17.728 [2024-11-20 07:23:22.074647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.728 [2024-11-20 07:23:22.074678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.728 qpair failed and we were unable to recover it. 
00:27:17.728 [2024-11-20 07:23:22.074784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.728 [2024-11-20 07:23:22.074815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.728 qpair failed and we were unable to recover it. 00:27:17.728 [2024-11-20 07:23:22.074937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.728 [2024-11-20 07:23:22.074978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.728 qpair failed and we were unable to recover it. 00:27:17.728 [2024-11-20 07:23:22.075096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.728 [2024-11-20 07:23:22.075128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.728 qpair failed and we were unable to recover it. 00:27:17.728 [2024-11-20 07:23:22.075237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.728 [2024-11-20 07:23:22.075273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.728 qpair failed and we were unable to recover it. 00:27:17.728 [2024-11-20 07:23:22.075384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.728 [2024-11-20 07:23:22.075414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.728 qpair failed and we were unable to recover it. 
00:27:17.728 [2024-11-20 07:23:22.075535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.728 [2024-11-20 07:23:22.075566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.728 qpair failed and we were unable to recover it. 00:27:17.728 [2024-11-20 07:23:22.075669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.728 [2024-11-20 07:23:22.075701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.728 qpair failed and we were unable to recover it. 00:27:17.728 [2024-11-20 07:23:22.075819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.728 [2024-11-20 07:23:22.075850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.728 qpair failed and we were unable to recover it. 00:27:17.728 [2024-11-20 07:23:22.075990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.728 [2024-11-20 07:23:22.076024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.728 qpair failed and we were unable to recover it. 00:27:17.728 [2024-11-20 07:23:22.076231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.728 [2024-11-20 07:23:22.076262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.728 qpair failed and we were unable to recover it. 
00:27:17.728 [2024-11-20 07:23:22.076378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.728 [2024-11-20 07:23:22.076410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.728 qpair failed and we were unable to recover it. 00:27:17.728 [2024-11-20 07:23:22.076522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.728 [2024-11-20 07:23:22.076555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.728 qpair failed and we were unable to recover it. 00:27:17.728 [2024-11-20 07:23:22.076781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.728 [2024-11-20 07:23:22.076813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.728 qpair failed and we were unable to recover it. 00:27:17.728 [2024-11-20 07:23:22.076960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.728 [2024-11-20 07:23:22.076993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.728 qpair failed and we were unable to recover it. 00:27:17.728 [2024-11-20 07:23:22.077112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.728 [2024-11-20 07:23:22.077144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.728 qpair failed and we were unable to recover it. 
00:27:17.728 [2024-11-20 07:23:22.077273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.728 [2024-11-20 07:23:22.077304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.728 qpair failed and we were unable to recover it. 00:27:17.728 [2024-11-20 07:23:22.077431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.728 [2024-11-20 07:23:22.077462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.728 qpair failed and we were unable to recover it. 00:27:17.728 [2024-11-20 07:23:22.077640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.728 [2024-11-20 07:23:22.077671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.729 qpair failed and we were unable to recover it. 00:27:17.729 [2024-11-20 07:23:22.077874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.729 [2024-11-20 07:23:22.077906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.729 qpair failed and we were unable to recover it. 00:27:17.729 [2024-11-20 07:23:22.078113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.729 [2024-11-20 07:23:22.078146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.729 qpair failed and we were unable to recover it. 
00:27:17.729 [2024-11-20 07:23:22.078333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.729 [2024-11-20 07:23:22.078364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.729 qpair failed and we were unable to recover it. 00:27:17.729 [2024-11-20 07:23:22.078486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.729 [2024-11-20 07:23:22.078517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.729 qpair failed and we were unable to recover it. 00:27:17.729 [2024-11-20 07:23:22.078635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.729 [2024-11-20 07:23:22.078667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.729 qpair failed and we were unable to recover it. 00:27:17.729 [2024-11-20 07:23:22.078848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.729 [2024-11-20 07:23:22.078880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.729 qpair failed and we were unable to recover it. 00:27:17.729 [2024-11-20 07:23:22.079003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.729 [2024-11-20 07:23:22.079035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.729 qpair failed and we were unable to recover it. 
00:27:17.729 [2024-11-20 07:23:22.079221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.729 [2024-11-20 07:23:22.079254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.729 qpair failed and we were unable to recover it. 00:27:17.729 [2024-11-20 07:23:22.079368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.729 [2024-11-20 07:23:22.079400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.729 qpair failed and we were unable to recover it. 00:27:17.729 [2024-11-20 07:23:22.079578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.729 [2024-11-20 07:23:22.079609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.729 qpair failed and we were unable to recover it. 00:27:17.729 [2024-11-20 07:23:22.079716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.729 [2024-11-20 07:23:22.079748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.729 qpair failed and we were unable to recover it. 00:27:17.729 [2024-11-20 07:23:22.079871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.729 [2024-11-20 07:23:22.079903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.729 qpair failed and we were unable to recover it. 
00:27:17.729 [2024-11-20 07:23:22.080141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.729 [2024-11-20 07:23:22.080174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.729 qpair failed and we were unable to recover it. 00:27:17.729 [2024-11-20 07:23:22.080347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.729 [2024-11-20 07:23:22.080379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.729 qpair failed and we were unable to recover it. 00:27:17.729 [2024-11-20 07:23:22.080484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.729 [2024-11-20 07:23:22.080516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.729 qpair failed and we were unable to recover it. 00:27:17.729 [2024-11-20 07:23:22.080637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.729 [2024-11-20 07:23:22.080669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.729 qpair failed and we were unable to recover it. 00:27:17.729 [2024-11-20 07:23:22.080808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.729 [2024-11-20 07:23:22.080839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.729 qpair failed and we were unable to recover it. 
00:27:17.729 [2024-11-20 07:23:22.081012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.729 [2024-11-20 07:23:22.081046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.729 qpair failed and we were unable to recover it. 00:27:17.729 [2024-11-20 07:23:22.081162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.729 [2024-11-20 07:23:22.081195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.729 qpair failed and we were unable to recover it. 00:27:17.729 [2024-11-20 07:23:22.081368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.729 [2024-11-20 07:23:22.081399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.729 qpair failed and we were unable to recover it. 00:27:17.729 [2024-11-20 07:23:22.081533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.729 [2024-11-20 07:23:22.081566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.729 qpair failed and we were unable to recover it. 00:27:17.729 [2024-11-20 07:23:22.081688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.729 [2024-11-20 07:23:22.081720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.729 qpair failed and we were unable to recover it. 
00:27:17.729 [2024-11-20 07:23:22.081856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.729 [2024-11-20 07:23:22.081888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.729 qpair failed and we were unable to recover it. 00:27:17.729 [2024-11-20 07:23:22.082093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.729 [2024-11-20 07:23:22.082126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.729 qpair failed and we were unable to recover it. 00:27:17.729 [2024-11-20 07:23:22.082252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.729 [2024-11-20 07:23:22.082285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.729 qpair failed and we were unable to recover it. 00:27:17.729 [2024-11-20 07:23:22.082417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.729 [2024-11-20 07:23:22.082454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.729 qpair failed and we were unable to recover it. 00:27:17.729 [2024-11-20 07:23:22.082578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.729 [2024-11-20 07:23:22.082610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.729 qpair failed and we were unable to recover it. 
00:27:17.729 [2024-11-20 07:23:22.082846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.729 [2024-11-20 07:23:22.082877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.729 qpair failed and we were unable to recover it. 00:27:17.729 [2024-11-20 07:23:22.083059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.729 [2024-11-20 07:23:22.083093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.729 qpair failed and we were unable to recover it. 00:27:17.729 [2024-11-20 07:23:22.083199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.729 [2024-11-20 07:23:22.083230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.729 qpair failed and we were unable to recover it. 00:27:17.729 [2024-11-20 07:23:22.083413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.729 [2024-11-20 07:23:22.083444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.729 qpair failed and we were unable to recover it. 00:27:17.729 [2024-11-20 07:23:22.083574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.730 [2024-11-20 07:23:22.083605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.730 qpair failed and we were unable to recover it. 
00:27:17.730 [2024-11-20 07:23:22.083733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.730 [2024-11-20 07:23:22.083764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.730 qpair failed and we were unable to recover it. 00:27:17.730 [2024-11-20 07:23:22.083994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.730 [2024-11-20 07:23:22.084028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.730 qpair failed and we were unable to recover it. 00:27:17.730 [2024-11-20 07:23:22.084164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.730 [2024-11-20 07:23:22.084195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.730 qpair failed and we were unable to recover it. 00:27:17.730 [2024-11-20 07:23:22.084310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.730 [2024-11-20 07:23:22.084341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.730 qpair failed and we were unable to recover it. 00:27:17.730 [2024-11-20 07:23:22.084475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.730 [2024-11-20 07:23:22.084507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.730 qpair failed and we were unable to recover it. 
00:27:17.730 [2024-11-20 07:23:22.084679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.730 [2024-11-20 07:23:22.084710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.730 qpair failed and we were unable to recover it. 00:27:17.730 [2024-11-20 07:23:22.084820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.730 [2024-11-20 07:23:22.084851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.730 qpair failed and we were unable to recover it. 00:27:17.730 [2024-11-20 07:23:22.085041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.730 [2024-11-20 07:23:22.085074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.730 qpair failed and we were unable to recover it. 00:27:17.730 [2024-11-20 07:23:22.085249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.730 [2024-11-20 07:23:22.085281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.730 qpair failed and we were unable to recover it. 00:27:17.730 [2024-11-20 07:23:22.085461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.730 [2024-11-20 07:23:22.085492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.730 qpair failed and we were unable to recover it. 
00:27:17.730 [2024-11-20 07:23:22.085592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.730 [2024-11-20 07:23:22.085624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.730 qpair failed and we were unable to recover it. 00:27:17.730 [2024-11-20 07:23:22.085732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.730 [2024-11-20 07:23:22.085763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.730 qpair failed and we were unable to recover it. 00:27:17.730 [2024-11-20 07:23:22.085866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.730 [2024-11-20 07:23:22.085898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.730 qpair failed and we were unable to recover it. 00:27:17.730 [2024-11-20 07:23:22.086149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.730 [2024-11-20 07:23:22.086182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.730 qpair failed and we were unable to recover it. 00:27:17.730 [2024-11-20 07:23:22.086305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.730 [2024-11-20 07:23:22.086337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.730 qpair failed and we were unable to recover it. 
00:27:17.730 [2024-11-20 07:23:22.086513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.730 [2024-11-20 07:23:22.086544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.730 qpair failed and we were unable to recover it.
[The same three-line error sequence — posix.c:1054:posix_sock_create connect() failed, errno = 111 (ECONNREFUSED); nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it." — repeats continuously from 07:23:22.086650 through 07:23:22.107338; repeated occurrences elided.]
00:27:17.734 [2024-11-20 07:23:22.107509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.734 [2024-11-20 07:23:22.107540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.734 qpair failed and we were unable to recover it. 00:27:17.734 [2024-11-20 07:23:22.107712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.734 [2024-11-20 07:23:22.107743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.734 qpair failed and we were unable to recover it. 00:27:17.734 [2024-11-20 07:23:22.107861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.734 [2024-11-20 07:23:22.107893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.734 qpair failed and we were unable to recover it. 00:27:17.734 [2024-11-20 07:23:22.108013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.734 [2024-11-20 07:23:22.108045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.734 qpair failed and we were unable to recover it. 00:27:17.734 [2024-11-20 07:23:22.108149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.734 [2024-11-20 07:23:22.108180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.734 qpair failed and we were unable to recover it. 
00:27:17.734 [2024-11-20 07:23:22.108410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.734 [2024-11-20 07:23:22.108442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.734 qpair failed and we were unable to recover it. 00:27:17.734 [2024-11-20 07:23:22.108653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.734 [2024-11-20 07:23:22.108684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.734 qpair failed and we were unable to recover it. 00:27:17.734 [2024-11-20 07:23:22.108867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.734 [2024-11-20 07:23:22.108899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.734 qpair failed and we were unable to recover it. 00:27:17.734 [2024-11-20 07:23:22.109040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.734 [2024-11-20 07:23:22.109073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.734 qpair failed and we were unable to recover it. 00:27:17.734 [2024-11-20 07:23:22.109179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.734 [2024-11-20 07:23:22.109210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.734 qpair failed and we were unable to recover it. 
00:27:17.734 [2024-11-20 07:23:22.109465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.734 [2024-11-20 07:23:22.109537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.734 qpair failed and we were unable to recover it. 00:27:17.734 [2024-11-20 07:23:22.109677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.734 [2024-11-20 07:23:22.109713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.734 qpair failed and we were unable to recover it. 00:27:17.734 [2024-11-20 07:23:22.109852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.734 [2024-11-20 07:23:22.109885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.734 qpair failed and we were unable to recover it. 00:27:17.734 [2024-11-20 07:23:22.110100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.734 [2024-11-20 07:23:22.110135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.734 qpair failed and we were unable to recover it. 00:27:17.734 [2024-11-20 07:23:22.110254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.734 [2024-11-20 07:23:22.110287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.734 qpair failed and we were unable to recover it. 
00:27:17.734 [2024-11-20 07:23:22.110415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.734 [2024-11-20 07:23:22.110447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.734 qpair failed and we were unable to recover it. 00:27:17.734 [2024-11-20 07:23:22.110636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.734 [2024-11-20 07:23:22.110668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.734 qpair failed and we were unable to recover it. 00:27:17.734 [2024-11-20 07:23:22.110921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.734 [2024-11-20 07:23:22.110964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.734 qpair failed and we were unable to recover it. 00:27:17.734 [2024-11-20 07:23:22.111109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.734 [2024-11-20 07:23:22.111142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.734 qpair failed and we were unable to recover it. 00:27:17.734 [2024-11-20 07:23:22.111335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.734 [2024-11-20 07:23:22.111368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.734 qpair failed and we were unable to recover it. 
00:27:17.734 [2024-11-20 07:23:22.111555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.734 [2024-11-20 07:23:22.111587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.734 qpair failed and we were unable to recover it. 00:27:17.734 [2024-11-20 07:23:22.111725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.734 [2024-11-20 07:23:22.111757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.734 qpair failed and we were unable to recover it. 00:27:17.734 [2024-11-20 07:23:22.111878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.734 [2024-11-20 07:23:22.111911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.734 qpair failed and we were unable to recover it. 00:27:17.734 [2024-11-20 07:23:22.112053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.734 [2024-11-20 07:23:22.112096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.734 qpair failed and we were unable to recover it. 00:27:17.735 [2024-11-20 07:23:22.112224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.735 [2024-11-20 07:23:22.112256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.735 qpair failed and we were unable to recover it. 
00:27:17.735 [2024-11-20 07:23:22.112439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.735 [2024-11-20 07:23:22.112470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.735 qpair failed and we were unable to recover it. 00:27:17.735 [2024-11-20 07:23:22.112582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.735 [2024-11-20 07:23:22.112614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.735 qpair failed and we were unable to recover it. 00:27:17.735 [2024-11-20 07:23:22.112803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.735 [2024-11-20 07:23:22.112835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.735 qpair failed and we were unable to recover it. 00:27:17.735 [2024-11-20 07:23:22.112971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.735 [2024-11-20 07:23:22.113005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.735 qpair failed and we were unable to recover it. 00:27:17.735 [2024-11-20 07:23:22.113109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.735 [2024-11-20 07:23:22.113141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.735 qpair failed and we were unable to recover it. 
00:27:17.735 [2024-11-20 07:23:22.113325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.735 [2024-11-20 07:23:22.113357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.735 qpair failed and we were unable to recover it. 00:27:17.735 [2024-11-20 07:23:22.113532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.735 [2024-11-20 07:23:22.113564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.735 qpair failed and we were unable to recover it. 00:27:17.735 [2024-11-20 07:23:22.113738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.735 [2024-11-20 07:23:22.113770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.735 qpair failed and we were unable to recover it. 00:27:17.735 [2024-11-20 07:23:22.113956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.735 [2024-11-20 07:23:22.113990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.735 qpair failed and we were unable to recover it. 00:27:17.735 [2024-11-20 07:23:22.114193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.735 [2024-11-20 07:23:22.114226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.735 qpair failed and we were unable to recover it. 
00:27:17.735 [2024-11-20 07:23:22.114339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.735 [2024-11-20 07:23:22.114372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.735 qpair failed and we were unable to recover it. 00:27:17.735 [2024-11-20 07:23:22.114486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.735 [2024-11-20 07:23:22.114519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.735 qpair failed and we were unable to recover it. 00:27:17.735 [2024-11-20 07:23:22.114712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.735 [2024-11-20 07:23:22.114743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.735 qpair failed and we were unable to recover it. 00:27:17.735 [2024-11-20 07:23:22.114853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.735 [2024-11-20 07:23:22.114885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.735 qpair failed and we were unable to recover it. 00:27:17.735 [2024-11-20 07:23:22.115069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.735 [2024-11-20 07:23:22.115103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.735 qpair failed and we were unable to recover it. 
00:27:17.735 [2024-11-20 07:23:22.115232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.735 [2024-11-20 07:23:22.115263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.735 qpair failed and we were unable to recover it. 00:27:17.735 [2024-11-20 07:23:22.115453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.735 [2024-11-20 07:23:22.115484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.735 qpair failed and we were unable to recover it. 00:27:17.735 [2024-11-20 07:23:22.115596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.735 [2024-11-20 07:23:22.115629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.735 qpair failed and we were unable to recover it. 00:27:17.735 [2024-11-20 07:23:22.115760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.735 [2024-11-20 07:23:22.115791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.735 qpair failed and we were unable to recover it. 00:27:17.735 [2024-11-20 07:23:22.115915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.735 [2024-11-20 07:23:22.115958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.735 qpair failed and we were unable to recover it. 
00:27:17.735 [2024-11-20 07:23:22.116084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.735 [2024-11-20 07:23:22.116116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.735 qpair failed and we were unable to recover it. 00:27:17.735 [2024-11-20 07:23:22.116308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.735 [2024-11-20 07:23:22.116340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.735 qpair failed and we were unable to recover it. 00:27:17.735 [2024-11-20 07:23:22.116445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.735 [2024-11-20 07:23:22.116476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.735 qpair failed and we were unable to recover it. 00:27:17.735 [2024-11-20 07:23:22.116601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.735 [2024-11-20 07:23:22.116635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.735 qpair failed and we were unable to recover it. 00:27:17.735 [2024-11-20 07:23:22.116763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.735 [2024-11-20 07:23:22.116794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.735 qpair failed and we were unable to recover it. 
00:27:17.735 [2024-11-20 07:23:22.116961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.735 [2024-11-20 07:23:22.117031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.735 qpair failed and we were unable to recover it. 00:27:17.735 [2024-11-20 07:23:22.117299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.735 [2024-11-20 07:23:22.117334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.735 qpair failed and we were unable to recover it. 00:27:17.735 [2024-11-20 07:23:22.117452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.735 [2024-11-20 07:23:22.117485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.735 qpair failed and we were unable to recover it. 00:27:17.735 [2024-11-20 07:23:22.117662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.735 [2024-11-20 07:23:22.117693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.735 qpair failed and we were unable to recover it. 00:27:17.735 [2024-11-20 07:23:22.117802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.735 [2024-11-20 07:23:22.117833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.735 qpair failed and we were unable to recover it. 
00:27:17.735 [2024-11-20 07:23:22.118008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.735 [2024-11-20 07:23:22.118043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.735 qpair failed and we were unable to recover it. 00:27:17.735 [2024-11-20 07:23:22.118236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.735 [2024-11-20 07:23:22.118267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.735 qpair failed and we were unable to recover it. 00:27:17.735 [2024-11-20 07:23:22.118412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.735 [2024-11-20 07:23:22.118444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.735 qpair failed and we were unable to recover it. 00:27:17.735 [2024-11-20 07:23:22.118644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.736 [2024-11-20 07:23:22.118675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.736 qpair failed and we were unable to recover it. 00:27:17.736 [2024-11-20 07:23:22.118780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.736 [2024-11-20 07:23:22.118811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.736 qpair failed and we were unable to recover it. 
00:27:17.736 [2024-11-20 07:23:22.118993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.736 [2024-11-20 07:23:22.119024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.736 qpair failed and we were unable to recover it. 00:27:17.736 [2024-11-20 07:23:22.119216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.736 [2024-11-20 07:23:22.119248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.736 qpair failed and we were unable to recover it. 00:27:17.736 [2024-11-20 07:23:22.119365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.736 [2024-11-20 07:23:22.119397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.736 qpair failed and we were unable to recover it. 00:27:17.736 [2024-11-20 07:23:22.119577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.736 [2024-11-20 07:23:22.119618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.736 qpair failed and we were unable to recover it. 00:27:17.736 [2024-11-20 07:23:22.119736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.736 [2024-11-20 07:23:22.119767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.736 qpair failed and we were unable to recover it. 
00:27:17.736 [2024-11-20 07:23:22.119881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.736 [2024-11-20 07:23:22.119912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.736 qpair failed and we were unable to recover it. 00:27:17.736 [2024-11-20 07:23:22.120056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.736 [2024-11-20 07:23:22.120089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.736 qpair failed and we were unable to recover it. 00:27:17.736 [2024-11-20 07:23:22.120222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.736 [2024-11-20 07:23:22.120254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.736 qpair failed and we were unable to recover it. 00:27:17.736 [2024-11-20 07:23:22.120429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.736 [2024-11-20 07:23:22.120460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.736 qpair failed and we were unable to recover it. 00:27:17.736 [2024-11-20 07:23:22.120583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.736 [2024-11-20 07:23:22.120614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.736 qpair failed and we were unable to recover it. 
00:27:17.736 [2024-11-20 07:23:22.120799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.736 [2024-11-20 07:23:22.120831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.736 qpair failed and we were unable to recover it. 00:27:17.736 [2024-11-20 07:23:22.121028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.736 [2024-11-20 07:23:22.121061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.736 qpair failed and we were unable to recover it. 00:27:17.736 [2024-11-20 07:23:22.121244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.736 [2024-11-20 07:23:22.121276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.736 qpair failed and we were unable to recover it. 00:27:17.736 [2024-11-20 07:23:22.121455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.736 [2024-11-20 07:23:22.121486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.736 qpair failed and we were unable to recover it. 00:27:17.736 [2024-11-20 07:23:22.121614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.736 [2024-11-20 07:23:22.121645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:17.736 qpair failed and we were unable to recover it. 
00:27:17.736 [2024-11-20 07:23:22.121758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:27:17.736 [2024-11-20 07:23:22.121790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 
00:27:17.736 qpair failed and we were unable to recover it. 
00:27:17.736 [message sequence repeated 14 more times for tqpair=0x7f0e44000b90, 07:23:22.121905 through 07:23:22.124157]
00:27:17.736 [2024-11-20 07:23:22.124315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:27:17.736 [2024-11-20 07:23:22.124387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 
00:27:17.736 qpair failed and we were unable to recover it. 
00:27:17.737 [message sequence repeated 24 more times for tqpair=0x101bba0, 07:23:22.124523 through 07:23:22.128848]
00:27:17.737 [2024-11-20 07:23:22.129044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:27:17.737 [2024-11-20 07:23:22.129089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 
00:27:17.737 qpair failed and we were unable to recover it. 
00:27:17.737 [message sequence repeated 3 more times for tqpair=0x101bba0, 07:23:22.129218 through 07:23:22.129689]
00:27:17.737 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 1351002 Killed "${NVMF_APP[@]}" "$@" 
00:27:17.737 [2024-11-20 07:23:22.129864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:27:17.737 [2024-11-20 07:23:22.129895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 
00:27:17.737 qpair failed and we were unable to recover it. 
00:27:17.737 [2024-11-20 07:23:22.130022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.737 [2024-11-20 07:23:22.130055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.737 qpair failed and we were unable to recover it. 00:27:17.737 [2024-11-20 07:23:22.130240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.737 [2024-11-20 07:23:22.130273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.737 qpair failed and we were unable to recover it. 00:27:17.737 [2024-11-20 07:23:22.130465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.737 [2024-11-20 07:23:22.130497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.737 07:23:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2 00:27:17.737 qpair failed and we were unable to recover it. 00:27:17.737 [2024-11-20 07:23:22.130681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.737 [2024-11-20 07:23:22.130712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.737 qpair failed and we were unable to recover it. 00:27:17.737 [2024-11-20 07:23:22.130821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.737 [2024-11-20 07:23:22.130854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.737 qpair failed and we were unable to recover it. 
00:27:17.737 07:23:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:27:17.737 [2024-11-20 07:23:22.130984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.737 [2024-11-20 07:23:22.131016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.737 qpair failed and we were unable to recover it. 00:27:17.737 [2024-11-20 07:23:22.131136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.737 [2024-11-20 07:23:22.131168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.737 qpair failed and we were unable to recover it. 00:27:17.738 07:23:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:17.738 [2024-11-20 07:23:22.131347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.738 [2024-11-20 07:23:22.131380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.738 qpair failed and we were unable to recover it. 00:27:17.738 [2024-11-20 07:23:22.131485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.738 07:23:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:17.738 [2024-11-20 07:23:22.131516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.738 qpair failed and we were unable to recover it. 
00:27:17.738 [2024-11-20 07:23:22.131699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.738 [2024-11-20 07:23:22.131731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.738 qpair failed and we were unable to recover it. 00:27:17.738 07:23:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:17.738 [2024-11-20 07:23:22.131907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.738 [2024-11-20 07:23:22.131938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.738 qpair failed and we were unable to recover it. 00:27:17.738 [2024-11-20 07:23:22.132048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.738 [2024-11-20 07:23:22.132081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.738 qpair failed and we were unable to recover it. 00:27:17.738 [2024-11-20 07:23:22.132194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.738 [2024-11-20 07:23:22.132225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.738 qpair failed and we were unable to recover it. 00:27:17.738 [2024-11-20 07:23:22.132412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.738 [2024-11-20 07:23:22.132444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.738 qpair failed and we were unable to recover it. 
00:27:17.738 [2024-11-20 07:23:22.132613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.738 [2024-11-20 07:23:22.132645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.738 qpair failed and we were unable to recover it. 00:27:17.738 [2024-11-20 07:23:22.132759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.738 [2024-11-20 07:23:22.132791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.738 qpair failed and we were unable to recover it. 00:27:17.738 [2024-11-20 07:23:22.132987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.738 [2024-11-20 07:23:22.133018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.738 qpair failed and we were unable to recover it. 00:27:17.738 [2024-11-20 07:23:22.133143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.738 [2024-11-20 07:23:22.133175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.738 qpair failed and we were unable to recover it. 00:27:17.738 [2024-11-20 07:23:22.133295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.738 [2024-11-20 07:23:22.133326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.738 qpair failed and we were unable to recover it. 
00:27:17.738 [2024-11-20 07:23:22.133512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.738 [2024-11-20 07:23:22.133543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.738 qpair failed and we were unable to recover it. 00:27:17.738 [2024-11-20 07:23:22.133720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.738 [2024-11-20 07:23:22.133751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.738 qpair failed and we were unable to recover it. 00:27:17.738 [2024-11-20 07:23:22.133872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.738 [2024-11-20 07:23:22.133904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.738 qpair failed and we were unable to recover it. 00:27:17.738 [2024-11-20 07:23:22.134033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.738 [2024-11-20 07:23:22.134065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.738 qpair failed and we were unable to recover it. 00:27:17.738 [2024-11-20 07:23:22.134187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.738 [2024-11-20 07:23:22.134218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.738 qpair failed and we were unable to recover it. 
00:27:17.738 [2024-11-20 07:23:22.134409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.738 [2024-11-20 07:23:22.134440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.738 qpair failed and we were unable to recover it. 00:27:17.738 [2024-11-20 07:23:22.134554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.738 [2024-11-20 07:23:22.134584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.738 qpair failed and we were unable to recover it. 00:27:17.738 [2024-11-20 07:23:22.134770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.738 [2024-11-20 07:23:22.134800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.738 qpair failed and we were unable to recover it. 00:27:17.738 [2024-11-20 07:23:22.134922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.738 [2024-11-20 07:23:22.134966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.738 qpair failed and we were unable to recover it. 00:27:17.738 [2024-11-20 07:23:22.135071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.738 [2024-11-20 07:23:22.135102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.738 qpair failed and we were unable to recover it. 
00:27:17.738 [2024-11-20 07:23:22.135300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.738 [2024-11-20 07:23:22.135331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.738 qpair failed and we were unable to recover it. 00:27:17.738 [2024-11-20 07:23:22.135506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.738 [2024-11-20 07:23:22.135537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.738 qpair failed and we were unable to recover it. 00:27:17.738 [2024-11-20 07:23:22.135649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.738 [2024-11-20 07:23:22.135678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.738 qpair failed and we were unable to recover it. 00:27:17.738 [2024-11-20 07:23:22.135801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.738 [2024-11-20 07:23:22.135832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.738 qpair failed and we were unable to recover it. 00:27:17.738 [2024-11-20 07:23:22.135969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.738 [2024-11-20 07:23:22.136002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.738 qpair failed and we were unable to recover it. 
00:27:17.738 [2024-11-20 07:23:22.136145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.738 [2024-11-20 07:23:22.136177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.738 qpair failed and we were unable to recover it. 00:27:17.738 [2024-11-20 07:23:22.136286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.738 [2024-11-20 07:23:22.136324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.738 qpair failed and we were unable to recover it. 00:27:17.738 [2024-11-20 07:23:22.136452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.738 [2024-11-20 07:23:22.136483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.738 qpair failed and we were unable to recover it. 00:27:17.738 [2024-11-20 07:23:22.136606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.738 [2024-11-20 07:23:22.136637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.738 qpair failed and we were unable to recover it. 00:27:17.738 [2024-11-20 07:23:22.136740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.738 [2024-11-20 07:23:22.136771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.738 qpair failed and we were unable to recover it. 
00:27:17.738 [2024-11-20 07:23:22.136924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.738 [2024-11-20 07:23:22.136965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.738 qpair failed and we were unable to recover it. 00:27:17.738 [2024-11-20 07:23:22.137148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.738 [2024-11-20 07:23:22.137179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.738 qpair failed and we were unable to recover it. 00:27:17.738 [2024-11-20 07:23:22.137372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.738 [2024-11-20 07:23:22.137403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.738 qpair failed and we were unable to recover it. 00:27:17.738 [2024-11-20 07:23:22.137550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.738 [2024-11-20 07:23:22.137580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.738 qpair failed and we were unable to recover it. 00:27:17.739 [2024-11-20 07:23:22.137706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.739 [2024-11-20 07:23:22.137738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.739 qpair failed and we were unable to recover it. 
00:27:17.739 [2024-11-20 07:23:22.137846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.739 [2024-11-20 07:23:22.137878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.739 qpair failed and we were unable to recover it. 00:27:17.739 [2024-11-20 07:23:22.138060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.739 [2024-11-20 07:23:22.138095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.739 qpair failed and we were unable to recover it. 00:27:17.739 [2024-11-20 07:23:22.138213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.739 [2024-11-20 07:23:22.138245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.739 qpair failed and we were unable to recover it. 00:27:17.739 [2024-11-20 07:23:22.138365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.739 [2024-11-20 07:23:22.138397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.739 qpair failed and we were unable to recover it. 00:27:17.739 [2024-11-20 07:23:22.138569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.739 [2024-11-20 07:23:22.138601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.739 qpair failed and we were unable to recover it. 
00:27:17.739 07:23:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=1351745 00:27:17.739 [2024-11-20 07:23:22.138821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.739 [2024-11-20 07:23:22.138853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.739 qpair failed and we were unable to recover it. 00:27:17.739 [2024-11-20 07:23:22.138966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.739 [2024-11-20 07:23:22.138999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.739 qpair failed and we were unable to recover it. 00:27:17.739 07:23:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 1351745 00:27:17.739 07:23:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:27:17.739 [2024-11-20 07:23:22.139173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.739 [2024-11-20 07:23:22.139206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.739 qpair failed and we were unable to recover it. 00:27:17.739 [2024-11-20 07:23:22.139396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.739 [2024-11-20 07:23:22.139428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.739 qpair failed and we were unable to recover it. 
00:27:17.739 07:23:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # '[' -z 1351745 ']' 00:27:17.739 [2024-11-20 07:23:22.139619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.739 [2024-11-20 07:23:22.139650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.739 qpair failed and we were unable to recover it. 00:27:17.739 07:23:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:17.739 [2024-11-20 07:23:22.139841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.739 [2024-11-20 07:23:22.139874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.739 qpair failed and we were unable to recover it. 00:27:17.739 [2024-11-20 07:23:22.140007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.739 [2024-11-20 07:23:22.140041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.739 07:23:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # local max_retries=100 00:27:17.739 qpair failed and we were unable to recover it. 00:27:17.739 [2024-11-20 07:23:22.140227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.739 [2024-11-20 07:23:22.140258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.739 qpair failed and we were unable to recover it. 
00:27:17.739 07:23:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:27:17.739 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:27:17.739 [2024-11-20 07:23:22.140387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.739 [2024-11-20 07:23:22.140420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:17.739 qpair failed and we were unable to recover it.
00:27:17.739 [2024-11-20 07:23:22.140615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.739 07:23:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # xtrace_disable
00:27:17.739 [2024-11-20 07:23:22.140652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:17.739 qpair failed and we were unable to recover it.
00:27:17.739 [2024-11-20 07:23:22.140831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.739 [2024-11-20 07:23:22.140863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:17.739 qpair failed and we were unable to recover it.
00:27:17.739 07:23:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:17.739 [2024-11-20 07:23:22.141041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.739 [2024-11-20 07:23:22.141073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:17.739 qpair failed and we were unable to recover it.
00:27:17.739 [2024-11-20 07:23:22.141197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.739 [2024-11-20 07:23:22.141228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:17.739 qpair failed and we were unable to recover it.
00:27:17.739 [2024-11-20 07:23:22.141365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.739 [2024-11-20 07:23:22.141397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:17.739 qpair failed and we were unable to recover it.
00:27:17.739 [2024-11-20 07:23:22.141592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.739 [2024-11-20 07:23:22.141623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:17.739 qpair failed and we were unable to recover it.
00:27:17.739 [2024-11-20 07:23:22.141745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.739 [2024-11-20 07:23:22.141776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:17.739 qpair failed and we were unable to recover it.
00:27:17.739 [2024-11-20 07:23:22.141903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.739 [2024-11-20 07:23:22.141935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:17.739 qpair failed and we were unable to recover it.
00:27:17.739 [2024-11-20 07:23:22.142073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.739 [2024-11-20 07:23:22.142105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:17.739 qpair failed and we were unable to recover it.
00:27:17.739 [2024-11-20 07:23:22.142356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.739 [2024-11-20 07:23:22.142388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:17.739 qpair failed and we were unable to recover it.
00:27:17.739 [2024-11-20 07:23:22.142500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.739 [2024-11-20 07:23:22.142531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:17.739 qpair failed and we were unable to recover it.
00:27:17.739 [2024-11-20 07:23:22.142727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.739 [2024-11-20 07:23:22.142766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:17.739 qpair failed and we were unable to recover it.
00:27:17.740 [2024-11-20 07:23:22.142885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.740 [2024-11-20 07:23:22.142916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:17.740 qpair failed and we were unable to recover it.
00:27:17.740 [2024-11-20 07:23:22.143100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.740 [2024-11-20 07:23:22.143172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420
00:27:17.740 qpair failed and we were unable to recover it.
00:27:17.740 [2024-11-20 07:23:22.143417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.740 [2024-11-20 07:23:22.143454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420
00:27:17.740 qpair failed and we were unable to recover it.
00:27:17.740 [2024-11-20 07:23:22.143571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.740 [2024-11-20 07:23:22.143603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420
00:27:17.740 qpair failed and we were unable to recover it.
00:27:17.740 [2024-11-20 07:23:22.143789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.740 [2024-11-20 07:23:22.143821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420
00:27:17.740 qpair failed and we were unable to recover it.
00:27:17.740 [2024-11-20 07:23:22.143935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.740 [2024-11-20 07:23:22.143985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420
00:27:17.740 qpair failed and we were unable to recover it.
00:27:17.740 [2024-11-20 07:23:22.144182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.740 [2024-11-20 07:23:22.144214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420
00:27:17.740 qpair failed and we were unable to recover it.
00:27:17.740 [2024-11-20 07:23:22.144401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.740 [2024-11-20 07:23:22.144432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420
00:27:17.740 qpair failed and we were unable to recover it.
00:27:17.740 [2024-11-20 07:23:22.144621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.740 [2024-11-20 07:23:22.144652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420
00:27:17.740 qpair failed and we were unable to recover it.
00:27:17.740 [2024-11-20 07:23:22.144760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.740 [2024-11-20 07:23:22.144792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420
00:27:17.740 qpair failed and we were unable to recover it.
00:27:17.740 [2024-11-20 07:23:22.144927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.740 [2024-11-20 07:23:22.144970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420
00:27:17.740 qpair failed and we were unable to recover it.
00:27:17.740 [2024-11-20 07:23:22.145085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.740 [2024-11-20 07:23:22.145117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420
00:27:17.740 qpair failed and we were unable to recover it.
00:27:17.740 [2024-11-20 07:23:22.145321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.740 [2024-11-20 07:23:22.145353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420
00:27:17.740 qpair failed and we were unable to recover it.
00:27:17.740 [2024-11-20 07:23:22.145470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.740 [2024-11-20 07:23:22.145501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420
00:27:17.740 qpair failed and we were unable to recover it.
00:27:17.740 [2024-11-20 07:23:22.145622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.740 [2024-11-20 07:23:22.145662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420
00:27:17.740 qpair failed and we were unable to recover it.
00:27:17.740 [2024-11-20 07:23:22.145778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.740 [2024-11-20 07:23:22.145810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420
00:27:17.740 qpair failed and we were unable to recover it.
00:27:17.740 [2024-11-20 07:23:22.145941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.740 [2024-11-20 07:23:22.145989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420
00:27:17.740 qpair failed and we were unable to recover it.
00:27:17.740 [2024-11-20 07:23:22.146122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.740 [2024-11-20 07:23:22.146153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420
00:27:17.740 qpair failed and we were unable to recover it.
00:27:17.740 [2024-11-20 07:23:22.146273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.740 [2024-11-20 07:23:22.146304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420
00:27:17.740 qpair failed and we were unable to recover it.
00:27:17.740 [2024-11-20 07:23:22.146411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.740 [2024-11-20 07:23:22.146443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420
00:27:17.740 qpair failed and we were unable to recover it.
00:27:17.740 [2024-11-20 07:23:22.146564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.740 [2024-11-20 07:23:22.146594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420
00:27:17.740 qpair failed and we were unable to recover it.
00:27:17.740 [2024-11-20 07:23:22.146769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.740 [2024-11-20 07:23:22.146801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420
00:27:17.740 qpair failed and we were unable to recover it.
00:27:17.740 [2024-11-20 07:23:22.146981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.740 [2024-11-20 07:23:22.147015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420
00:27:17.740 qpair failed and we were unable to recover it.
00:27:17.740 [2024-11-20 07:23:22.147120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.740 [2024-11-20 07:23:22.147153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420
00:27:17.740 qpair failed and we were unable to recover it.
00:27:17.740 [2024-11-20 07:23:22.147346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.740 [2024-11-20 07:23:22.147377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420
00:27:17.740 qpair failed and we were unable to recover it.
00:27:17.740 [2024-11-20 07:23:22.147508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.740 [2024-11-20 07:23:22.147540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420
00:27:17.740 qpair failed and we were unable to recover it.
00:27:17.740 [2024-11-20 07:23:22.147787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.740 [2024-11-20 07:23:22.147819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420
00:27:17.740 qpair failed and we were unable to recover it.
00:27:17.740 [2024-11-20 07:23:22.147945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.740 [2024-11-20 07:23:22.147986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420
00:27:17.740 qpair failed and we were unable to recover it.
00:27:17.740 [2024-11-20 07:23:22.148177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.740 [2024-11-20 07:23:22.148209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420
00:27:17.740 qpair failed and we were unable to recover it.
00:27:17.740 [2024-11-20 07:23:22.148326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.740 [2024-11-20 07:23:22.148357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420
00:27:17.740 qpair failed and we were unable to recover it.
00:27:17.740 [2024-11-20 07:23:22.148534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.740 [2024-11-20 07:23:22.148566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420
00:27:17.740 qpair failed and we were unable to recover it.
00:27:17.740 [2024-11-20 07:23:22.148683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.740 [2024-11-20 07:23:22.148715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420
00:27:17.740 qpair failed and we were unable to recover it.
00:27:17.740 [2024-11-20 07:23:22.148941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.740 [2024-11-20 07:23:22.148982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420
00:27:17.740 qpair failed and we were unable to recover it.
00:27:17.740 [2024-11-20 07:23:22.149168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.740 [2024-11-20 07:23:22.149201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420
00:27:17.740 qpair failed and we were unable to recover it.
00:27:17.740 [2024-11-20 07:23:22.149337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.740 [2024-11-20 07:23:22.149369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420
00:27:17.740 qpair failed and we were unable to recover it.
00:27:17.740 [2024-11-20 07:23:22.149563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.740 [2024-11-20 07:23:22.149594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420
00:27:17.740 qpair failed and we were unable to recover it.
00:27:17.740 [2024-11-20 07:23:22.149707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.740 [2024-11-20 07:23:22.149739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420
00:27:17.741 qpair failed and we were unable to recover it.
00:27:17.741 [2024-11-20 07:23:22.149954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.741 [2024-11-20 07:23:22.149987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420
00:27:17.741 qpair failed and we were unable to recover it.
00:27:17.741 [2024-11-20 07:23:22.150111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.741 [2024-11-20 07:23:22.150144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420
00:27:17.741 qpair failed and we were unable to recover it.
00:27:17.741 [2024-11-20 07:23:22.150275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.741 [2024-11-20 07:23:22.150308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420
00:27:17.741 qpair failed and we were unable to recover it.
00:27:17.741 [2024-11-20 07:23:22.150424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.741 [2024-11-20 07:23:22.150456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420
00:27:17.741 qpair failed and we were unable to recover it.
00:27:17.741 [2024-11-20 07:23:22.150679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.741 [2024-11-20 07:23:22.150751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:17.741 qpair failed and we were unable to recover it.
00:27:17.741 [2024-11-20 07:23:22.150896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.741 [2024-11-20 07:23:22.150932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:17.741 qpair failed and we were unable to recover it.
00:27:17.741 [2024-11-20 07:23:22.151139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.741 [2024-11-20 07:23:22.151173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:17.741 qpair failed and we were unable to recover it.
00:27:17.741 [2024-11-20 07:23:22.151401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.741 [2024-11-20 07:23:22.151434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:17.741 qpair failed and we were unable to recover it.
00:27:17.741 [2024-11-20 07:23:22.151550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.741 [2024-11-20 07:23:22.151582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:17.741 qpair failed and we were unable to recover it.
00:27:17.741 [2024-11-20 07:23:22.151712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.741 [2024-11-20 07:23:22.151743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:17.741 qpair failed and we were unable to recover it.
00:27:17.741 [2024-11-20 07:23:22.151924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.741 [2024-11-20 07:23:22.151964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:17.741 qpair failed and we were unable to recover it.
00:27:17.741 [2024-11-20 07:23:22.152164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.741 [2024-11-20 07:23:22.152196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:17.741 qpair failed and we were unable to recover it.
00:27:17.741 [2024-11-20 07:23:22.152375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.741 [2024-11-20 07:23:22.152407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:17.741 qpair failed and we were unable to recover it.
00:27:17.741 [2024-11-20 07:23:22.152600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.741 [2024-11-20 07:23:22.152631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:17.741 qpair failed and we were unable to recover it.
00:27:17.741 [2024-11-20 07:23:22.152808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.741 [2024-11-20 07:23:22.152841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:17.741 qpair failed and we were unable to recover it.
00:27:17.741 [2024-11-20 07:23:22.152972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.741 [2024-11-20 07:23:22.153005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:17.741 qpair failed and we were unable to recover it.
00:27:17.741 [2024-11-20 07:23:22.153186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.741 [2024-11-20 07:23:22.153219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:17.741 qpair failed and we were unable to recover it.
00:27:17.741 [2024-11-20 07:23:22.153325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.741 [2024-11-20 07:23:22.153366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:17.741 qpair failed and we were unable to recover it.
00:27:17.741 [2024-11-20 07:23:22.153544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.741 [2024-11-20 07:23:22.153575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:17.741 qpair failed and we were unable to recover it.
00:27:17.741 [2024-11-20 07:23:22.153756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.741 [2024-11-20 07:23:22.153787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:17.741 qpair failed and we were unable to recover it.
00:27:17.741 [2024-11-20 07:23:22.153910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.741 [2024-11-20 07:23:22.153941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:17.741 qpair failed and we were unable to recover it.
00:27:17.741 [2024-11-20 07:23:22.154128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.741 [2024-11-20 07:23:22.154160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:17.741 qpair failed and we were unable to recover it.
00:27:17.741 [2024-11-20 07:23:22.154404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.741 [2024-11-20 07:23:22.154436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:17.741 qpair failed and we were unable to recover it.
00:27:17.741 [2024-11-20 07:23:22.154566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.741 [2024-11-20 07:23:22.154597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:17.741 qpair failed and we were unable to recover it.
00:27:17.741 [2024-11-20 07:23:22.154718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.741 [2024-11-20 07:23:22.154750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:17.741 qpair failed and we were unable to recover it.
00:27:17.741 [2024-11-20 07:23:22.154924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.741 [2024-11-20 07:23:22.154971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:17.741 qpair failed and we were unable to recover it.
00:27:17.741 [2024-11-20 07:23:22.155144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.741 [2024-11-20 07:23:22.155175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:17.741 qpair failed and we were unable to recover it.
00:27:17.741 [2024-11-20 07:23:22.155287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.741 [2024-11-20 07:23:22.155318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:17.741 qpair failed and we were unable to recover it.
00:27:17.741 [2024-11-20 07:23:22.155429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.741 [2024-11-20 07:23:22.155461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:17.741 qpair failed and we were unable to recover it.
00:27:17.741 [2024-11-20 07:23:22.155587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.741 [2024-11-20 07:23:22.155617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:17.741 qpair failed and we were unable to recover it.
00:27:17.741 [2024-11-20 07:23:22.155734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.741 [2024-11-20 07:23:22.155766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:17.741 qpair failed and we were unable to recover it.
00:27:17.741 [2024-11-20 07:23:22.155897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.741 [2024-11-20 07:23:22.155928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:17.741 qpair failed and we were unable to recover it.
00:27:17.741 [2024-11-20 07:23:22.156119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.741 [2024-11-20 07:23:22.156153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:17.741 qpair failed and we were unable to recover it.
00:27:17.741 [2024-11-20 07:23:22.156283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.741 [2024-11-20 07:23:22.156314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:17.741 qpair failed and we were unable to recover it.
00:27:17.741 [2024-11-20 07:23:22.156443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.741 [2024-11-20 07:23:22.156474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:17.741 qpair failed and we were unable to recover it.
00:27:17.741 [2024-11-20 07:23:22.156590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.741 [2024-11-20 07:23:22.156622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:17.741 qpair failed and we were unable to recover it.
00:27:17.742 [2024-11-20 07:23:22.156732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.742 [2024-11-20 07:23:22.156763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:17.742 qpair failed and we were unable to recover it.
00:27:17.742 [2024-11-20 07:23:22.157053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.742 [2024-11-20 07:23:22.157087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:17.742 qpair failed and we were unable to recover it.
00:27:17.742 [2024-11-20 07:23:22.157258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.742 [2024-11-20 07:23:22.157289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:17.742 qpair failed and we were unable to recover it.
00:27:17.742 [2024-11-20 07:23:22.157423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.742 [2024-11-20 07:23:22.157455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:17.742 qpair failed and we were unable to recover it.
00:27:17.742 [2024-11-20 07:23:22.157583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.742 [2024-11-20 07:23:22.157614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:17.742 qpair failed and we were unable to recover it.
00:27:17.742 [2024-11-20 07:23:22.157788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.742 [2024-11-20 07:23:22.157819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:17.742 qpair failed and we were unable to recover it.
00:27:17.742 [2024-11-20 07:23:22.158004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.742 [2024-11-20 07:23:22.158038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:17.742 qpair failed and we were unable to recover it.
00:27:17.742 [2024-11-20 07:23:22.158231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.742 [2024-11-20 07:23:22.158262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:17.742 qpair failed and we were unable to recover it.
00:27:17.742 [2024-11-20 07:23:22.158514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.742 [2024-11-20 07:23:22.158585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.742 qpair failed and we were unable to recover it. 00:27:17.742 [2024-11-20 07:23:22.158791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.742 [2024-11-20 07:23:22.158828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.742 qpair failed and we were unable to recover it. 00:27:17.742 [2024-11-20 07:23:22.159006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.742 [2024-11-20 07:23:22.159038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.742 qpair failed and we were unable to recover it. 00:27:17.742 [2024-11-20 07:23:22.159215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.742 [2024-11-20 07:23:22.159247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.742 qpair failed and we were unable to recover it. 00:27:17.742 [2024-11-20 07:23:22.159371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.742 [2024-11-20 07:23:22.159403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.742 qpair failed and we were unable to recover it. 
00:27:17.742 [2024-11-20 07:23:22.159578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.742 [2024-11-20 07:23:22.159608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.742 qpair failed and we were unable to recover it. 00:27:17.742 [2024-11-20 07:23:22.159782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.742 [2024-11-20 07:23:22.159813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.742 qpair failed and we were unable to recover it. 00:27:17.742 [2024-11-20 07:23:22.159934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.742 [2024-11-20 07:23:22.159977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.742 qpair failed and we were unable to recover it. 00:27:17.742 [2024-11-20 07:23:22.160083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.742 [2024-11-20 07:23:22.160114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.742 qpair failed and we were unable to recover it. 00:27:17.742 [2024-11-20 07:23:22.160247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.742 [2024-11-20 07:23:22.160278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.742 qpair failed and we were unable to recover it. 
00:27:17.742 [2024-11-20 07:23:22.160398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.742 [2024-11-20 07:23:22.160430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.742 qpair failed and we were unable to recover it. 00:27:17.742 [2024-11-20 07:23:22.160664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.742 [2024-11-20 07:23:22.160695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.742 qpair failed and we were unable to recover it. 00:27:17.742 [2024-11-20 07:23:22.160895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.742 [2024-11-20 07:23:22.160926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.742 qpair failed and we were unable to recover it. 00:27:17.742 [2024-11-20 07:23:22.161054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.742 [2024-11-20 07:23:22.161086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.742 qpair failed and we were unable to recover it. 00:27:17.742 [2024-11-20 07:23:22.161339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.742 [2024-11-20 07:23:22.161371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.742 qpair failed and we were unable to recover it. 
00:27:17.742 [2024-11-20 07:23:22.161540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.742 [2024-11-20 07:23:22.161572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.742 qpair failed and we were unable to recover it. 00:27:17.742 [2024-11-20 07:23:22.161702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.742 [2024-11-20 07:23:22.161734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.742 qpair failed and we were unable to recover it. 00:27:17.742 [2024-11-20 07:23:22.161973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.742 [2024-11-20 07:23:22.162007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.742 qpair failed and we were unable to recover it. 00:27:17.742 [2024-11-20 07:23:22.162193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.742 [2024-11-20 07:23:22.162225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.742 qpair failed and we were unable to recover it. 00:27:17.742 [2024-11-20 07:23:22.162341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.742 [2024-11-20 07:23:22.162372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.742 qpair failed and we were unable to recover it. 
00:27:17.742 [2024-11-20 07:23:22.162479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.742 [2024-11-20 07:23:22.162510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.742 qpair failed and we were unable to recover it. 00:27:17.742 [2024-11-20 07:23:22.162695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.742 [2024-11-20 07:23:22.162727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.742 qpair failed and we were unable to recover it. 00:27:17.742 [2024-11-20 07:23:22.162840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.742 [2024-11-20 07:23:22.162871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.742 qpair failed and we were unable to recover it. 00:27:17.742 [2024-11-20 07:23:22.162997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.742 [2024-11-20 07:23:22.163029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.742 qpair failed and we were unable to recover it. 00:27:17.742 [2024-11-20 07:23:22.163153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.742 [2024-11-20 07:23:22.163184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.742 qpair failed and we were unable to recover it. 
00:27:17.742 [2024-11-20 07:23:22.163376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.742 [2024-11-20 07:23:22.163406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.742 qpair failed and we were unable to recover it. 00:27:17.742 [2024-11-20 07:23:22.163607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.742 [2024-11-20 07:23:22.163638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.742 qpair failed and we were unable to recover it. 00:27:17.742 [2024-11-20 07:23:22.163814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.742 [2024-11-20 07:23:22.163851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.742 qpair failed and we were unable to recover it. 00:27:17.742 [2024-11-20 07:23:22.163975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.742 [2024-11-20 07:23:22.164007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.742 qpair failed and we were unable to recover it. 00:27:17.742 [2024-11-20 07:23:22.164184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.742 [2024-11-20 07:23:22.164215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.742 qpair failed and we were unable to recover it. 
00:27:17.742 [2024-11-20 07:23:22.164326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.742 [2024-11-20 07:23:22.164356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.742 qpair failed and we were unable to recover it. 00:27:17.742 [2024-11-20 07:23:22.164555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.743 [2024-11-20 07:23:22.164586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.743 qpair failed and we were unable to recover it. 00:27:17.743 [2024-11-20 07:23:22.164707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.743 [2024-11-20 07:23:22.164738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.743 qpair failed and we were unable to recover it. 00:27:17.743 [2024-11-20 07:23:22.164859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.743 [2024-11-20 07:23:22.164890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.743 qpair failed and we were unable to recover it. 00:27:17.743 [2024-11-20 07:23:22.165070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.743 [2024-11-20 07:23:22.165103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.743 qpair failed and we were unable to recover it. 
00:27:17.743 [2024-11-20 07:23:22.165231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.743 [2024-11-20 07:23:22.165263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.743 qpair failed and we were unable to recover it. 00:27:17.743 [2024-11-20 07:23:22.165438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.743 [2024-11-20 07:23:22.165469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.743 qpair failed and we were unable to recover it. 00:27:17.743 [2024-11-20 07:23:22.165584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.743 [2024-11-20 07:23:22.165615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.743 qpair failed and we were unable to recover it. 00:27:17.743 [2024-11-20 07:23:22.165802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.743 [2024-11-20 07:23:22.165833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.743 qpair failed and we were unable to recover it. 00:27:17.743 [2024-11-20 07:23:22.165981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.743 [2024-11-20 07:23:22.166035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.743 qpair failed and we were unable to recover it. 
00:27:17.743 [2024-11-20 07:23:22.166168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.743 [2024-11-20 07:23:22.166198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.743 qpair failed and we were unable to recover it. 00:27:17.743 [2024-11-20 07:23:22.166330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.743 [2024-11-20 07:23:22.166362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.743 qpair failed and we were unable to recover it. 00:27:17.743 [2024-11-20 07:23:22.166484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.743 [2024-11-20 07:23:22.166517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.743 qpair failed and we were unable to recover it. 00:27:17.743 [2024-11-20 07:23:22.166703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.743 [2024-11-20 07:23:22.166735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.743 qpair failed and we were unable to recover it. 00:27:17.743 [2024-11-20 07:23:22.166946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.743 [2024-11-20 07:23:22.166987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.743 qpair failed and we were unable to recover it. 
00:27:17.743 [2024-11-20 07:23:22.167095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.743 [2024-11-20 07:23:22.167127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.743 qpair failed and we were unable to recover it. 00:27:17.743 [2024-11-20 07:23:22.167399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.743 [2024-11-20 07:23:22.167431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.743 qpair failed and we were unable to recover it. 00:27:17.743 [2024-11-20 07:23:22.167621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.743 [2024-11-20 07:23:22.167654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.743 qpair failed and we were unable to recover it. 00:27:17.743 [2024-11-20 07:23:22.167853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.743 [2024-11-20 07:23:22.167885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.743 qpair failed and we were unable to recover it. 00:27:17.743 [2024-11-20 07:23:22.168119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.743 [2024-11-20 07:23:22.168153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.743 qpair failed and we were unable to recover it. 
00:27:17.743 [2024-11-20 07:23:22.168338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.743 [2024-11-20 07:23:22.168370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.743 qpair failed and we were unable to recover it. 00:27:17.743 [2024-11-20 07:23:22.168569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.743 [2024-11-20 07:23:22.168601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.743 qpair failed and we were unable to recover it. 00:27:17.743 [2024-11-20 07:23:22.168720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.743 [2024-11-20 07:23:22.168751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.743 qpair failed and we were unable to recover it. 00:27:17.743 [2024-11-20 07:23:22.168991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.743 [2024-11-20 07:23:22.169025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.743 qpair failed and we were unable to recover it. 00:27:17.743 [2024-11-20 07:23:22.169147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.743 [2024-11-20 07:23:22.169186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.743 qpair failed and we were unable to recover it. 
00:27:17.743 [2024-11-20 07:23:22.169291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.743 [2024-11-20 07:23:22.169322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.743 qpair failed and we were unable to recover it. 00:27:17.743 [2024-11-20 07:23:22.169497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.743 [2024-11-20 07:23:22.169528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.743 qpair failed and we were unable to recover it. 00:27:17.743 [2024-11-20 07:23:22.169733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.743 [2024-11-20 07:23:22.169765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.743 qpair failed and we were unable to recover it. 00:27:17.743 [2024-11-20 07:23:22.169898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.743 [2024-11-20 07:23:22.169929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.743 qpair failed and we were unable to recover it. 00:27:17.743 [2024-11-20 07:23:22.170130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.743 [2024-11-20 07:23:22.170161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.743 qpair failed and we were unable to recover it. 
00:27:17.743 [2024-11-20 07:23:22.170284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.743 [2024-11-20 07:23:22.170315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.743 qpair failed and we were unable to recover it. 00:27:17.743 [2024-11-20 07:23:22.170443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.743 [2024-11-20 07:23:22.170474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.743 qpair failed and we were unable to recover it. 00:27:17.743 [2024-11-20 07:23:22.170598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.743 [2024-11-20 07:23:22.170630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.743 qpair failed and we were unable to recover it. 00:27:17.743 [2024-11-20 07:23:22.170868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.743 [2024-11-20 07:23:22.170900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.743 qpair failed and we were unable to recover it. 00:27:17.743 [2024-11-20 07:23:22.171048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.743 [2024-11-20 07:23:22.171082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.743 qpair failed and we were unable to recover it. 
00:27:17.743 [2024-11-20 07:23:22.171197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.743 [2024-11-20 07:23:22.171228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.743 qpair failed and we were unable to recover it. 00:27:17.743 [2024-11-20 07:23:22.171362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.743 [2024-11-20 07:23:22.171393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.744 qpair failed and we were unable to recover it. 00:27:17.744 [2024-11-20 07:23:22.171500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.744 [2024-11-20 07:23:22.171532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.744 qpair failed and we were unable to recover it. 00:27:17.744 [2024-11-20 07:23:22.171656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.744 [2024-11-20 07:23:22.171686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.744 qpair failed and we were unable to recover it. 00:27:17.744 [2024-11-20 07:23:22.171817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.744 [2024-11-20 07:23:22.171848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.744 qpair failed and we were unable to recover it. 
00:27:17.744 [2024-11-20 07:23:22.172026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.744 [2024-11-20 07:23:22.172059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.744 qpair failed and we were unable to recover it. 00:27:17.744 [2024-11-20 07:23:22.172166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.744 [2024-11-20 07:23:22.172198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.744 qpair failed and we were unable to recover it. 00:27:17.744 [2024-11-20 07:23:22.172368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.744 [2024-11-20 07:23:22.172400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.744 qpair failed and we were unable to recover it. 00:27:17.744 [2024-11-20 07:23:22.172596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.744 [2024-11-20 07:23:22.172628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.744 qpair failed and we were unable to recover it. 00:27:17.744 [2024-11-20 07:23:22.172867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.744 [2024-11-20 07:23:22.172898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.744 qpair failed and we were unable to recover it. 
00:27:17.744 [2024-11-20 07:23:22.173094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.744 [2024-11-20 07:23:22.173127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.744 qpair failed and we were unable to recover it. 00:27:17.744 [2024-11-20 07:23:22.173310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.744 [2024-11-20 07:23:22.173341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.744 qpair failed and we were unable to recover it. 00:27:17.744 [2024-11-20 07:23:22.173475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.744 [2024-11-20 07:23:22.173506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.744 qpair failed and we were unable to recover it. 00:27:17.744 [2024-11-20 07:23:22.173692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.744 [2024-11-20 07:23:22.173724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.744 qpair failed and we were unable to recover it. 00:27:17.744 [2024-11-20 07:23:22.173907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.744 [2024-11-20 07:23:22.173939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.744 qpair failed and we were unable to recover it. 
00:27:17.744 [2024-11-20 07:23:22.174066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.744 [2024-11-20 07:23:22.174098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.744 qpair failed and we were unable to recover it. 00:27:17.744 [2024-11-20 07:23:22.174216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.744 [2024-11-20 07:23:22.174253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.744 qpair failed and we were unable to recover it. 00:27:17.744 [2024-11-20 07:23:22.174463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.744 [2024-11-20 07:23:22.174494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.744 qpair failed and we were unable to recover it. 00:27:17.744 [2024-11-20 07:23:22.174674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.744 [2024-11-20 07:23:22.174704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.744 qpair failed and we were unable to recover it. 00:27:17.744 [2024-11-20 07:23:22.174834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.744 [2024-11-20 07:23:22.174866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.744 qpair failed and we were unable to recover it. 
00:27:17.744 [2024-11-20 07:23:22.174988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.744 [2024-11-20 07:23:22.175021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:17.744 qpair failed and we were unable to recover it.
00:27:17.744 [2024-11-20 07:23:22.175153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.744 [2024-11-20 07:23:22.175184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:17.744 qpair failed and we were unable to recover it.
00:27:17.744 [2024-11-20 07:23:22.175365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.744 [2024-11-20 07:23:22.175397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:17.744 qpair failed and we were unable to recover it.
00:27:17.744 [2024-11-20 07:23:22.175571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.744 [2024-11-20 07:23:22.175602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:17.744 qpair failed and we were unable to recover it.
00:27:17.744 [2024-11-20 07:23:22.175784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.744 [2024-11-20 07:23:22.175816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:17.744 qpair failed and we were unable to recover it.
00:27:17.744 [2024-11-20 07:23:22.176009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.744 [2024-11-20 07:23:22.176042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:17.744 qpair failed and we were unable to recover it.
00:27:17.744 [2024-11-20 07:23:22.176167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.744 [2024-11-20 07:23:22.176199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:17.744 qpair failed and we were unable to recover it.
00:27:17.744 [2024-11-20 07:23:22.176305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.744 [2024-11-20 07:23:22.176336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:17.744 qpair failed and we were unable to recover it.
00:27:17.744 [2024-11-20 07:23:22.176446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.744 [2024-11-20 07:23:22.176477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:17.744 qpair failed and we were unable to recover it.
00:27:17.744 [2024-11-20 07:23:22.176651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.744 [2024-11-20 07:23:22.176682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:17.744 qpair failed and we were unable to recover it.
00:27:17.744 [2024-11-20 07:23:22.176845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.744 [2024-11-20 07:23:22.176917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420
00:27:17.744 qpair failed and we were unable to recover it.
00:27:17.744 [2024-11-20 07:23:22.177194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.744 [2024-11-20 07:23:22.177235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:17.744 qpair failed and we were unable to recover it.
00:27:17.744 [2024-11-20 07:23:22.177354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.744 [2024-11-20 07:23:22.177387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:17.744 qpair failed and we were unable to recover it.
00:27:17.744 [2024-11-20 07:23:22.177495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.744 [2024-11-20 07:23:22.177527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:17.744 qpair failed and we were unable to recover it.
00:27:17.744 [2024-11-20 07:23:22.177791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.744 [2024-11-20 07:23:22.177823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:17.744 qpair failed and we were unable to recover it.
00:27:17.744 [2024-11-20 07:23:22.177944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.744 [2024-11-20 07:23:22.178002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:17.744 qpair failed and we were unable to recover it.
00:27:17.744 [2024-11-20 07:23:22.178129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.744 [2024-11-20 07:23:22.178160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:17.744 qpair failed and we were unable to recover it.
00:27:17.744 [2024-11-20 07:23:22.178285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.744 [2024-11-20 07:23:22.178317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:17.744 qpair failed and we were unable to recover it.
00:27:17.744 [2024-11-20 07:23:22.178439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.745 [2024-11-20 07:23:22.178469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:17.745 qpair failed and we were unable to recover it.
00:27:17.745 [2024-11-20 07:23:22.178584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.745 [2024-11-20 07:23:22.178615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:17.745 qpair failed and we were unable to recover it.
00:27:17.745 [2024-11-20 07:23:22.178734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.745 [2024-11-20 07:23:22.178765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:17.745 qpair failed and we were unable to recover it.
00:27:17.745 [2024-11-20 07:23:22.178955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.745 [2024-11-20 07:23:22.178988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:17.745 qpair failed and we were unable to recover it.
00:27:17.745 [2024-11-20 07:23:22.179160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.745 [2024-11-20 07:23:22.179190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:17.745 qpair failed and we were unable to recover it.
00:27:17.745 [2024-11-20 07:23:22.179296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.745 [2024-11-20 07:23:22.179335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:17.745 qpair failed and we were unable to recover it.
00:27:17.745 [2024-11-20 07:23:22.179445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.745 [2024-11-20 07:23:22.179477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:17.745 qpair failed and we were unable to recover it.
00:27:17.745 [2024-11-20 07:23:22.179591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.745 [2024-11-20 07:23:22.179622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:17.745 qpair failed and we were unable to recover it.
00:27:17.745 [2024-11-20 07:23:22.179737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.745 [2024-11-20 07:23:22.179768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:17.745 qpair failed and we were unable to recover it.
00:27:17.745 [2024-11-20 07:23:22.179894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.745 [2024-11-20 07:23:22.179926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:17.745 qpair failed and we were unable to recover it.
00:27:17.745 [2024-11-20 07:23:22.180145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.745 [2024-11-20 07:23:22.180176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:17.745 qpair failed and we were unable to recover it.
00:27:17.745 [2024-11-20 07:23:22.180291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.745 [2024-11-20 07:23:22.180323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:17.745 qpair failed and we were unable to recover it.
00:27:17.745 [2024-11-20 07:23:22.180458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.745 [2024-11-20 07:23:22.180490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:17.745 qpair failed and we were unable to recover it.
00:27:17.745 [2024-11-20 07:23:22.180612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.745 [2024-11-20 07:23:22.180642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:17.745 qpair failed and we were unable to recover it.
00:27:17.745 [2024-11-20 07:23:22.180759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.745 [2024-11-20 07:23:22.180790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:17.745 qpair failed and we were unable to recover it.
00:27:17.745 [2024-11-20 07:23:22.180920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.745 [2024-11-20 07:23:22.180962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:17.745 qpair failed and we were unable to recover it.
00:27:17.745 [2024-11-20 07:23:22.181089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.745 [2024-11-20 07:23:22.181121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:17.745 qpair failed and we were unable to recover it.
00:27:17.745 [2024-11-20 07:23:22.181301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.745 [2024-11-20 07:23:22.181332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:17.745 qpair failed and we were unable to recover it.
00:27:17.745 [2024-11-20 07:23:22.181455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.745 [2024-11-20 07:23:22.181487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:17.745 qpair failed and we were unable to recover it.
00:27:17.745 [2024-11-20 07:23:22.181615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.745 [2024-11-20 07:23:22.181647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:17.745 qpair failed and we were unable to recover it.
00:27:17.745 [2024-11-20 07:23:22.181754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.745 [2024-11-20 07:23:22.181785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:17.745 qpair failed and we were unable to recover it.
00:27:17.745 [2024-11-20 07:23:22.181972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.745 [2024-11-20 07:23:22.182007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:17.745 qpair failed and we were unable to recover it.
00:27:17.745 [2024-11-20 07:23:22.182214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.745 [2024-11-20 07:23:22.182246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:17.745 qpair failed and we were unable to recover it.
00:27:17.745 [2024-11-20 07:23:22.182360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.745 [2024-11-20 07:23:22.182391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:17.745 qpair failed and we were unable to recover it.
00:27:17.745 [2024-11-20 07:23:22.182496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.745 [2024-11-20 07:23:22.182527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:17.745 qpair failed and we were unable to recover it.
00:27:17.745 [2024-11-20 07:23:22.182699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.745 [2024-11-20 07:23:22.182730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:17.745 qpair failed and we were unable to recover it.
00:27:17.745 [2024-11-20 07:23:22.182862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.745 [2024-11-20 07:23:22.182895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:17.745 qpair failed and we were unable to recover it.
00:27:17.745 [2024-11-20 07:23:22.183090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.745 [2024-11-20 07:23:22.183122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:17.745 qpair failed and we were unable to recover it.
00:27:17.745 [2024-11-20 07:23:22.183302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.745 [2024-11-20 07:23:22.183333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:17.745 qpair failed and we were unable to recover it.
00:27:17.745 [2024-11-20 07:23:22.183455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.745 [2024-11-20 07:23:22.183486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:17.745 qpair failed and we were unable to recover it.
00:27:17.745 [2024-11-20 07:23:22.183591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.745 [2024-11-20 07:23:22.183622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:17.745 qpair failed and we were unable to recover it.
00:27:17.745 [2024-11-20 07:23:22.183749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.745 [2024-11-20 07:23:22.183781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:17.745 qpair failed and we were unable to recover it.
00:27:17.745 [2024-11-20 07:23:22.184011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.745 [2024-11-20 07:23:22.184051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420
00:27:17.745 qpair failed and we were unable to recover it.
00:27:17.745 [2024-11-20 07:23:22.184162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.745 [2024-11-20 07:23:22.184196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420
00:27:17.745 qpair failed and we were unable to recover it.
00:27:17.745 [2024-11-20 07:23:22.184383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.745 [2024-11-20 07:23:22.184416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420
00:27:17.745 qpair failed and we were unable to recover it.
00:27:17.745 [2024-11-20 07:23:22.184536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.745 [2024-11-20 07:23:22.184568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420
00:27:17.745 qpair failed and we were unable to recover it.
00:27:17.745 [2024-11-20 07:23:22.184705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.745 [2024-11-20 07:23:22.184737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420
00:27:17.745 qpair failed and we were unable to recover it.
00:27:17.745 [2024-11-20 07:23:22.184876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.745 [2024-11-20 07:23:22.184909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420
00:27:17.745 qpair failed and we were unable to recover it.
00:27:17.745 [2024-11-20 07:23:22.185045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.745 [2024-11-20 07:23:22.185078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420
00:27:17.745 qpair failed and we were unable to recover it.
00:27:17.745 [2024-11-20 07:23:22.185199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.745 [2024-11-20 07:23:22.185232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420
00:27:17.746 qpair failed and we were unable to recover it.
00:27:17.746 [2024-11-20 07:23:22.185432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.746 [2024-11-20 07:23:22.185465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420
00:27:17.746 qpair failed and we were unable to recover it.
00:27:17.746 [2024-11-20 07:23:22.185647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.746 [2024-11-20 07:23:22.185679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420
00:27:17.746 qpair failed and we were unable to recover it.
00:27:17.746 [2024-11-20 07:23:22.185862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.746 [2024-11-20 07:23:22.185893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420
00:27:17.746 qpair failed and we were unable to recover it.
00:27:17.746 [2024-11-20 07:23:22.186055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.746 [2024-11-20 07:23:22.186090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420
00:27:17.746 qpair failed and we were unable to recover it.
00:27:17.746 [2024-11-20 07:23:22.186198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.746 [2024-11-20 07:23:22.186230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420
00:27:17.746 qpair failed and we were unable to recover it.
00:27:17.746 [2024-11-20 07:23:22.186359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.746 [2024-11-20 07:23:22.186398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420
00:27:17.746 qpair failed and we were unable to recover it.
00:27:17.746 [2024-11-20 07:23:22.186579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.746 [2024-11-20 07:23:22.186610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420
00:27:17.746 qpair failed and we were unable to recover it.
00:27:17.746 [2024-11-20 07:23:22.186719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.746 [2024-11-20 07:23:22.186740] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization...
00:27:17.746 [2024-11-20 07:23:22.186749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420
00:27:17.746 qpair failed and we were unable to recover it.
00:27:17.746 [2024-11-20 07:23:22.186780] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:27:17.746 [2024-11-20 07:23:22.186880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.746 [2024-11-20 07:23:22.186911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420
00:27:17.746 qpair failed and we were unable to recover it.
00:27:17.746 [2024-11-20 07:23:22.187100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.746 [2024-11-20 07:23:22.187132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420
00:27:17.746 qpair failed and we were unable to recover it.
00:27:17.746 [2024-11-20 07:23:22.187372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.746 [2024-11-20 07:23:22.187404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420
00:27:17.746 qpair failed and we were unable to recover it.
00:27:17.746 [2024-11-20 07:23:22.187587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.746 [2024-11-20 07:23:22.187619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420
00:27:17.746 qpair failed and we were unable to recover it.
00:27:17.746 [2024-11-20 07:23:22.187729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.746 [2024-11-20 07:23:22.187760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420
00:27:17.746 qpair failed and we were unable to recover it.
00:27:17.746 [2024-11-20 07:23:22.187884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.746 [2024-11-20 07:23:22.187915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420
00:27:17.746 qpair failed and we were unable to recover it.
00:27:17.746 [2024-11-20 07:23:22.188051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.746 [2024-11-20 07:23:22.188085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420
00:27:17.746 qpair failed and we were unable to recover it.
00:27:17.746 [2024-11-20 07:23:22.188330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.746 [2024-11-20 07:23:22.188362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420
00:27:17.746 qpair failed and we were unable to recover it.
00:27:17.746 [2024-11-20 07:23:22.188639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.746 [2024-11-20 07:23:22.188671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420
00:27:17.746 qpair failed and we were unable to recover it.
00:27:17.746 [2024-11-20 07:23:22.188795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.746 [2024-11-20 07:23:22.188827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420
00:27:17.746 qpair failed and we were unable to recover it.
00:27:17.746 [2024-11-20 07:23:22.189019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.746 [2024-11-20 07:23:22.189054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420
00:27:17.746 qpair failed and we were unable to recover it.
00:27:17.746 [2024-11-20 07:23:22.189171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.746 [2024-11-20 07:23:22.189203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420
00:27:17.746 qpair failed and we were unable to recover it.
00:27:17.746 [2024-11-20 07:23:22.189323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.746 [2024-11-20 07:23:22.189355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420
00:27:17.746 qpair failed and we were unable to recover it.
00:27:17.746 [2024-11-20 07:23:22.189474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.746 [2024-11-20 07:23:22.189508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420
00:27:17.746 qpair failed and we were unable to recover it.
00:27:17.746 [2024-11-20 07:23:22.189708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.746 [2024-11-20 07:23:22.189740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420
00:27:17.746 qpair failed and we were unable to recover it.
00:27:17.746 [2024-11-20 07:23:22.189985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.746 [2024-11-20 07:23:22.190018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420
00:27:17.746 qpair failed and we were unable to recover it.
00:27:17.746 [2024-11-20 07:23:22.190197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.746 [2024-11-20 07:23:22.190228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420
00:27:17.746 qpair failed and we were unable to recover it.
00:27:17.746 [2024-11-20 07:23:22.190402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.746 [2024-11-20 07:23:22.190433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420
00:27:17.746 qpair failed and we were unable to recover it.
00:27:17.746 [2024-11-20 07:23:22.190550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.746 [2024-11-20 07:23:22.190583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420
00:27:17.746 qpair failed and we were unable to recover it.
00:27:17.746 [2024-11-20 07:23:22.190688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.746 [2024-11-20 07:23:22.190720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420
00:27:17.746 qpair failed and we were unable to recover it.
00:27:17.746 [2024-11-20 07:23:22.190921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.746 [2024-11-20 07:23:22.190961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420
00:27:17.746 qpair failed and we were unable to recover it.
00:27:17.746 [2024-11-20 07:23:22.191157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.746 [2024-11-20 07:23:22.191190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420
00:27:17.746 qpair failed and we were unable to recover it.
00:27:17.746 [2024-11-20 07:23:22.191320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.746 [2024-11-20 07:23:22.191354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420
00:27:17.746 qpair failed and we were unable to recover it.
00:27:17.746 [2024-11-20 07:23:22.191544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.746 [2024-11-20 07:23:22.191577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420
00:27:17.746 qpair failed and we were unable to recover it.
00:27:17.746 [2024-11-20 07:23:22.191750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.746 [2024-11-20 07:23:22.191782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420
00:27:17.746 qpair failed and we were unable to recover it.
00:27:17.746 [2024-11-20 07:23:22.191905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.746 [2024-11-20 07:23:22.191938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420
00:27:17.746 qpair failed and we were unable to recover it.
00:27:17.746 [2024-11-20 07:23:22.192145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.746 [2024-11-20 07:23:22.192178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420
00:27:17.746 qpair failed and we were unable to recover it.
00:27:17.746 [2024-11-20 07:23:22.192317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.746 [2024-11-20 07:23:22.192349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420
00:27:17.746 qpair failed and we were unable to recover it.
00:27:17.746 [2024-11-20 07:23:22.192535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.746 [2024-11-20 07:23:22.192567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420
00:27:17.746 qpair failed and we were unable to recover it.
00:27:17.746 [2024-11-20 07:23:22.192741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.746 [2024-11-20 07:23:22.192773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420
00:27:17.747 qpair failed and we were unable to recover it.
00:27:17.747 [2024-11-20 07:23:22.192892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.747 [2024-11-20 07:23:22.192924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420
00:27:17.747 qpair failed and we were unable to recover it.
00:27:17.747 [2024-11-20 07:23:22.193159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.747 [2024-11-20 07:23:22.193192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420
00:27:17.747 qpair failed and we were unable to recover it.
00:27:17.747 [2024-11-20 07:23:22.193382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.747 [2024-11-20 07:23:22.193416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420
00:27:17.747 qpair failed and we were unable to recover it.
00:27:17.747 [2024-11-20 07:23:22.193709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.747 [2024-11-20 07:23:22.193742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420
00:27:17.747 qpair failed and we were unable to recover it.
00:27:17.747 [2024-11-20 07:23:22.193931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.747 [2024-11-20 07:23:22.193975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420
00:27:17.747 qpair failed and we were unable to recover it.
00:27:17.747 [2024-11-20 07:23:22.194222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.747 [2024-11-20 07:23:22.194254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420
00:27:17.747 qpair failed and we were unable to recover it.
00:27:17.747 [2024-11-20 07:23:22.194420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.747 [2024-11-20 07:23:22.194457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420
00:27:17.747 qpair failed and we were unable to recover it.
00:27:17.747 [2024-11-20 07:23:22.194652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.747 [2024-11-20 07:23:22.194685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420
00:27:17.747 qpair failed and we were unable to recover it.
00:27:17.747 [2024-11-20 07:23:22.194809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.747 [2024-11-20 07:23:22.194842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420
00:27:17.747 qpair failed and we were unable to recover it.
00:27:17.747 [2024-11-20 07:23:22.194986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.747 [2024-11-20 07:23:22.195020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420
00:27:17.747 qpair failed and we were unable to recover it.
00:27:17.747 [2024-11-20 07:23:22.195149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.747 [2024-11-20 07:23:22.195180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420
00:27:17.747 qpair failed and we were unable to recover it.
00:27:17.747 [2024-11-20 07:23:22.195374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.747 [2024-11-20 07:23:22.195406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420
00:27:17.747 qpair failed and we were unable to recover it.
00:27:17.747 [2024-11-20 07:23:22.195530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.747 [2024-11-20 07:23:22.195562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420
00:27:17.747 qpair failed and we were unable to recover it.
00:27:17.747 [2024-11-20 07:23:22.195737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.747 [2024-11-20 07:23:22.195769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420
00:27:17.747 qpair failed and we were unable to recover it.
00:27:17.747 [2024-11-20 07:23:22.195885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.747 [2024-11-20 07:23:22.195917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420
00:27:17.747 qpair failed and we were unable to recover it.
00:27:17.747 [2024-11-20 07:23:22.196185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.747 [2024-11-20 07:23:22.196219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420
00:27:17.747 qpair failed and we were unable to recover it.
00:27:17.747 [2024-11-20 07:23:22.196362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.747 [2024-11-20 07:23:22.196394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420
00:27:17.747 qpair failed and we were unable to recover it.
00:27:17.747 [2024-11-20 07:23:22.196585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.747 [2024-11-20 07:23:22.196617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420
00:27:17.747 qpair failed and we were unable to recover it.
00:27:17.747 [2024-11-20 07:23:22.196858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:17.747 [2024-11-20 07:23:22.196890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420
00:27:17.747 qpair failed and we were unable to recover it.
00:27:17.747 [2024-11-20 07:23:22.197035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.747 [2024-11-20 07:23:22.197068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.747 qpair failed and we were unable to recover it. 00:27:17.747 [2024-11-20 07:23:22.197251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.747 [2024-11-20 07:23:22.197283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.747 qpair failed and we were unable to recover it. 00:27:17.747 [2024-11-20 07:23:22.197525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.747 [2024-11-20 07:23:22.197557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.747 qpair failed and we were unable to recover it. 00:27:17.747 [2024-11-20 07:23:22.197768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.747 [2024-11-20 07:23:22.197801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.747 qpair failed and we were unable to recover it. 00:27:17.747 [2024-11-20 07:23:22.198067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.747 [2024-11-20 07:23:22.198100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.747 qpair failed and we were unable to recover it. 
00:27:17.747 [2024-11-20 07:23:22.198286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.747 [2024-11-20 07:23:22.198319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.747 qpair failed and we were unable to recover it. 00:27:17.747 [2024-11-20 07:23:22.198514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.747 [2024-11-20 07:23:22.198547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.747 qpair failed and we were unable to recover it. 00:27:17.747 [2024-11-20 07:23:22.198734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.747 [2024-11-20 07:23:22.198766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.747 qpair failed and we were unable to recover it. 00:27:17.747 [2024-11-20 07:23:22.199035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.747 [2024-11-20 07:23:22.199068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.747 qpair failed and we were unable to recover it. 00:27:17.747 [2024-11-20 07:23:22.199239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.747 [2024-11-20 07:23:22.199272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.747 qpair failed and we were unable to recover it. 
00:27:17.747 [2024-11-20 07:23:22.199460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.747 [2024-11-20 07:23:22.199493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.747 qpair failed and we were unable to recover it. 00:27:17.747 [2024-11-20 07:23:22.199791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.747 [2024-11-20 07:23:22.199824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.747 qpair failed and we were unable to recover it. 00:27:17.747 [2024-11-20 07:23:22.200110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.747 [2024-11-20 07:23:22.200145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.747 qpair failed and we were unable to recover it. 00:27:17.747 [2024-11-20 07:23:22.200290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.747 [2024-11-20 07:23:22.200323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.747 qpair failed and we were unable to recover it. 00:27:17.747 [2024-11-20 07:23:22.200522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.747 [2024-11-20 07:23:22.200556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.747 qpair failed and we were unable to recover it. 
00:27:17.747 [2024-11-20 07:23:22.200823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.747 [2024-11-20 07:23:22.200856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.747 qpair failed and we were unable to recover it. 00:27:17.747 [2024-11-20 07:23:22.201023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.747 [2024-11-20 07:23:22.201057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.747 qpair failed and we were unable to recover it. 00:27:17.747 [2024-11-20 07:23:22.201272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.747 [2024-11-20 07:23:22.201305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.747 qpair failed and we were unable to recover it. 00:27:17.747 [2024-11-20 07:23:22.201496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.747 [2024-11-20 07:23:22.201529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.747 qpair failed and we were unable to recover it. 00:27:17.747 [2024-11-20 07:23:22.201706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.747 [2024-11-20 07:23:22.201737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.747 qpair failed and we were unable to recover it. 
00:27:17.747 [2024-11-20 07:23:22.201859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.747 [2024-11-20 07:23:22.201891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.747 qpair failed and we were unable to recover it. 00:27:17.747 [2024-11-20 07:23:22.202097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.748 [2024-11-20 07:23:22.202129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.748 qpair failed and we were unable to recover it. 00:27:17.748 [2024-11-20 07:23:22.202275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.748 [2024-11-20 07:23:22.202307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.748 qpair failed and we were unable to recover it. 00:27:17.748 [2024-11-20 07:23:22.202431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.748 [2024-11-20 07:23:22.202462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.748 qpair failed and we were unable to recover it. 00:27:17.748 [2024-11-20 07:23:22.202633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.748 [2024-11-20 07:23:22.202664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.748 qpair failed and we were unable to recover it. 
00:27:17.748 [2024-11-20 07:23:22.202811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.748 [2024-11-20 07:23:22.202842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.748 qpair failed and we were unable to recover it. 00:27:17.748 [2024-11-20 07:23:22.202983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.748 [2024-11-20 07:23:22.203018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.748 qpair failed and we were unable to recover it. 00:27:17.748 [2024-11-20 07:23:22.203158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.748 [2024-11-20 07:23:22.203195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.748 qpair failed and we were unable to recover it. 00:27:17.748 [2024-11-20 07:23:22.203386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.748 [2024-11-20 07:23:22.203418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.748 qpair failed and we were unable to recover it. 00:27:17.748 [2024-11-20 07:23:22.203622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.748 [2024-11-20 07:23:22.203653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.748 qpair failed and we were unable to recover it. 
00:27:17.748 [2024-11-20 07:23:22.203966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.748 [2024-11-20 07:23:22.203999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.748 qpair failed and we were unable to recover it. 00:27:17.748 [2024-11-20 07:23:22.204206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.748 [2024-11-20 07:23:22.204238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.748 qpair failed and we were unable to recover it. 00:27:17.748 [2024-11-20 07:23:22.204433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.748 [2024-11-20 07:23:22.204465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.748 qpair failed and we were unable to recover it. 00:27:17.748 [2024-11-20 07:23:22.204611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.748 [2024-11-20 07:23:22.204642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.748 qpair failed and we were unable to recover it. 00:27:17.748 [2024-11-20 07:23:22.204915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.748 [2024-11-20 07:23:22.204974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.748 qpair failed and we were unable to recover it. 
00:27:17.748 [2024-11-20 07:23:22.205153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.748 [2024-11-20 07:23:22.205186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.748 qpair failed and we were unable to recover it. 00:27:17.748 [2024-11-20 07:23:22.205366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.748 [2024-11-20 07:23:22.205399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.748 qpair failed and we were unable to recover it. 00:27:17.748 [2024-11-20 07:23:22.205545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.748 [2024-11-20 07:23:22.205576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.748 qpair failed and we were unable to recover it. 00:27:17.748 [2024-11-20 07:23:22.205842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.748 [2024-11-20 07:23:22.205874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.748 qpair failed and we were unable to recover it. 00:27:17.748 [2024-11-20 07:23:22.206112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.748 [2024-11-20 07:23:22.206145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.748 qpair failed and we were unable to recover it. 
00:27:17.748 [2024-11-20 07:23:22.206335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.748 [2024-11-20 07:23:22.206367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.748 qpair failed and we were unable to recover it. 00:27:17.748 [2024-11-20 07:23:22.206492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.748 [2024-11-20 07:23:22.206523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.748 qpair failed and we were unable to recover it. 00:27:17.748 [2024-11-20 07:23:22.206715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.748 [2024-11-20 07:23:22.206747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.748 qpair failed and we were unable to recover it. 00:27:17.748 [2024-11-20 07:23:22.206885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.748 [2024-11-20 07:23:22.206917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.748 qpair failed and we were unable to recover it. 00:27:17.748 [2024-11-20 07:23:22.207184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.748 [2024-11-20 07:23:22.207254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.748 qpair failed and we were unable to recover it. 
00:27:17.748 [2024-11-20 07:23:22.207468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.748 [2024-11-20 07:23:22.207511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.748 qpair failed and we were unable to recover it. 00:27:17.748 [2024-11-20 07:23:22.207659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.748 [2024-11-20 07:23:22.207691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.748 qpair failed and we were unable to recover it. 00:27:17.748 [2024-11-20 07:23:22.207965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.748 [2024-11-20 07:23:22.207999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.748 qpair failed and we were unable to recover it. 00:27:17.748 [2024-11-20 07:23:22.208158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.748 [2024-11-20 07:23:22.208191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.748 qpair failed and we were unable to recover it. 00:27:17.748 [2024-11-20 07:23:22.208393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.748 [2024-11-20 07:23:22.208425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.748 qpair failed and we were unable to recover it. 
00:27:17.748 [2024-11-20 07:23:22.208713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.748 [2024-11-20 07:23:22.208745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.748 qpair failed and we were unable to recover it. 00:27:17.748 [2024-11-20 07:23:22.209034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.748 [2024-11-20 07:23:22.209067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.748 qpair failed and we were unable to recover it. 00:27:17.748 [2024-11-20 07:23:22.209200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.748 [2024-11-20 07:23:22.209232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.748 qpair failed and we were unable to recover it. 00:27:17.749 [2024-11-20 07:23:22.209424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.749 [2024-11-20 07:23:22.209456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:17.749 qpair failed and we were unable to recover it. 00:27:17.749 [2024-11-20 07:23:22.209719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.749 [2024-11-20 07:23:22.209758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.749 qpair failed and we were unable to recover it. 
00:27:17.749 [2024-11-20 07:23:22.210036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.749 [2024-11-20 07:23:22.210071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.749 qpair failed and we were unable to recover it. 00:27:17.749 [2024-11-20 07:23:22.210262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.749 [2024-11-20 07:23:22.210293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.749 qpair failed and we were unable to recover it. 00:27:17.749 [2024-11-20 07:23:22.210474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.749 [2024-11-20 07:23:22.210507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.749 qpair failed and we were unable to recover it. 00:27:17.749 [2024-11-20 07:23:22.210748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.749 [2024-11-20 07:23:22.210780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.749 qpair failed and we were unable to recover it. 00:27:17.749 [2024-11-20 07:23:22.210961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.749 [2024-11-20 07:23:22.210995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.749 qpair failed and we were unable to recover it. 
00:27:17.749 [2024-11-20 07:23:22.211142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.749 [2024-11-20 07:23:22.211174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.749 qpair failed and we were unable to recover it. 00:27:17.749 [2024-11-20 07:23:22.211300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.749 [2024-11-20 07:23:22.211332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.749 qpair failed and we were unable to recover it. 00:27:17.749 [2024-11-20 07:23:22.211487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.749 [2024-11-20 07:23:22.211521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.749 qpair failed and we were unable to recover it. 00:27:17.749 [2024-11-20 07:23:22.211832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.749 [2024-11-20 07:23:22.211879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.749 qpair failed and we were unable to recover it. 00:27:17.749 [2024-11-20 07:23:22.212128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.749 [2024-11-20 07:23:22.212174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:17.749 qpair failed and we were unable to recover it. 
00:27:17.749 [2024-11-20 07:23:22.212317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.749 [2024-11-20 07:23:22.212353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.749 qpair failed and we were unable to recover it. 00:27:17.749 [2024-11-20 07:23:22.212485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.749 [2024-11-20 07:23:22.212518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.749 qpair failed and we were unable to recover it. 00:27:17.749 [2024-11-20 07:23:22.212644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.749 [2024-11-20 07:23:22.212683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.749 qpair failed and we were unable to recover it. 00:27:17.749 [2024-11-20 07:23:22.212869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.749 [2024-11-20 07:23:22.212902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.749 qpair failed and we were unable to recover it. 00:27:17.749 [2024-11-20 07:23:22.213065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.749 [2024-11-20 07:23:22.213099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.749 qpair failed and we were unable to recover it. 
00:27:17.749 [2024-11-20 07:23:22.213342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.749 [2024-11-20 07:23:22.213376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.749 qpair failed and we were unable to recover it. 00:27:17.749 [2024-11-20 07:23:22.213571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.749 [2024-11-20 07:23:22.213603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.749 qpair failed and we were unable to recover it. 00:27:17.749 [2024-11-20 07:23:22.213879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.749 [2024-11-20 07:23:22.213913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.749 qpair failed and we were unable to recover it. 00:27:17.749 [2024-11-20 07:23:22.214117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.749 [2024-11-20 07:23:22.214150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.749 qpair failed and we were unable to recover it. 00:27:17.749 [2024-11-20 07:23:22.214324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.749 [2024-11-20 07:23:22.214356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.749 qpair failed and we were unable to recover it. 
00:27:17.749 [2024-11-20 07:23:22.214488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.749 [2024-11-20 07:23:22.214520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.749 qpair failed and we were unable to recover it. 00:27:17.749 [2024-11-20 07:23:22.214715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.749 [2024-11-20 07:23:22.214748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.749 qpair failed and we were unable to recover it. 00:27:17.749 [2024-11-20 07:23:22.214939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.749 [2024-11-20 07:23:22.214978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.749 qpair failed and we were unable to recover it. 00:27:17.749 [2024-11-20 07:23:22.215126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.749 [2024-11-20 07:23:22.215159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.749 qpair failed and we were unable to recover it. 00:27:17.749 [2024-11-20 07:23:22.215353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.749 [2024-11-20 07:23:22.215385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.750 qpair failed and we were unable to recover it. 
00:27:17.750 [2024-11-20 07:23:22.215711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.750 [2024-11-20 07:23:22.215743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.750 qpair failed and we were unable to recover it. 00:27:17.750 [2024-11-20 07:23:22.215973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.750 [2024-11-20 07:23:22.216007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.750 qpair failed and we were unable to recover it. 00:27:17.750 [2024-11-20 07:23:22.216205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.750 [2024-11-20 07:23:22.216236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.750 qpair failed and we were unable to recover it. 00:27:17.750 [2024-11-20 07:23:22.216379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.750 [2024-11-20 07:23:22.216411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.750 qpair failed and we were unable to recover it. 00:27:17.750 [2024-11-20 07:23:22.216637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.750 [2024-11-20 07:23:22.216669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.750 qpair failed and we were unable to recover it. 
00:27:17.750 [2024-11-20 07:23:22.216856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.750 [2024-11-20 07:23:22.216888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.750 qpair failed and we were unable to recover it. 00:27:17.750 [2024-11-20 07:23:22.217080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:17.750 [2024-11-20 07:23:22.217113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:17.750 qpair failed and we were unable to recover it. 00:27:18.024 [2024-11-20 07:23:22.217385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.024 [2024-11-20 07:23:22.217418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:18.024 qpair failed and we were unable to recover it. 00:27:18.024 [2024-11-20 07:23:22.217739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.024 [2024-11-20 07:23:22.217774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:18.024 qpair failed and we were unable to recover it. 00:27:18.024 [2024-11-20 07:23:22.218043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.024 [2024-11-20 07:23:22.218077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:18.024 qpair failed and we were unable to recover it. 
00:27:18.024 [2024-11-20 07:23:22.218354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.024 [2024-11-20 07:23:22.218385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:18.024 qpair failed and we were unable to recover it. 00:27:18.024 [2024-11-20 07:23:22.218510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.024 [2024-11-20 07:23:22.218542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:18.024 qpair failed and we were unable to recover it. 00:27:18.024 [2024-11-20 07:23:22.218874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.024 [2024-11-20 07:23:22.218906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:18.024 qpair failed and we were unable to recover it. 00:27:18.024 [2024-11-20 07:23:22.219136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.024 [2024-11-20 07:23:22.219170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:18.024 qpair failed and we were unable to recover it. 00:27:18.024 [2024-11-20 07:23:22.219350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.024 [2024-11-20 07:23:22.219383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:18.024 qpair failed and we were unable to recover it. 
00:27:18.024 [2024-11-20 07:23:22.219573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.024 [2024-11-20 07:23:22.219605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:18.024 qpair failed and we were unable to recover it. 00:27:18.024 [2024-11-20 07:23:22.219786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.024 [2024-11-20 07:23:22.219818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:18.024 qpair failed and we were unable to recover it. 00:27:18.024 [2024-11-20 07:23:22.220066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.024 [2024-11-20 07:23:22.220100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:18.024 qpair failed and we were unable to recover it. 00:27:18.024 [2024-11-20 07:23:22.220279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.024 [2024-11-20 07:23:22.220311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:18.024 qpair failed and we were unable to recover it. 00:27:18.024 [2024-11-20 07:23:22.220499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.024 [2024-11-20 07:23:22.220531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:18.024 qpair failed and we were unable to recover it. 
00:27:18.024 [2024-11-20 07:23:22.220716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.024 [2024-11-20 07:23:22.220748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:18.024 qpair failed and we were unable to recover it. 00:27:18.024 [2024-11-20 07:23:22.221012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.024 [2024-11-20 07:23:22.221044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:18.024 qpair failed and we were unable to recover it. 00:27:18.024 [2024-11-20 07:23:22.221230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.024 [2024-11-20 07:23:22.221261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:18.024 qpair failed and we were unable to recover it. 00:27:18.024 [2024-11-20 07:23:22.221476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.024 [2024-11-20 07:23:22.221508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:18.024 qpair failed and we were unable to recover it. 00:27:18.024 [2024-11-20 07:23:22.221782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.024 [2024-11-20 07:23:22.221815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:18.024 qpair failed and we were unable to recover it. 
00:27:18.024 [2024-11-20 07:23:22.221938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.024 [2024-11-20 07:23:22.221980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:18.024 qpair failed and we were unable to recover it. 00:27:18.024 [2024-11-20 07:23:22.222180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.024 [2024-11-20 07:23:22.222212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:18.024 qpair failed and we were unable to recover it. 00:27:18.024 [2024-11-20 07:23:22.222361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.025 [2024-11-20 07:23:22.222404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:18.025 qpair failed and we were unable to recover it. 00:27:18.025 [2024-11-20 07:23:22.222634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.025 [2024-11-20 07:23:22.222665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:18.025 qpair failed and we were unable to recover it. 00:27:18.025 [2024-11-20 07:23:22.222842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.025 [2024-11-20 07:23:22.222873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:18.025 qpair failed and we were unable to recover it. 
00:27:18.025 [2024-11-20 07:23:22.223134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.025 [2024-11-20 07:23:22.223167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:18.025 qpair failed and we were unable to recover it. 00:27:18.025 [2024-11-20 07:23:22.223385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.025 [2024-11-20 07:23:22.223417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:18.025 qpair failed and we were unable to recover it. 00:27:18.025 [2024-11-20 07:23:22.223544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.025 [2024-11-20 07:23:22.223576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:18.025 qpair failed and we were unable to recover it. 00:27:18.025 [2024-11-20 07:23:22.223845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.025 [2024-11-20 07:23:22.223878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:18.025 qpair failed and we were unable to recover it. 00:27:18.025 [2024-11-20 07:23:22.224094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.025 [2024-11-20 07:23:22.224128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:18.025 qpair failed and we were unable to recover it. 
00:27:18.025 [2024-11-20 07:23:22.224270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.025 [2024-11-20 07:23:22.224302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:18.025 qpair failed and we were unable to recover it. 00:27:18.025 [2024-11-20 07:23:22.224553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.025 [2024-11-20 07:23:22.224584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:18.025 qpair failed and we were unable to recover it. 00:27:18.025 [2024-11-20 07:23:22.224879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.025 [2024-11-20 07:23:22.224911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:18.025 qpair failed and we were unable to recover it. 00:27:18.025 [2024-11-20 07:23:22.225175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.025 [2024-11-20 07:23:22.225207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:18.025 qpair failed and we were unable to recover it. 00:27:18.025 [2024-11-20 07:23:22.225421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.025 [2024-11-20 07:23:22.225452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:18.025 qpair failed and we were unable to recover it. 
00:27:18.025 [2024-11-20 07:23:22.225696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.025 [2024-11-20 07:23:22.225730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:18.025 qpair failed and we were unable to recover it. 00:27:18.025 [2024-11-20 07:23:22.225978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.025 [2024-11-20 07:23:22.226011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:18.025 qpair failed and we were unable to recover it. 00:27:18.025 [2024-11-20 07:23:22.226200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.025 [2024-11-20 07:23:22.226232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:18.025 qpair failed and we were unable to recover it. 00:27:18.025 [2024-11-20 07:23:22.226491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.025 [2024-11-20 07:23:22.226522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:18.025 qpair failed and we were unable to recover it. 00:27:18.025 [2024-11-20 07:23:22.226755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.025 [2024-11-20 07:23:22.226787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:18.025 qpair failed and we were unable to recover it. 
00:27:18.025 [2024-11-20 07:23:22.227035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.025 [2024-11-20 07:23:22.227068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:18.025 qpair failed and we were unable to recover it. 00:27:18.025 [2024-11-20 07:23:22.227264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.025 [2024-11-20 07:23:22.227296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:18.025 qpair failed and we were unable to recover it. 00:27:18.025 [2024-11-20 07:23:22.227591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.025 [2024-11-20 07:23:22.227622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:18.025 qpair failed and we were unable to recover it. 00:27:18.025 [2024-11-20 07:23:22.227815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.025 [2024-11-20 07:23:22.227847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:18.025 qpair failed and we were unable to recover it. 00:27:18.025 [2024-11-20 07:23:22.228059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.025 [2024-11-20 07:23:22.228093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:18.025 qpair failed and we were unable to recover it. 
00:27:18.025 [2024-11-20 07:23:22.228305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.025 [2024-11-20 07:23:22.228337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:18.025 qpair failed and we were unable to recover it. 00:27:18.025 [2024-11-20 07:23:22.228606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.025 [2024-11-20 07:23:22.228638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:18.025 qpair failed and we were unable to recover it. 00:27:18.025 [2024-11-20 07:23:22.228812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.025 [2024-11-20 07:23:22.228843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:18.025 qpair failed and we were unable to recover it. 00:27:18.025 [2024-11-20 07:23:22.229085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.025 [2024-11-20 07:23:22.229118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:18.025 qpair failed and we were unable to recover it. 00:27:18.025 [2024-11-20 07:23:22.229325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.025 [2024-11-20 07:23:22.229378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:18.025 qpair failed and we were unable to recover it. 
00:27:18.025 [2024-11-20 07:23:22.229597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.025 [2024-11-20 07:23:22.229631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:18.025 qpair failed and we were unable to recover it. 00:27:18.025 [2024-11-20 07:23:22.229882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.025 [2024-11-20 07:23:22.229915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:18.025 qpair failed and we were unable to recover it. 00:27:18.025 [2024-11-20 07:23:22.230199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.025 [2024-11-20 07:23:22.230234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:18.025 qpair failed and we were unable to recover it. 00:27:18.025 [2024-11-20 07:23:22.230370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.026 [2024-11-20 07:23:22.230402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:18.026 qpair failed and we were unable to recover it. 00:27:18.026 [2024-11-20 07:23:22.230700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.026 [2024-11-20 07:23:22.230732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:18.026 qpair failed and we were unable to recover it. 
00:27:18.026 [2024-11-20 07:23:22.230973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.026 [2024-11-20 07:23:22.231007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:18.026 qpair failed and we were unable to recover it. 00:27:18.026 [2024-11-20 07:23:22.231128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.026 [2024-11-20 07:23:22.231160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:18.026 qpair failed and we were unable to recover it. 00:27:18.026 [2024-11-20 07:23:22.231424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.026 [2024-11-20 07:23:22.231457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:18.026 qpair failed and we were unable to recover it. 00:27:18.026 [2024-11-20 07:23:22.231724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.026 [2024-11-20 07:23:22.231756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:18.026 qpair failed and we were unable to recover it. 00:27:18.026 [2024-11-20 07:23:22.232029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.026 [2024-11-20 07:23:22.232063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:18.026 qpair failed and we were unable to recover it. 
00:27:18.026 [2024-11-20 07:23:22.232246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.026 [2024-11-20 07:23:22.232277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:18.026 qpair failed and we were unable to recover it. 00:27:18.026 [2024-11-20 07:23:22.232548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.026 [2024-11-20 07:23:22.232581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:18.026 qpair failed and we were unable to recover it. 00:27:18.026 [2024-11-20 07:23:22.232770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.026 [2024-11-20 07:23:22.232806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:18.026 qpair failed and we were unable to recover it. 00:27:18.026 [2024-11-20 07:23:22.233070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.026 [2024-11-20 07:23:22.233104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:18.026 qpair failed and we were unable to recover it. 00:27:18.026 [2024-11-20 07:23:22.233237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.026 [2024-11-20 07:23:22.233269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:18.026 qpair failed and we were unable to recover it. 
00:27:18.026 [2024-11-20 07:23:22.233473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.026 [2024-11-20 07:23:22.233504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:18.026 qpair failed and we were unable to recover it. 00:27:18.026 [2024-11-20 07:23:22.233679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.026 [2024-11-20 07:23:22.233711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:18.026 qpair failed and we were unable to recover it. 00:27:18.026 [2024-11-20 07:23:22.233960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.026 [2024-11-20 07:23:22.233993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:18.026 qpair failed and we were unable to recover it. 00:27:18.026 [2024-11-20 07:23:22.234179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.026 [2024-11-20 07:23:22.234211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:18.026 qpair failed and we were unable to recover it. 00:27:18.026 [2024-11-20 07:23:22.234394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.026 [2024-11-20 07:23:22.234426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:18.026 qpair failed and we were unable to recover it. 
00:27:18.026 [2024-11-20 07:23:22.234549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.026 [2024-11-20 07:23:22.234581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:18.026 qpair failed and we were unable to recover it. 00:27:18.026 [2024-11-20 07:23:22.234778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.026 [2024-11-20 07:23:22.234810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:18.026 qpair failed and we were unable to recover it. 00:27:18.026 [2024-11-20 07:23:22.235061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.026 [2024-11-20 07:23:22.235094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:18.026 qpair failed and we were unable to recover it. 00:27:18.026 [2024-11-20 07:23:22.235235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.026 [2024-11-20 07:23:22.235268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:18.026 qpair failed and we were unable to recover it. 00:27:18.026 [2024-11-20 07:23:22.235506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.026 [2024-11-20 07:23:22.235538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:18.026 qpair failed and we were unable to recover it. 
00:27:18.026 [2024-11-20 07:23:22.235789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.026 [2024-11-20 07:23:22.235821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:18.026 qpair failed and we were unable to recover it. 00:27:18.026 [2024-11-20 07:23:22.236072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.026 [2024-11-20 07:23:22.236106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:18.026 qpair failed and we were unable to recover it. 00:27:18.026 [2024-11-20 07:23:22.236311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.026 [2024-11-20 07:23:22.236343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:18.026 qpair failed and we were unable to recover it. 00:27:18.026 [2024-11-20 07:23:22.236490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.026 [2024-11-20 07:23:22.236521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:18.026 qpair failed and we were unable to recover it. 00:27:18.026 [2024-11-20 07:23:22.236653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.026 [2024-11-20 07:23:22.236685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:18.027 qpair failed and we were unable to recover it. 
00:27:18.027 [2024-11-20 07:23:22.236984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.027 [2024-11-20 07:23:22.237017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:18.027 qpair failed and we were unable to recover it. 00:27:18.027 [2024-11-20 07:23:22.237208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.027 [2024-11-20 07:23:22.237241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:18.027 qpair failed and we were unable to recover it. 00:27:18.027 [2024-11-20 07:23:22.237455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.027 [2024-11-20 07:23:22.237486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:18.027 qpair failed and we were unable to recover it. 00:27:18.027 [2024-11-20 07:23:22.237683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.027 [2024-11-20 07:23:22.237715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:18.027 qpair failed and we were unable to recover it. 00:27:18.027 [2024-11-20 07:23:22.237960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.027 [2024-11-20 07:23:22.237993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:18.027 qpair failed and we were unable to recover it. 
00:27:18.027 [2024-11-20 07:23:22.238132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.027 [2024-11-20 07:23:22.238163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:18.027 qpair failed and we were unable to recover it. 00:27:18.027 [2024-11-20 07:23:22.238353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.027 [2024-11-20 07:23:22.238386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:18.027 qpair failed and we were unable to recover it. 00:27:18.027 [2024-11-20 07:23:22.238565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.027 [2024-11-20 07:23:22.238597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:18.027 qpair failed and we were unable to recover it. 00:27:18.027 [2024-11-20 07:23:22.238868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.027 [2024-11-20 07:23:22.238900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:18.027 qpair failed and we were unable to recover it. 00:27:18.027 [2024-11-20 07:23:22.239075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.027 [2024-11-20 07:23:22.239127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:18.027 qpair failed and we were unable to recover it. 
00:27:18.027 [2024-11-20 07:23:22.239338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.027 [2024-11-20 07:23:22.239373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:18.027 qpair failed and we were unable to recover it. 00:27:18.027 [2024-11-20 07:23:22.239623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.027 [2024-11-20 07:23:22.239654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:18.027 qpair failed and we were unable to recover it. 00:27:18.027 [2024-11-20 07:23:22.239941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.027 [2024-11-20 07:23:22.239986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:18.027 qpair failed and we were unable to recover it. 00:27:18.027 [2024-11-20 07:23:22.240192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.027 [2024-11-20 07:23:22.240224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:18.027 qpair failed and we were unable to recover it. 00:27:18.027 [2024-11-20 07:23:22.240363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.027 [2024-11-20 07:23:22.240394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:18.027 qpair failed and we were unable to recover it. 
00:27:18.027 [2024-11-20 07:23:22.240577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.027 [2024-11-20 07:23:22.240608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:18.027 qpair failed and we were unable to recover it.
00:27:18.027 [2024-11-20 07:23:22.240878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.027 [2024-11-20 07:23:22.240910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:18.027 qpair failed and we were unable to recover it.
00:27:18.027 [2024-11-20 07:23:22.241197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.027 [2024-11-20 07:23:22.241230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:18.027 qpair failed and we were unable to recover it.
00:27:18.027 [2024-11-20 07:23:22.241416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.027 [2024-11-20 07:23:22.241448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:18.027 qpair failed and we were unable to recover it.
00:27:18.027 [2024-11-20 07:23:22.241752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.027 [2024-11-20 07:23:22.241783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:18.027 qpair failed and we were unable to recover it.
00:27:18.027 [2024-11-20 07:23:22.242003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.027 [2024-11-20 07:23:22.242036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:18.027 qpair failed and we were unable to recover it.
00:27:18.027 [2024-11-20 07:23:22.242166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.027 [2024-11-20 07:23:22.242198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:18.027 qpair failed and we were unable to recover it.
00:27:18.027 [2024-11-20 07:23:22.242403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.027 [2024-11-20 07:23:22.242435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:18.027 qpair failed and we were unable to recover it.
00:27:18.027 [2024-11-20 07:23:22.242630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.027 [2024-11-20 07:23:22.242662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:18.027 qpair failed and we were unable to recover it.
00:27:18.027 [2024-11-20 07:23:22.242923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.027 [2024-11-20 07:23:22.242965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:18.027 qpair failed and we were unable to recover it.
00:27:18.027 [2024-11-20 07:23:22.243158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.027 [2024-11-20 07:23:22.243188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:18.027 qpair failed and we were unable to recover it.
00:27:18.027 [2024-11-20 07:23:22.243383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.027 [2024-11-20 07:23:22.243415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:18.027 qpair failed and we were unable to recover it.
00:27:18.027 [2024-11-20 07:23:22.243616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.027 [2024-11-20 07:23:22.243649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:18.027 qpair failed and we were unable to recover it.
00:27:18.027 [2024-11-20 07:23:22.243914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.027 [2024-11-20 07:23:22.243945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:18.027 qpair failed and we were unable to recover it.
00:27:18.027 [2024-11-20 07:23:22.244086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.027 [2024-11-20 07:23:22.244118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:18.027 qpair failed and we were unable to recover it.
00:27:18.027 [2024-11-20 07:23:22.244430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.027 [2024-11-20 07:23:22.244463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:18.027 qpair failed and we were unable to recover it.
00:27:18.027 [2024-11-20 07:23:22.244726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.028 [2024-11-20 07:23:22.244757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:18.028 qpair failed and we were unable to recover it.
00:27:18.028 [2024-11-20 07:23:22.244962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.028 [2024-11-20 07:23:22.244995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:18.028 qpair failed and we were unable to recover it.
00:27:18.028 [2024-11-20 07:23:22.245152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.028 [2024-11-20 07:23:22.245185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:18.028 qpair failed and we were unable to recover it.
00:27:18.028 [2024-11-20 07:23:22.245398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.028 [2024-11-20 07:23:22.245430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:18.028 qpair failed and we were unable to recover it.
00:27:18.028 [2024-11-20 07:23:22.245637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.028 [2024-11-20 07:23:22.245668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:18.028 qpair failed and we were unable to recover it.
00:27:18.028 [2024-11-20 07:23:22.245932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.028 [2024-11-20 07:23:22.245987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:18.028 qpair failed and we were unable to recover it.
00:27:18.028 [2024-11-20 07:23:22.246182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.028 [2024-11-20 07:23:22.246213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:18.028 qpair failed and we were unable to recover it.
00:27:18.028 [2024-11-20 07:23:22.246402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.028 [2024-11-20 07:23:22.246434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:18.028 qpair failed and we were unable to recover it.
00:27:18.028 [2024-11-20 07:23:22.246750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.028 [2024-11-20 07:23:22.246782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:18.028 qpair failed and we were unable to recover it.
00:27:18.028 [2024-11-20 07:23:22.246992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.028 [2024-11-20 07:23:22.247025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:18.028 qpair failed and we were unable to recover it.
00:27:18.028 [2024-11-20 07:23:22.247165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.028 [2024-11-20 07:23:22.247197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:18.028 qpair failed and we were unable to recover it.
00:27:18.028 [2024-11-20 07:23:22.247327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.028 [2024-11-20 07:23:22.247359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:18.028 qpair failed and we were unable to recover it.
00:27:18.028 [2024-11-20 07:23:22.247632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.028 [2024-11-20 07:23:22.247664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:18.028 qpair failed and we were unable to recover it.
00:27:18.028 [2024-11-20 07:23:22.247853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.028 [2024-11-20 07:23:22.247885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:18.028 qpair failed and we were unable to recover it.
00:27:18.028 [2024-11-20 07:23:22.248188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.028 [2024-11-20 07:23:22.248220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:18.028 qpair failed and we were unable to recover it.
00:27:18.028 [2024-11-20 07:23:22.248474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.028 [2024-11-20 07:23:22.248506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:18.028 qpair failed and we were unable to recover it.
00:27:18.028 [2024-11-20 07:23:22.248690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.028 [2024-11-20 07:23:22.248721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:18.028 qpair failed and we were unable to recover it.
00:27:18.028 [2024-11-20 07:23:22.248991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.028 [2024-11-20 07:23:22.249024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:18.028 qpair failed and we were unable to recover it.
00:27:18.028 [2024-11-20 07:23:22.249194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.028 [2024-11-20 07:23:22.249225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:18.028 qpair failed and we were unable to recover it.
00:27:18.028 [2024-11-20 07:23:22.249415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.028 [2024-11-20 07:23:22.249447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:18.028 qpair failed and we were unable to recover it.
00:27:18.028 [2024-11-20 07:23:22.249649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.028 [2024-11-20 07:23:22.249681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:18.028 qpair failed and we were unable to recover it.
00:27:18.028 [2024-11-20 07:23:22.249871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.028 [2024-11-20 07:23:22.249903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:18.028 qpair failed and we were unable to recover it.
00:27:18.028 [2024-11-20 07:23:22.250181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.028 [2024-11-20 07:23:22.250215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:18.028 qpair failed and we were unable to recover it.
00:27:18.028 [2024-11-20 07:23:22.250407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.028 [2024-11-20 07:23:22.250439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:18.028 qpair failed and we were unable to recover it.
00:27:18.028 [2024-11-20 07:23:22.250722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.028 [2024-11-20 07:23:22.250754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:18.028 qpair failed and we were unable to recover it.
00:27:18.028 [2024-11-20 07:23:22.250890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.028 [2024-11-20 07:23:22.250922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:18.028 qpair failed and we were unable to recover it.
00:27:18.028 [2024-11-20 07:23:22.251127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.028 [2024-11-20 07:23:22.251159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:18.028 qpair failed and we were unable to recover it.
00:27:18.028 [2024-11-20 07:23:22.251414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.028 [2024-11-20 07:23:22.251446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:18.028 qpair failed and we were unable to recover it.
00:27:18.028 [2024-11-20 07:23:22.251570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.028 [2024-11-20 07:23:22.251601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:18.028 qpair failed and we were unable to recover it.
00:27:18.028 [2024-11-20 07:23:22.251874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.028 [2024-11-20 07:23:22.251905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:18.028 qpair failed and we were unable to recover it.
00:27:18.028 [2024-11-20 07:23:22.252142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.028 [2024-11-20 07:23:22.252175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:18.028 qpair failed and we were unable to recover it.
00:27:18.028 [2024-11-20 07:23:22.252413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.028 [2024-11-20 07:23:22.252445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:18.029 qpair failed and we were unable to recover it.
00:27:18.029 [2024-11-20 07:23:22.252630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.029 [2024-11-20 07:23:22.252667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:18.029 qpair failed and we were unable to recover it.
00:27:18.029 [2024-11-20 07:23:22.252790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.029 [2024-11-20 07:23:22.252822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:18.029 qpair failed and we were unable to recover it.
00:27:18.029 [2024-11-20 07:23:22.253061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.029 [2024-11-20 07:23:22.253094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:18.029 qpair failed and we were unable to recover it.
00:27:18.029 [2024-11-20 07:23:22.253232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.029 [2024-11-20 07:23:22.253264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:18.029 qpair failed and we were unable to recover it.
00:27:18.029 [2024-11-20 07:23:22.253461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.029 [2024-11-20 07:23:22.253492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:18.029 qpair failed and we were unable to recover it.
00:27:18.029 [2024-11-20 07:23:22.253614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.029 [2024-11-20 07:23:22.253645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:18.029 qpair failed and we were unable to recover it.
00:27:18.029 [2024-11-20 07:23:22.253921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.029 [2024-11-20 07:23:22.253962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:18.029 qpair failed and we were unable to recover it.
00:27:18.029 [2024-11-20 07:23:22.254150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.029 [2024-11-20 07:23:22.254181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:18.029 qpair failed and we were unable to recover it.
00:27:18.029 [2024-11-20 07:23:22.254300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.029 [2024-11-20 07:23:22.254331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:18.029 qpair failed and we were unable to recover it.
00:27:18.029 [2024-11-20 07:23:22.254478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.029 [2024-11-20 07:23:22.254509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:18.029 qpair failed and we were unable to recover it.
00:27:18.029 [2024-11-20 07:23:22.254816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.029 [2024-11-20 07:23:22.254846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:18.029 qpair failed and we were unable to recover it.
00:27:18.029 [2024-11-20 07:23:22.255034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.029 [2024-11-20 07:23:22.255066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:18.029 qpair failed and we were unable to recover it.
00:27:18.029 [2024-11-20 07:23:22.255311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.029 [2024-11-20 07:23:22.255344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:18.029 qpair failed and we were unable to recover it.
00:27:18.029 [2024-11-20 07:23:22.255530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.029 [2024-11-20 07:23:22.255560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:18.029 qpair failed and we were unable to recover it.
00:27:18.029 [2024-11-20 07:23:22.255789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.029 [2024-11-20 07:23:22.255820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:18.029 qpair failed and we were unable to recover it.
00:27:18.029 [2024-11-20 07:23:22.256017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.029 [2024-11-20 07:23:22.256050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:18.029 qpair failed and we were unable to recover it.
00:27:18.029 [2024-11-20 07:23:22.256239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.029 [2024-11-20 07:23:22.256271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:18.029 qpair failed and we were unable to recover it.
00:27:18.029 [2024-11-20 07:23:22.256485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.029 [2024-11-20 07:23:22.256517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:18.029 qpair failed and we were unable to recover it.
00:27:18.029 [2024-11-20 07:23:22.256779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.029 [2024-11-20 07:23:22.256811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:18.029 qpair failed and we were unable to recover it.
00:27:18.029 [2024-11-20 07:23:22.256942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.029 [2024-11-20 07:23:22.256986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:18.029 qpair failed and we were unable to recover it.
00:27:18.029 [2024-11-20 07:23:22.257231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.029 [2024-11-20 07:23:22.257263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:18.029 qpair failed and we were unable to recover it.
00:27:18.029 [2024-11-20 07:23:22.257403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.029 [2024-11-20 07:23:22.257435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:18.029 qpair failed and we were unable to recover it.
00:27:18.029 [2024-11-20 07:23:22.257671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.029 [2024-11-20 07:23:22.257702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:18.029 qpair failed and we were unable to recover it.
00:27:18.029 [2024-11-20 07:23:22.257969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.029 [2024-11-20 07:23:22.258003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:18.029 qpair failed and we were unable to recover it.
00:27:18.029 [2024-11-20 07:23:22.258137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.029 [2024-11-20 07:23:22.258168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:18.029 qpair failed and we were unable to recover it.
00:27:18.029 [2024-11-20 07:23:22.258364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.029 [2024-11-20 07:23:22.258397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:18.029 qpair failed and we were unable to recover it.
00:27:18.029 [2024-11-20 07:23:22.258618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.029 [2024-11-20 07:23:22.258650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:18.029 qpair failed and we were unable to recover it.
00:27:18.029 [2024-11-20 07:23:22.258847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.029 [2024-11-20 07:23:22.258879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:18.029 qpair failed and we were unable to recover it.
00:27:18.029 [2024-11-20 07:23:22.259103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.029 [2024-11-20 07:23:22.259136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:18.029 qpair failed and we were unable to recover it.
00:27:18.030 [2024-11-20 07:23:22.259274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.030 [2024-11-20 07:23:22.259307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:18.030 qpair failed and we were unable to recover it.
00:27:18.030 [2024-11-20 07:23:22.259478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.030 [2024-11-20 07:23:22.259510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:18.030 qpair failed and we were unable to recover it.
00:27:18.030 [2024-11-20 07:23:22.259725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.030 [2024-11-20 07:23:22.259757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:18.030 qpair failed and we were unable to recover it.
00:27:18.030 [2024-11-20 07:23:22.259927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.030 [2024-11-20 07:23:22.259966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:18.030 qpair failed and we were unable to recover it.
00:27:18.030 [2024-11-20 07:23:22.260104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.030 [2024-11-20 07:23:22.260134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:18.030 qpair failed and we were unable to recover it.
00:27:18.030 [2024-11-20 07:23:22.260271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.030 [2024-11-20 07:23:22.260301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:18.030 qpair failed and we were unable to recover it.
00:27:18.030 [2024-11-20 07:23:22.260441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.030 [2024-11-20 07:23:22.260473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:18.030 qpair failed and we were unable to recover it.
00:27:18.030 [2024-11-20 07:23:22.260603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.030 [2024-11-20 07:23:22.260634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:18.030 qpair failed and we were unable to recover it.
00:27:18.030 [2024-11-20 07:23:22.260769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.030 [2024-11-20 07:23:22.260800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:18.030 qpair failed and we were unable to recover it.
00:27:18.030 [2024-11-20 07:23:22.260987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.030 [2024-11-20 07:23:22.261022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:18.030 qpair failed and we were unable to recover it.
00:27:18.030 [2024-11-20 07:23:22.261240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.030 [2024-11-20 07:23:22.261272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:18.030 qpair failed and we were unable to recover it.
00:27:18.030 [2024-11-20 07:23:22.261405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.030 [2024-11-20 07:23:22.261436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:18.030 qpair failed and we were unable to recover it.
00:27:18.030 [2024-11-20 07:23:22.261759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.030 [2024-11-20 07:23:22.261797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:18.030 qpair failed and we were unable to recover it.
00:27:18.030 [2024-11-20 07:23:22.262012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.030 [2024-11-20 07:23:22.262046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:18.030 qpair failed and we were unable to recover it.
00:27:18.030 [2024-11-20 07:23:22.262229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.030 [2024-11-20 07:23:22.262262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:18.030 qpair failed and we were unable to recover it.
00:27:18.030 [2024-11-20 07:23:22.262458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.030 [2024-11-20 07:23:22.262491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:18.030 qpair failed and we were unable to recover it.
00:27:18.030 [2024-11-20 07:23:22.262720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.030 [2024-11-20 07:23:22.262752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:18.030 qpair failed and we were unable to recover it.
00:27:18.030 [2024-11-20 07:23:22.263070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.030 [2024-11-20 07:23:22.263104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:18.030 qpair failed and we were unable to recover it.
00:27:18.030 [2024-11-20 07:23:22.263295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.030 [2024-11-20 07:23:22.263327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:18.030 qpair failed and we were unable to recover it. 00:27:18.030 [2024-11-20 07:23:22.263519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.030 [2024-11-20 07:23:22.263551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:18.030 qpair failed and we were unable to recover it. 00:27:18.030 [2024-11-20 07:23:22.263762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.030 [2024-11-20 07:23:22.263795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:18.030 qpair failed and we were unable to recover it. 00:27:18.030 [2024-11-20 07:23:22.263985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.030 [2024-11-20 07:23:22.264020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:18.030 qpair failed and we were unable to recover it. 00:27:18.030 [2024-11-20 07:23:22.264209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.030 [2024-11-20 07:23:22.264241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:18.030 qpair failed and we were unable to recover it. 
00:27:18.030 [2024-11-20 07:23:22.264435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.030 [2024-11-20 07:23:22.264467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:18.030 qpair failed and we were unable to recover it. 00:27:18.030 [2024-11-20 07:23:22.264662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.030 [2024-11-20 07:23:22.264694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:18.030 qpair failed and we were unable to recover it. 00:27:18.030 [2024-11-20 07:23:22.264825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.030 [2024-11-20 07:23:22.264867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:18.030 qpair failed and we were unable to recover it. 00:27:18.030 [2024-11-20 07:23:22.265054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.030 [2024-11-20 07:23:22.265088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:18.030 qpair failed and we were unable to recover it. 00:27:18.030 [2024-11-20 07:23:22.265222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.030 [2024-11-20 07:23:22.265256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:18.030 qpair failed and we were unable to recover it. 
00:27:18.030 [2024-11-20 07:23:22.265384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.030 [2024-11-20 07:23:22.265416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:18.030 qpair failed and we were unable to recover it. 00:27:18.031 [2024-11-20 07:23:22.265706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.031 [2024-11-20 07:23:22.265738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:18.031 qpair failed and we were unable to recover it. 00:27:18.031 [2024-11-20 07:23:22.266007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.031 [2024-11-20 07:23:22.266041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:18.031 qpair failed and we were unable to recover it. 00:27:18.031 [2024-11-20 07:23:22.266235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.031 [2024-11-20 07:23:22.266266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:18.031 qpair failed and we were unable to recover it. 00:27:18.031 [2024-11-20 07:23:22.266462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.031 [2024-11-20 07:23:22.266493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:18.031 qpair failed and we were unable to recover it. 
00:27:18.031 [2024-11-20 07:23:22.266690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.031 [2024-11-20 07:23:22.266725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:18.031 qpair failed and we were unable to recover it. 00:27:18.031 [2024-11-20 07:23:22.266910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.031 [2024-11-20 07:23:22.266942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:18.031 qpair failed and we were unable to recover it. 00:27:18.031 [2024-11-20 07:23:22.267075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.031 [2024-11-20 07:23:22.267109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:18.031 qpair failed and we were unable to recover it. 00:27:18.031 [2024-11-20 07:23:22.267291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.031 [2024-11-20 07:23:22.267322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:18.031 qpair failed and we were unable to recover it. 00:27:18.031 [2024-11-20 07:23:22.267583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.031 [2024-11-20 07:23:22.267616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:18.031 qpair failed and we were unable to recover it. 
00:27:18.031 [2024-11-20 07:23:22.268683] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 
00:27:18.032 [2024-11-20 07:23:22.274362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.032 [2024-11-20 07:23:22.274425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:18.032 qpair failed and we were unable to recover it. 
00:27:18.034 [2024-11-20 07:23:22.289664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.034 [2024-11-20 07:23:22.289697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:18.034 qpair failed and we were unable to recover it. 00:27:18.034 [2024-11-20 07:23:22.289971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.034 [2024-11-20 07:23:22.290004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:18.034 qpair failed and we were unable to recover it. 00:27:18.034 [2024-11-20 07:23:22.290151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.034 [2024-11-20 07:23:22.290184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:18.034 qpair failed and we were unable to recover it. 00:27:18.034 [2024-11-20 07:23:22.290324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.034 [2024-11-20 07:23:22.290357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:18.034 qpair failed and we were unable to recover it. 00:27:18.034 [2024-11-20 07:23:22.290601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.034 [2024-11-20 07:23:22.290631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:18.034 qpair failed and we were unable to recover it. 
00:27:18.034 [2024-11-20 07:23:22.290866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.034 [2024-11-20 07:23:22.290898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:18.034 qpair failed and we were unable to recover it. 00:27:18.034 [2024-11-20 07:23:22.291082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.034 [2024-11-20 07:23:22.291115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:18.034 qpair failed and we were unable to recover it. 00:27:18.034 [2024-11-20 07:23:22.291297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.034 [2024-11-20 07:23:22.291329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:18.034 qpair failed and we were unable to recover it. 00:27:18.034 [2024-11-20 07:23:22.291474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.034 [2024-11-20 07:23:22.291506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:18.034 qpair failed and we were unable to recover it. 00:27:18.034 [2024-11-20 07:23:22.291729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.034 [2024-11-20 07:23:22.291763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:18.034 qpair failed and we were unable to recover it. 
00:27:18.034 [2024-11-20 07:23:22.291935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.034 [2024-11-20 07:23:22.291977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:18.034 qpair failed and we were unable to recover it. 00:27:18.034 [2024-11-20 07:23:22.292243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.034 [2024-11-20 07:23:22.292276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:18.034 qpair failed and we were unable to recover it. 00:27:18.034 [2024-11-20 07:23:22.292513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.034 [2024-11-20 07:23:22.292545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:18.034 qpair failed and we were unable to recover it. 00:27:18.034 [2024-11-20 07:23:22.292796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.034 [2024-11-20 07:23:22.292829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:18.034 qpair failed and we were unable to recover it. 00:27:18.034 [2024-11-20 07:23:22.293088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.034 [2024-11-20 07:23:22.293122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:18.034 qpair failed and we were unable to recover it. 
00:27:18.034 [2024-11-20 07:23:22.293364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.034 [2024-11-20 07:23:22.293397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:18.034 qpair failed and we were unable to recover it. 00:27:18.034 [2024-11-20 07:23:22.293691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.034 [2024-11-20 07:23:22.293723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:18.034 qpair failed and we were unable to recover it. 00:27:18.034 [2024-11-20 07:23:22.293988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.034 [2024-11-20 07:23:22.294023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:18.034 qpair failed and we were unable to recover it. 00:27:18.034 [2024-11-20 07:23:22.294211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.034 [2024-11-20 07:23:22.294243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:18.034 qpair failed and we were unable to recover it. 00:27:18.034 [2024-11-20 07:23:22.294376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.034 [2024-11-20 07:23:22.294409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:18.034 qpair failed and we were unable to recover it. 
00:27:18.034 [2024-11-20 07:23:22.294672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.034 [2024-11-20 07:23:22.294703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:18.034 qpair failed and we were unable to recover it. 00:27:18.034 [2024-11-20 07:23:22.294894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.034 [2024-11-20 07:23:22.294927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:18.034 qpair failed and we were unable to recover it. 00:27:18.034 [2024-11-20 07:23:22.295133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.034 [2024-11-20 07:23:22.295172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:18.034 qpair failed and we were unable to recover it. 00:27:18.034 [2024-11-20 07:23:22.295441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.034 [2024-11-20 07:23:22.295474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:18.034 qpair failed and we were unable to recover it. 00:27:18.034 [2024-11-20 07:23:22.295752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.034 [2024-11-20 07:23:22.295785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:18.034 qpair failed and we were unable to recover it. 
00:27:18.034 [2024-11-20 07:23:22.296044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.034 [2024-11-20 07:23:22.296078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:18.034 qpair failed and we were unable to recover it. 00:27:18.034 [2024-11-20 07:23:22.296223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.034 [2024-11-20 07:23:22.296255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:18.034 qpair failed and we were unable to recover it. 00:27:18.034 [2024-11-20 07:23:22.296402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.034 [2024-11-20 07:23:22.296434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:18.034 qpair failed and we were unable to recover it. 00:27:18.034 [2024-11-20 07:23:22.296698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.034 [2024-11-20 07:23:22.296730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:18.034 qpair failed and we were unable to recover it. 00:27:18.034 [2024-11-20 07:23:22.296920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.034 [2024-11-20 07:23:22.296971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:18.035 qpair failed and we were unable to recover it. 
00:27:18.035 [2024-11-20 07:23:22.297158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.035 [2024-11-20 07:23:22.297191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:18.035 qpair failed and we were unable to recover it. 00:27:18.035 [2024-11-20 07:23:22.297430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.035 [2024-11-20 07:23:22.297463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:18.035 qpair failed and we were unable to recover it. 00:27:18.035 [2024-11-20 07:23:22.297719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.035 [2024-11-20 07:23:22.297751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:18.035 qpair failed and we were unable to recover it. 00:27:18.035 [2024-11-20 07:23:22.297925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.035 [2024-11-20 07:23:22.297969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:18.035 qpair failed and we were unable to recover it. 00:27:18.035 [2024-11-20 07:23:22.298152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.035 [2024-11-20 07:23:22.298185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:18.035 qpair failed and we were unable to recover it. 
00:27:18.035 [2024-11-20 07:23:22.298358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.035 [2024-11-20 07:23:22.298390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:18.035 qpair failed and we were unable to recover it. 00:27:18.035 [2024-11-20 07:23:22.298583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.035 [2024-11-20 07:23:22.298614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:18.035 qpair failed and we were unable to recover it. 00:27:18.035 [2024-11-20 07:23:22.298854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.035 [2024-11-20 07:23:22.298887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:18.035 qpair failed and we were unable to recover it. 00:27:18.035 [2024-11-20 07:23:22.299096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.035 [2024-11-20 07:23:22.299129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:18.035 qpair failed and we were unable to recover it. 00:27:18.035 [2024-11-20 07:23:22.299425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.035 [2024-11-20 07:23:22.299457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:18.035 qpair failed and we were unable to recover it. 
00:27:18.035 [2024-11-20 07:23:22.299721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.035 [2024-11-20 07:23:22.299761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:18.035 qpair failed and we were unable to recover it. 00:27:18.035 [2024-11-20 07:23:22.300027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.035 [2024-11-20 07:23:22.300062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:18.035 qpair failed and we were unable to recover it. 00:27:18.035 [2024-11-20 07:23:22.300271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.035 [2024-11-20 07:23:22.300304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:18.035 qpair failed and we were unable to recover it. 00:27:18.035 [2024-11-20 07:23:22.300493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.035 [2024-11-20 07:23:22.300525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:18.035 qpair failed and we were unable to recover it. 00:27:18.035 [2024-11-20 07:23:22.300778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.035 [2024-11-20 07:23:22.300809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:18.035 qpair failed and we were unable to recover it. 
00:27:18.035 [2024-11-20 07:23:22.300941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.035 [2024-11-20 07:23:22.300998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:18.035 qpair failed and we were unable to recover it. 00:27:18.035 [2024-11-20 07:23:22.301195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.035 [2024-11-20 07:23:22.301227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:18.035 qpair failed and we were unable to recover it. 00:27:18.035 [2024-11-20 07:23:22.301467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.035 [2024-11-20 07:23:22.301499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:18.035 qpair failed and we were unable to recover it. 00:27:18.035 [2024-11-20 07:23:22.301812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.035 [2024-11-20 07:23:22.301858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:18.035 qpair failed and we were unable to recover it. 00:27:18.035 [2024-11-20 07:23:22.302144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.035 [2024-11-20 07:23:22.302180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:18.035 qpair failed and we were unable to recover it. 
00:27:18.035 [2024-11-20 07:23:22.302414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.035 [2024-11-20 07:23:22.302446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:18.035 qpair failed and we were unable to recover it. 00:27:18.035 [2024-11-20 07:23:22.302704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.035 [2024-11-20 07:23:22.302742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:18.035 qpair failed and we were unable to recover it. 00:27:18.035 [2024-11-20 07:23:22.302984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.035 [2024-11-20 07:23:22.303019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:18.035 qpair failed and we were unable to recover it. 00:27:18.035 [2024-11-20 07:23:22.303190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.035 [2024-11-20 07:23:22.303222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:18.035 qpair failed and we were unable to recover it. 00:27:18.035 [2024-11-20 07:23:22.303486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.035 [2024-11-20 07:23:22.303519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:18.035 qpair failed and we were unable to recover it. 
00:27:18.035 [2024-11-20 07:23:22.303727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.035 [2024-11-20 07:23:22.303759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:18.035 qpair failed and we were unable to recover it. 00:27:18.035 [2024-11-20 07:23:22.304045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.035 [2024-11-20 07:23:22.304078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:18.036 qpair failed and we were unable to recover it. 00:27:18.036 [2024-11-20 07:23:22.304320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.036 [2024-11-20 07:23:22.304353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:18.036 qpair failed and we were unable to recover it. 00:27:18.036 [2024-11-20 07:23:22.304601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.036 [2024-11-20 07:23:22.304634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:18.036 qpair failed and we were unable to recover it. 00:27:18.036 [2024-11-20 07:23:22.304824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.036 [2024-11-20 07:23:22.304855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:18.036 qpair failed and we were unable to recover it. 
00:27:18.036 [2024-11-20 07:23:22.305121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.036 [2024-11-20 07:23:22.305155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:18.036 qpair failed and we were unable to recover it. 00:27:18.036 [2024-11-20 07:23:22.305327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.036 [2024-11-20 07:23:22.305360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:18.036 qpair failed and we were unable to recover it. 00:27:18.036 [2024-11-20 07:23:22.305625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.036 [2024-11-20 07:23:22.305663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:18.036 qpair failed and we were unable to recover it. 00:27:18.036 [2024-11-20 07:23:22.305905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.036 [2024-11-20 07:23:22.305937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:18.036 qpair failed and we were unable to recover it. 00:27:18.036 [2024-11-20 07:23:22.306195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.036 [2024-11-20 07:23:22.306228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:18.036 qpair failed and we were unable to recover it. 
00:27:18.036 [2024-11-20 07:23:22.306349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.036 [2024-11-20 07:23:22.306381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:18.036 qpair failed and we were unable to recover it. 00:27:18.036 [2024-11-20 07:23:22.306554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.036 [2024-11-20 07:23:22.306586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:18.036 qpair failed and we were unable to recover it. 00:27:18.036 [2024-11-20 07:23:22.306811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.036 [2024-11-20 07:23:22.306843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:18.036 qpair failed and we were unable to recover it. 00:27:18.036 [2024-11-20 07:23:22.306976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.036 [2024-11-20 07:23:22.307009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:18.036 qpair failed and we were unable to recover it. 00:27:18.036 [2024-11-20 07:23:22.307274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.036 [2024-11-20 07:23:22.307305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:18.036 qpair failed and we were unable to recover it. 
00:27:18.036 [2024-11-20 07:23:22.307521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.036 [2024-11-20 07:23:22.307553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:18.036 qpair failed and we were unable to recover it. 00:27:18.036 [2024-11-20 07:23:22.307792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.036 [2024-11-20 07:23:22.307825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:18.036 qpair failed and we were unable to recover it. 00:27:18.036 [2024-11-20 07:23:22.308081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.036 [2024-11-20 07:23:22.308115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:18.036 qpair failed and we were unable to recover it. 00:27:18.036 [2024-11-20 07:23:22.308335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.036 [2024-11-20 07:23:22.308368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:18.036 qpair failed and we were unable to recover it. 00:27:18.036 [2024-11-20 07:23:22.308487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.036 [2024-11-20 07:23:22.308519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:18.036 qpair failed and we were unable to recover it. 
00:27:18.036 [2024-11-20 07:23:22.308729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.036 [2024-11-20 07:23:22.308765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:18.036 qpair failed and we were unable to recover it. 00:27:18.036 [2024-11-20 07:23:22.309040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.036 [2024-11-20 07:23:22.309078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:18.036 qpair failed and we were unable to recover it. 00:27:18.036 [2024-11-20 07:23:22.309333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.036 [2024-11-20 07:23:22.309366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:18.036 qpair failed and we were unable to recover it. 00:27:18.036 [2024-11-20 07:23:22.309604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.036 [2024-11-20 07:23:22.309637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:18.036 qpair failed and we were unable to recover it. 00:27:18.036 [2024-11-20 07:23:22.309759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.036 [2024-11-20 07:23:22.309792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:18.036 qpair failed and we were unable to recover it. 
00:27:18.036 [2024-11-20 07:23:22.310060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.036 [2024-11-20 07:23:22.310096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:18.036 qpair failed and we were unable to recover it. 00:27:18.036 [2024-11-20 07:23:22.310384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.036 [2024-11-20 07:23:22.310418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:18.036 qpair failed and we were unable to recover it. 00:27:18.036 [2024-11-20 07:23:22.310605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.036 [2024-11-20 07:23:22.310639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:18.036 qpair failed and we were unable to recover it. 00:27:18.036 [2024-11-20 07:23:22.310782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.036 [2024-11-20 07:23:22.310815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:18.036 qpair failed and we were unable to recover it. 00:27:18.036 [2024-11-20 07:23:22.311082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.036 [2024-11-20 07:23:22.311118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:18.036 qpair failed and we were unable to recover it. 
00:27:18.036 [2024-11-20 07:23:22.311248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.036 [2024-11-20 07:23:22.311280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:18.036 qpair failed and we were unable to recover it. 00:27:18.036 [2024-11-20 07:23:22.311409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.036 [2024-11-20 07:23:22.311441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:18.036 qpair failed and we were unable to recover it. 00:27:18.036 [2024-11-20 07:23:22.311476] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:18.037 [2024-11-20 07:23:22.311507] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:18.037 [2024-11-20 07:23:22.311514] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:18.037 [2024-11-20 07:23:22.311521] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:18.037 [2024-11-20 07:23:22.311526] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:18.037 [2024-11-20 07:23:22.311655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.037 [2024-11-20 07:23:22.311688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:18.037 qpair failed and we were unable to recover it. 
00:27:18.037 [2024-11-20 07:23:22.311930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.037 [2024-11-20 07:23:22.311973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:18.037 qpair failed and we were unable to recover it. 00:27:18.037 [2024-11-20 07:23:22.312153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.037 [2024-11-20 07:23:22.312186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:18.037 qpair failed and we were unable to recover it. 00:27:18.037 [2024-11-20 07:23:22.312393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.037 [2024-11-20 07:23:22.312425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:18.037 qpair failed and we were unable to recover it. 00:27:18.037 [2024-11-20 07:23:22.312686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.037 [2024-11-20 07:23:22.312717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:18.037 qpair failed and we were unable to recover it. 00:27:18.037 [2024-11-20 07:23:22.312996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.037 [2024-11-20 07:23:22.313031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:18.037 qpair failed and we were unable to recover it. 
00:27:18.037 [2024-11-20 07:23:22.313149] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:27:18.037 [2024-11-20 07:23:22.313309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.037 [2024-11-20 07:23:22.313256] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:27:18.037 [2024-11-20 07:23:22.313342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:18.037 qpair failed and we were unable to recover it. 00:27:18.037 [2024-11-20 07:23:22.313362] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:27:18.037 [2024-11-20 07:23:22.313363] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:27:18.037 [2024-11-20 07:23:22.313559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.037 [2024-11-20 07:23:22.313591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:18.037 qpair failed and we were unable to recover it. 00:27:18.037 [2024-11-20 07:23:22.313763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.037 [2024-11-20 07:23:22.313795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:18.037 qpair failed and we were unable to recover it. 00:27:18.037 [2024-11-20 07:23:22.313984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.037 [2024-11-20 07:23:22.314017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:18.037 qpair failed and we were unable to recover it. 
00:27:18.037 [2024-11-20 07:23:22.314260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.037 [2024-11-20 07:23:22.314293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:18.037 qpair failed and we were unable to recover it. 00:27:18.037 [2024-11-20 07:23:22.314480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.037 [2024-11-20 07:23:22.314512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:18.037 qpair failed and we were unable to recover it. 00:27:18.037 [2024-11-20 07:23:22.314730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.037 [2024-11-20 07:23:22.314782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:18.037 qpair failed and we were unable to recover it. 00:27:18.037 [2024-11-20 07:23:22.315032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.037 [2024-11-20 07:23:22.315069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:18.037 qpair failed and we were unable to recover it. 00:27:18.037 [2024-11-20 07:23:22.315347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.037 [2024-11-20 07:23:22.315380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:18.037 qpair failed and we were unable to recover it. 
00:27:18.037 [2024-11-20 07:23:22.315650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.037 [2024-11-20 07:23:22.315684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:18.037 qpair failed and we were unable to recover it. 00:27:18.037 [2024-11-20 07:23:22.315867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.037 [2024-11-20 07:23:22.315899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:18.037 qpair failed and we were unable to recover it. 00:27:18.037 [2024-11-20 07:23:22.316098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.037 [2024-11-20 07:23:22.316131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:18.037 qpair failed and we were unable to recover it. 00:27:18.037 [2024-11-20 07:23:22.316369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.037 [2024-11-20 07:23:22.316402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:18.037 qpair failed and we were unable to recover it. 00:27:18.037 [2024-11-20 07:23:22.316593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.037 [2024-11-20 07:23:22.316625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:18.037 qpair failed and we were unable to recover it. 
00:27:18.037 [2024-11-20 07:23:22.316865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.037 [2024-11-20 07:23:22.316897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:18.037 qpair failed and we were unable to recover it. 00:27:18.037 [2024-11-20 07:23:22.317097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.037 [2024-11-20 07:23:22.317132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:18.037 qpair failed and we were unable to recover it. 00:27:18.037 [2024-11-20 07:23:22.317343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.037 [2024-11-20 07:23:22.317375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:18.037 qpair failed and we were unable to recover it. 00:27:18.037 [2024-11-20 07:23:22.317634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.037 [2024-11-20 07:23:22.317666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:18.037 qpair failed and we were unable to recover it. 00:27:18.037 [2024-11-20 07:23:22.317858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.037 [2024-11-20 07:23:22.317891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:18.037 qpair failed and we were unable to recover it. 
00:27:18.037 [2024-11-20 07:23:22.318156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.037 [2024-11-20 07:23:22.318189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:18.037 qpair failed and we were unable to recover it. 00:27:18.037 [2024-11-20 07:23:22.318467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.037 [2024-11-20 07:23:22.318500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:18.037 qpair failed and we were unable to recover it. 00:27:18.037 [2024-11-20 07:23:22.318678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.037 [2024-11-20 07:23:22.318709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:18.037 qpair failed and we were unable to recover it. 00:27:18.037 [2024-11-20 07:23:22.318974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.037 [2024-11-20 07:23:22.319008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:18.037 qpair failed and we were unable to recover it. 00:27:18.038 [2024-11-20 07:23:22.319249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.038 [2024-11-20 07:23:22.319281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:18.038 qpair failed and we were unable to recover it. 
00:27:18.038 [2024-11-20 07:23:22.319572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.038 [2024-11-20 07:23:22.319605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:18.038 qpair failed and we were unable to recover it. 00:27:18.038 [2024-11-20 07:23:22.319871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.038 [2024-11-20 07:23:22.319904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:18.038 qpair failed and we were unable to recover it. 00:27:18.038 [2024-11-20 07:23:22.320061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.038 [2024-11-20 07:23:22.320094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:18.038 qpair failed and we were unable to recover it. 00:27:18.038 [2024-11-20 07:23:22.320363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.038 [2024-11-20 07:23:22.320395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:18.038 qpair failed and we were unable to recover it. 00:27:18.038 [2024-11-20 07:23:22.320662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.038 [2024-11-20 07:23:22.320695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:18.038 qpair failed and we were unable to recover it. 
00:27:18.038 [2024-11-20 07:23:22.320966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.038 [2024-11-20 07:23:22.321001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:18.038 qpair failed and we were unable to recover it. 00:27:18.038 [2024-11-20 07:23:22.321293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.038 [2024-11-20 07:23:22.321325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:18.038 qpair failed and we were unable to recover it. 00:27:18.038 [2024-11-20 07:23:22.321502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.038 [2024-11-20 07:23:22.321533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:18.038 qpair failed and we were unable to recover it. 00:27:18.038 [2024-11-20 07:23:22.321660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.038 [2024-11-20 07:23:22.321693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:18.038 qpair failed and we were unable to recover it. 00:27:18.038 [2024-11-20 07:23:22.321972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.038 [2024-11-20 07:23:22.322012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:18.038 qpair failed and we were unable to recover it. 
00:27:18.038 [2024-11-20 07:23:22.322133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.038 [2024-11-20 07:23:22.322167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:18.038 qpair failed and we were unable to recover it. 00:27:18.038 [2024-11-20 07:23:22.322409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.038 [2024-11-20 07:23:22.322441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:18.038 qpair failed and we were unable to recover it. 00:27:18.038 [2024-11-20 07:23:22.322701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.038 [2024-11-20 07:23:22.322733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:18.038 qpair failed and we were unable to recover it. 00:27:18.038 [2024-11-20 07:23:22.323022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.038 [2024-11-20 07:23:22.323055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:18.038 qpair failed and we were unable to recover it. 00:27:18.038 [2024-11-20 07:23:22.323322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.038 [2024-11-20 07:23:22.323354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:18.038 qpair failed and we were unable to recover it. 
00:27:18.038 [2024-11-20 07:23:22.323599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.038 [2024-11-20 07:23:22.323631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:18.038 qpair failed and we were unable to recover it. 00:27:18.038 [2024-11-20 07:23:22.323920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.038 [2024-11-20 07:23:22.323960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:18.038 qpair failed and we were unable to recover it. 00:27:18.038 [2024-11-20 07:23:22.324216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.038 [2024-11-20 07:23:22.324250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:18.038 qpair failed and we were unable to recover it. 00:27:18.038 [2024-11-20 07:23:22.324501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.038 [2024-11-20 07:23:22.324533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:18.038 qpair failed and we were unable to recover it. 00:27:18.038 [2024-11-20 07:23:22.324819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.038 [2024-11-20 07:23:22.324851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:18.038 qpair failed and we were unable to recover it. 
00:27:18.038 [2024-11-20 07:23:22.325119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.038 [2024-11-20 07:23:22.325154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:18.038 qpair failed and we were unable to recover it. 00:27:18.038 [2024-11-20 07:23:22.325444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.038 [2024-11-20 07:23:22.325476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:18.038 qpair failed and we were unable to recover it. 00:27:18.038 [2024-11-20 07:23:22.325710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.038 [2024-11-20 07:23:22.325743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:18.038 qpair failed and we were unable to recover it. 00:27:18.038 [2024-11-20 07:23:22.325993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.038 [2024-11-20 07:23:22.326028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:18.038 qpair failed and we were unable to recover it. 00:27:18.038 [2024-11-20 07:23:22.326243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.038 [2024-11-20 07:23:22.326276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:18.038 qpair failed and we were unable to recover it. 
00:27:18.038 [2024-11-20 07:23:22.326468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.038 [2024-11-20 07:23:22.326501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:18.038 qpair failed and we were unable to recover it. 00:27:18.038 [2024-11-20 07:23:22.326757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.038 [2024-11-20 07:23:22.326789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:18.038 qpair failed and we were unable to recover it. 00:27:18.038 [2024-11-20 07:23:22.327052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.038 [2024-11-20 07:23:22.327088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:18.038 qpair failed and we were unable to recover it. 00:27:18.038 [2024-11-20 07:23:22.327207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.038 [2024-11-20 07:23:22.327240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:18.038 qpair failed and we were unable to recover it. 00:27:18.038 [2024-11-20 07:23:22.327482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.038 [2024-11-20 07:23:22.327515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:18.038 qpair failed and we were unable to recover it. 
00:27:18.038 [2024-11-20 07:23:22.327782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.039 [2024-11-20 07:23:22.327814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:18.039 qpair failed and we were unable to recover it. 00:27:18.039 [2024-11-20 07:23:22.328077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.039 [2024-11-20 07:23:22.328116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:18.039 qpair failed and we were unable to recover it. 00:27:18.039 [2024-11-20 07:23:22.328403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.039 [2024-11-20 07:23:22.328438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:18.039 qpair failed and we were unable to recover it. 00:27:18.039 [2024-11-20 07:23:22.328708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.039 [2024-11-20 07:23:22.328741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:18.039 qpair failed and we were unable to recover it. 00:27:18.039 [2024-11-20 07:23:22.328936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.039 [2024-11-20 07:23:22.328986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:18.039 qpair failed and we were unable to recover it. 
00:27:18.039 [2024-11-20 07:23:22.329233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.039 [2024-11-20 07:23:22.329268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:18.039 qpair failed and we were unable to recover it. 00:27:18.039 [2024-11-20 07:23:22.329509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.039 [2024-11-20 07:23:22.329550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:18.039 qpair failed and we were unable to recover it. 00:27:18.039 [2024-11-20 07:23:22.329838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.039 [2024-11-20 07:23:22.329873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:18.039 qpair failed and we were unable to recover it. 00:27:18.039 [2024-11-20 07:23:22.330133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.039 [2024-11-20 07:23:22.330168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:18.039 qpair failed and we were unable to recover it. 00:27:18.039 [2024-11-20 07:23:22.330405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.039 [2024-11-20 07:23:22.330439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:18.039 qpair failed and we were unable to recover it. 
00:27:18.039 [2024-11-20 07:23:22.330656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.039 [2024-11-20 07:23:22.330689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:18.039 qpair failed and we were unable to recover it. 00:27:18.039 [2024-11-20 07:23:22.330928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.039 [2024-11-20 07:23:22.330975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:18.039 qpair failed and we were unable to recover it. 00:27:18.039 [2024-11-20 07:23:22.331216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.039 [2024-11-20 07:23:22.331249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:18.039 qpair failed and we were unable to recover it. 00:27:18.039 [2024-11-20 07:23:22.331510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.039 [2024-11-20 07:23:22.331543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:18.039 qpair failed and we were unable to recover it. 00:27:18.039 [2024-11-20 07:23:22.331806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.039 [2024-11-20 07:23:22.331838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:18.039 qpair failed and we were unable to recover it. 
00:27:18.039 [2024-11-20 07:23:22.332117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.039 [2024-11-20 07:23:22.332151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:18.039 qpair failed and we were unable to recover it. 00:27:18.039 [2024-11-20 07:23:22.332289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.039 [2024-11-20 07:23:22.332323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:18.039 qpair failed and we were unable to recover it. 00:27:18.039 [2024-11-20 07:23:22.332563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.039 [2024-11-20 07:23:22.332598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:18.039 qpair failed and we were unable to recover it. 00:27:18.039 [2024-11-20 07:23:22.332895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.039 [2024-11-20 07:23:22.332929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:18.039 qpair failed and we were unable to recover it. 00:27:18.039 [2024-11-20 07:23:22.333192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.039 [2024-11-20 07:23:22.333225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:18.039 qpair failed and we were unable to recover it. 
00:27:18.039 [2024-11-20 07:23:22.333434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.039 [2024-11-20 07:23:22.333467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:18.039 qpair failed and we were unable to recover it. 00:27:18.039 [2024-11-20 07:23:22.333690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.039 [2024-11-20 07:23:22.333724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:18.039 qpair failed and we were unable to recover it. 00:27:18.039 [2024-11-20 07:23:22.333964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.039 [2024-11-20 07:23:22.333999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:18.039 qpair failed and we were unable to recover it. 00:27:18.039 [2024-11-20 07:23:22.334254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.039 [2024-11-20 07:23:22.334287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:18.039 qpair failed and we were unable to recover it. 00:27:18.039 [2024-11-20 07:23:22.334488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.039 [2024-11-20 07:23:22.334522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:18.039 qpair failed and we were unable to recover it. 
00:27:18.039 [2024-11-20 07:23:22.334718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.039 [2024-11-20 07:23:22.334752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:18.039 qpair failed and we were unable to recover it. 00:27:18.039 [2024-11-20 07:23:22.335044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.039 [2024-11-20 07:23:22.335079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:18.039 qpair failed and we were unable to recover it. 00:27:18.039 [2024-11-20 07:23:22.335366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.039 [2024-11-20 07:23:22.335400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:18.039 qpair failed and we were unable to recover it. 00:27:18.039 [2024-11-20 07:23:22.335592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.039 [2024-11-20 07:23:22.335625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:18.039 qpair failed and we were unable to recover it. 00:27:18.039 [2024-11-20 07:23:22.335883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.039 [2024-11-20 07:23:22.335918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:18.039 qpair failed and we were unable to recover it. 
00:27:18.039 [2024-11-20 07:23:22.336151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.039 [2024-11-20 07:23:22.336207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:18.039 qpair failed and we were unable to recover it.
[the connect()/qpair error pair above repeated 29 more times for tqpair=0x7f0e44000b90, addr=10.0.0.2, port=4420, between 07:23:22.336400 and 07:23:22.344020]
00:27:18.040 [2024-11-20 07:23:22.344162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.040 [2024-11-20 07:23:22.344218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:18.040 qpair failed and we were unable to recover it. 00:27:18.040 [2024-11-20 07:23:22.344507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.040 [2024-11-20 07:23:22.344541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:18.040 qpair failed and we were unable to recover it. 00:27:18.040 [2024-11-20 07:23:22.344724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.040 [2024-11-20 07:23:22.344757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:18.040 qpair failed and we were unable to recover it. 00:27:18.040 [2024-11-20 07:23:22.345022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.041 [2024-11-20 07:23:22.345056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:18.041 qpair failed and we were unable to recover it. 00:27:18.041 [2024-11-20 07:23:22.345344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.041 [2024-11-20 07:23:22.345376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:18.041 qpair failed and we were unable to recover it. 
00:27:18.041 [2024-11-20 07:23:22.345606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.041 [2024-11-20 07:23:22.345638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:18.041 qpair failed and we were unable to recover it. 00:27:18.041 [2024-11-20 07:23:22.345822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.041 [2024-11-20 07:23:22.345854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:18.041 qpair failed and we were unable to recover it. 00:27:18.041 [2024-11-20 07:23:22.346141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.041 [2024-11-20 07:23:22.346174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:18.041 qpair failed and we were unable to recover it. 00:27:18.041 [2024-11-20 07:23:22.346443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.041 [2024-11-20 07:23:22.346475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:18.041 qpair failed and we were unable to recover it. 00:27:18.041 [2024-11-20 07:23:22.346747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.041 [2024-11-20 07:23:22.346778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:18.041 qpair failed and we were unable to recover it. 
00:27:18.041 [2024-11-20 07:23:22.346993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.041 [2024-11-20 07:23:22.347026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:18.041 qpair failed and we were unable to recover it. 00:27:18.041 [2024-11-20 07:23:22.347307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.041 [2024-11-20 07:23:22.347339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:18.041 qpair failed and we were unable to recover it. 00:27:18.041 [2024-11-20 07:23:22.347605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.041 [2024-11-20 07:23:22.347637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:18.041 qpair failed and we were unable to recover it. 00:27:18.041 [2024-11-20 07:23:22.347916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.041 [2024-11-20 07:23:22.347957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:18.041 qpair failed and we were unable to recover it. 00:27:18.041 [2024-11-20 07:23:22.348230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.041 [2024-11-20 07:23:22.348262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:18.041 qpair failed and we were unable to recover it. 
00:27:18.041 [2024-11-20 07:23:22.348540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.041 [2024-11-20 07:23:22.348572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:18.041 qpair failed and we were unable to recover it. 00:27:18.041 [2024-11-20 07:23:22.348851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.041 [2024-11-20 07:23:22.348884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:18.041 qpair failed and we were unable to recover it. 00:27:18.041 [2024-11-20 07:23:22.349165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.041 [2024-11-20 07:23:22.349200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:18.041 qpair failed and we were unable to recover it. 00:27:18.041 [2024-11-20 07:23:22.349476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.041 [2024-11-20 07:23:22.349511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:18.041 qpair failed and we were unable to recover it. 00:27:18.041 [2024-11-20 07:23:22.349739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.041 [2024-11-20 07:23:22.349774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:18.041 qpair failed and we were unable to recover it. 
00:27:18.041 [2024-11-20 07:23:22.349969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.041 [2024-11-20 07:23:22.350004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:18.041 qpair failed and we were unable to recover it. 00:27:18.041 [2024-11-20 07:23:22.350129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.041 [2024-11-20 07:23:22.350162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:18.041 qpair failed and we were unable to recover it. 00:27:18.041 [2024-11-20 07:23:22.350354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.041 [2024-11-20 07:23:22.350386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:18.041 qpair failed and we were unable to recover it. 00:27:18.041 [2024-11-20 07:23:22.350534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.041 [2024-11-20 07:23:22.350570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:18.041 qpair failed and we were unable to recover it. 00:27:18.041 [2024-11-20 07:23:22.350859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.041 [2024-11-20 07:23:22.350891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:18.041 qpair failed and we were unable to recover it. 
00:27:18.041 [2024-11-20 07:23:22.351160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.041 [2024-11-20 07:23:22.351194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:18.041 qpair failed and we were unable to recover it. 00:27:18.041 [2024-11-20 07:23:22.351480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.041 [2024-11-20 07:23:22.351511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:18.041 qpair failed and we were unable to recover it. 00:27:18.041 [2024-11-20 07:23:22.351764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.041 [2024-11-20 07:23:22.351803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:18.041 qpair failed and we were unable to recover it. 00:27:18.041 [2024-11-20 07:23:22.352052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.041 [2024-11-20 07:23:22.352086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:18.041 qpair failed and we were unable to recover it. 00:27:18.041 [2024-11-20 07:23:22.352326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.041 [2024-11-20 07:23:22.352356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:18.041 qpair failed and we were unable to recover it. 
00:27:18.041 [2024-11-20 07:23:22.352606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.041 [2024-11-20 07:23:22.352638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:18.041 qpair failed and we were unable to recover it. 00:27:18.042 [2024-11-20 07:23:22.352897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.042 [2024-11-20 07:23:22.352928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:18.042 qpair failed and we were unable to recover it. 00:27:18.042 [2024-11-20 07:23:22.353220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.042 [2024-11-20 07:23:22.353251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:18.042 qpair failed and we were unable to recover it. 00:27:18.042 [2024-11-20 07:23:22.353431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.042 [2024-11-20 07:23:22.353462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:18.042 qpair failed and we were unable to recover it. 00:27:18.042 [2024-11-20 07:23:22.353646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.042 [2024-11-20 07:23:22.353677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:18.042 qpair failed and we were unable to recover it. 
00:27:18.042 [2024-11-20 07:23:22.353941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.042 [2024-11-20 07:23:22.353990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:18.042 qpair failed and we were unable to recover it. 00:27:18.042 [2024-11-20 07:23:22.354224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.042 [2024-11-20 07:23:22.354256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:18.042 qpair failed and we were unable to recover it. 00:27:18.042 [2024-11-20 07:23:22.354449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.042 [2024-11-20 07:23:22.354482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:18.042 qpair failed and we were unable to recover it. 00:27:18.042 [2024-11-20 07:23:22.354674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.042 [2024-11-20 07:23:22.354708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:18.042 qpair failed and we were unable to recover it. 00:27:18.042 [2024-11-20 07:23:22.354961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.042 [2024-11-20 07:23:22.354998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:18.042 qpair failed and we were unable to recover it. 
00:27:18.042 [2024-11-20 07:23:22.355243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.042 [2024-11-20 07:23:22.355279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:18.042 qpair failed and we were unable to recover it. 00:27:18.042 [2024-11-20 07:23:22.355565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.042 [2024-11-20 07:23:22.355602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:18.042 qpair failed and we were unable to recover it. 00:27:18.042 [2024-11-20 07:23:22.355894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.042 [2024-11-20 07:23:22.355928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:18.042 qpair failed and we were unable to recover it. 00:27:18.042 [2024-11-20 07:23:22.356135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.042 [2024-11-20 07:23:22.356170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:18.042 qpair failed and we were unable to recover it. 00:27:18.042 [2024-11-20 07:23:22.356438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.042 [2024-11-20 07:23:22.356471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:18.042 qpair failed and we were unable to recover it. 
00:27:18.042 [2024-11-20 07:23:22.356745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.042 [2024-11-20 07:23:22.356777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:18.042 qpair failed and we were unable to recover it. 00:27:18.042 [2024-11-20 07:23:22.357065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.042 [2024-11-20 07:23:22.357100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:18.042 qpair failed and we were unable to recover it. 00:27:18.042 [2024-11-20 07:23:22.357365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.042 [2024-11-20 07:23:22.357399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:18.042 qpair failed and we were unable to recover it. 00:27:18.042 [2024-11-20 07:23:22.357627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.042 [2024-11-20 07:23:22.357661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:18.042 qpair failed and we were unable to recover it. 00:27:18.042 [2024-11-20 07:23:22.357958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.042 [2024-11-20 07:23:22.357992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:18.042 qpair failed and we were unable to recover it. 
00:27:18.042 [2024-11-20 07:23:22.358184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.042 [2024-11-20 07:23:22.358215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:18.042 qpair failed and we were unable to recover it. 00:27:18.042 [2024-11-20 07:23:22.358451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.042 [2024-11-20 07:23:22.358484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:18.042 qpair failed and we were unable to recover it. 00:27:18.042 [2024-11-20 07:23:22.358691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.042 [2024-11-20 07:23:22.358724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:18.042 qpair failed and we were unable to recover it. 00:27:18.042 [2024-11-20 07:23:22.358995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.042 [2024-11-20 07:23:22.359027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:18.042 qpair failed and we were unable to recover it. 00:27:18.042 [2024-11-20 07:23:22.359265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.042 [2024-11-20 07:23:22.359304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:18.042 qpair failed and we were unable to recover it. 
00:27:18.042 [2024-11-20 07:23:22.359492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.042 [2024-11-20 07:23:22.359525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:18.042 qpair failed and we were unable to recover it. 00:27:18.042 [2024-11-20 07:23:22.359709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.042 [2024-11-20 07:23:22.359741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:18.042 qpair failed and we were unable to recover it. 00:27:18.042 [2024-11-20 07:23:22.360007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.042 [2024-11-20 07:23:22.360041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:18.042 qpair failed and we were unable to recover it. 00:27:18.042 [2024-11-20 07:23:22.360258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.042 [2024-11-20 07:23:22.360291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:18.042 qpair failed and we were unable to recover it. 00:27:18.042 [2024-11-20 07:23:22.360576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.042 [2024-11-20 07:23:22.360608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420 00:27:18.042 qpair failed and we were unable to recover it. 
00:27:18.044 [2024-11-20 07:23:22.377444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.044 [2024-11-20 07:23:22.377514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:18.044 qpair failed and we were unable to recover it.
00:27:18.046 [2024-11-20 07:23:22.387516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.046 [2024-11-20 07:23:22.387547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:18.046 qpair failed and we were unable to recover it. 00:27:18.046 [2024-11-20 07:23:22.387837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.046 [2024-11-20 07:23:22.387869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:18.046 qpair failed and we were unable to recover it. 00:27:18.046 [2024-11-20 07:23:22.388093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.046 [2024-11-20 07:23:22.388143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:18.046 qpair failed and we were unable to recover it. 00:27:18.046 [2024-11-20 07:23:22.388290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.046 [2024-11-20 07:23:22.388323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:18.046 qpair failed and we were unable to recover it. 00:27:18.046 [2024-11-20 07:23:22.388562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.046 [2024-11-20 07:23:22.388594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:18.046 qpair failed and we were unable to recover it. 
00:27:18.046 [2024-11-20 07:23:22.388862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.046 [2024-11-20 07:23:22.388895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:18.046 qpair failed and we were unable to recover it. 00:27:18.046 [2024-11-20 07:23:22.389096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.046 [2024-11-20 07:23:22.389129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:18.046 qpair failed and we were unable to recover it. 00:27:18.046 [2024-11-20 07:23:22.389312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.046 [2024-11-20 07:23:22.389345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:18.046 qpair failed and we were unable to recover it. 00:27:18.046 [2024-11-20 07:23:22.389606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.046 [2024-11-20 07:23:22.389637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:18.046 qpair failed and we were unable to recover it. 00:27:18.046 [2024-11-20 07:23:22.389821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.046 [2024-11-20 07:23:22.389853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:18.046 qpair failed and we were unable to recover it. 
00:27:18.046 [2024-11-20 07:23:22.390093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.046 [2024-11-20 07:23:22.390127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:18.046 qpair failed and we were unable to recover it. 00:27:18.046 [2024-11-20 07:23:22.390388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.046 [2024-11-20 07:23:22.390420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:18.046 qpair failed and we were unable to recover it. 00:27:18.046 [2024-11-20 07:23:22.390594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.046 [2024-11-20 07:23:22.390626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:18.046 qpair failed and we were unable to recover it. 00:27:18.046 [2024-11-20 07:23:22.390807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.046 [2024-11-20 07:23:22.390839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:18.046 qpair failed and we were unable to recover it. 00:27:18.046 [2024-11-20 07:23:22.391049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.046 [2024-11-20 07:23:22.391083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:18.046 qpair failed and we were unable to recover it. 
00:27:18.046 [2024-11-20 07:23:22.391351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.046 [2024-11-20 07:23:22.391383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:18.046 qpair failed and we were unable to recover it. 00:27:18.046 [2024-11-20 07:23:22.391688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.046 [2024-11-20 07:23:22.391721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:18.046 qpair failed and we were unable to recover it. 00:27:18.046 [2024-11-20 07:23:22.391978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.046 [2024-11-20 07:23:22.392012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:18.046 qpair failed and we were unable to recover it. 00:27:18.046 [2024-11-20 07:23:22.392306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.046 [2024-11-20 07:23:22.392338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:18.046 qpair failed and we were unable to recover it. 00:27:18.046 [2024-11-20 07:23:22.392591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.046 [2024-11-20 07:23:22.392623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:18.046 qpair failed and we were unable to recover it. 
00:27:18.046 [2024-11-20 07:23:22.392911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.046 [2024-11-20 07:23:22.392944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:18.046 qpair failed and we were unable to recover it. 00:27:18.046 [2024-11-20 07:23:22.393174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.046 [2024-11-20 07:23:22.393207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:18.046 qpair failed and we were unable to recover it. 00:27:18.046 [2024-11-20 07:23:22.393449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.046 [2024-11-20 07:23:22.393481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:18.046 qpair failed and we were unable to recover it. 00:27:18.046 [2024-11-20 07:23:22.393744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.046 [2024-11-20 07:23:22.393776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:18.046 qpair failed and we were unable to recover it. 00:27:18.046 [2024-11-20 07:23:22.394063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.046 [2024-11-20 07:23:22.394097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:18.046 qpair failed and we were unable to recover it. 
00:27:18.046 [2024-11-20 07:23:22.394229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.047 [2024-11-20 07:23:22.394262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:18.047 qpair failed and we were unable to recover it. 00:27:18.047 [2024-11-20 07:23:22.394550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.047 [2024-11-20 07:23:22.394583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:18.047 qpair failed and we were unable to recover it. 00:27:18.047 [2024-11-20 07:23:22.394824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.047 [2024-11-20 07:23:22.394856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:18.047 qpair failed and we were unable to recover it. 00:27:18.047 [2024-11-20 07:23:22.395121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.047 [2024-11-20 07:23:22.395154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:18.047 qpair failed and we were unable to recover it. 00:27:18.047 [2024-11-20 07:23:22.395404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.047 [2024-11-20 07:23:22.395436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:18.047 qpair failed and we were unable to recover it. 
00:27:18.047 [2024-11-20 07:23:22.395705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.047 [2024-11-20 07:23:22.395737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:18.047 qpair failed and we were unable to recover it. 00:27:18.047 [2024-11-20 07:23:22.395942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.047 [2024-11-20 07:23:22.395997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:18.047 qpair failed and we were unable to recover it. 00:27:18.047 [2024-11-20 07:23:22.396265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.047 [2024-11-20 07:23:22.396297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:18.047 qpair failed and we were unable to recover it. 00:27:18.047 [2024-11-20 07:23:22.396502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.047 [2024-11-20 07:23:22.396535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:18.047 qpair failed and we were unable to recover it. 00:27:18.047 [2024-11-20 07:23:22.396718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.047 [2024-11-20 07:23:22.396750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:18.047 qpair failed and we were unable to recover it. 
00:27:18.047 [2024-11-20 07:23:22.396933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.047 [2024-11-20 07:23:22.396973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:18.047 qpair failed and we were unable to recover it. 00:27:18.047 [2024-11-20 07:23:22.397241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.047 [2024-11-20 07:23:22.397273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:18.047 qpair failed and we were unable to recover it. 00:27:18.047 [2024-11-20 07:23:22.397508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.047 [2024-11-20 07:23:22.397540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:18.047 qpair failed and we were unable to recover it. 00:27:18.047 [2024-11-20 07:23:22.397807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.047 [2024-11-20 07:23:22.397839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:18.047 qpair failed and we were unable to recover it. 00:27:18.047 [2024-11-20 07:23:22.398086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.047 [2024-11-20 07:23:22.398119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:18.047 qpair failed and we were unable to recover it. 
00:27:18.047 [2024-11-20 07:23:22.398379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.047 [2024-11-20 07:23:22.398411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:18.047 qpair failed and we were unable to recover it. 00:27:18.047 [2024-11-20 07:23:22.398596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.047 [2024-11-20 07:23:22.398628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:18.047 qpair failed and we were unable to recover it. 00:27:18.047 [2024-11-20 07:23:22.398839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.047 [2024-11-20 07:23:22.398877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:18.047 qpair failed and we were unable to recover it. 00:27:18.047 [2024-11-20 07:23:22.399072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.047 [2024-11-20 07:23:22.399106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:18.047 qpair failed and we were unable to recover it. 00:27:18.047 [2024-11-20 07:23:22.399238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.047 [2024-11-20 07:23:22.399270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:18.047 qpair failed and we were unable to recover it. 
00:27:18.047 [2024-11-20 07:23:22.399484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.047 [2024-11-20 07:23:22.399516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:18.047 qpair failed and we were unable to recover it. 00:27:18.047 [2024-11-20 07:23:22.399777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.047 [2024-11-20 07:23:22.399809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:18.047 qpair failed and we were unable to recover it. 00:27:18.047 [2024-11-20 07:23:22.400061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.047 [2024-11-20 07:23:22.400095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:18.047 qpair failed and we were unable to recover it. 00:27:18.047 [2024-11-20 07:23:22.400282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.047 [2024-11-20 07:23:22.400314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:18.047 qpair failed and we were unable to recover it. 00:27:18.047 [2024-11-20 07:23:22.400600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.047 [2024-11-20 07:23:22.400633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:18.047 qpair failed and we were unable to recover it. 
00:27:18.047 [2024-11-20 07:23:22.400905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.047 [2024-11-20 07:23:22.400938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:18.047 qpair failed and we were unable to recover it. 00:27:18.047 [2024-11-20 07:23:22.401157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.047 [2024-11-20 07:23:22.401190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:18.047 qpair failed and we were unable to recover it. 00:27:18.047 [2024-11-20 07:23:22.401429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.047 [2024-11-20 07:23:22.401461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:18.047 qpair failed and we were unable to recover it. 00:27:18.047 [2024-11-20 07:23:22.401639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.047 [2024-11-20 07:23:22.401671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:18.047 qpair failed and we were unable to recover it. 00:27:18.047 [2024-11-20 07:23:22.401967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.047 [2024-11-20 07:23:22.402000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:18.047 qpair failed and we were unable to recover it. 
00:27:18.047 [2024-11-20 07:23:22.402188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.047 [2024-11-20 07:23:22.402220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:18.047 qpair failed and we were unable to recover it. 00:27:18.047 [2024-11-20 07:23:22.402432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.048 [2024-11-20 07:23:22.402465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:18.048 qpair failed and we were unable to recover it. 00:27:18.048 [2024-11-20 07:23:22.402594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.048 [2024-11-20 07:23:22.402626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:18.048 qpair failed and we were unable to recover it. 00:27:18.048 [2024-11-20 07:23:22.402812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.048 [2024-11-20 07:23:22.402844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:18.048 qpair failed and we were unable to recover it. 00:27:18.048 [2024-11-20 07:23:22.403029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.048 [2024-11-20 07:23:22.403064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:18.048 qpair failed and we were unable to recover it. 
00:27:18.048 [2024-11-20 07:23:22.403328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.048 [2024-11-20 07:23:22.403361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:18.048 qpair failed and we were unable to recover it. 00:27:18.048 [2024-11-20 07:23:22.403546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.048 [2024-11-20 07:23:22.403577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:18.048 qpair failed and we were unable to recover it. 00:27:18.048 [2024-11-20 07:23:22.403838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.048 [2024-11-20 07:23:22.403869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:18.048 qpair failed and we were unable to recover it. 00:27:18.048 [2024-11-20 07:23:22.404160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.048 [2024-11-20 07:23:22.404194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:18.048 qpair failed and we were unable to recover it. 00:27:18.048 [2024-11-20 07:23:22.404390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.048 [2024-11-20 07:23:22.404422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:18.048 qpair failed and we were unable to recover it. 
00:27:18.048 [2024-11-20 07:23:22.404656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.048 [2024-11-20 07:23:22.404688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:18.048 qpair failed and we were unable to recover it. 00:27:18.048 [2024-11-20 07:23:22.404872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.048 [2024-11-20 07:23:22.404905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:18.048 qpair failed and we were unable to recover it. 00:27:18.048 [2024-11-20 07:23:22.405149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.048 [2024-11-20 07:23:22.405182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:18.048 qpair failed and we were unable to recover it. 00:27:18.048 [2024-11-20 07:23:22.405309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.048 [2024-11-20 07:23:22.405341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:18.048 qpair failed and we were unable to recover it. 00:27:18.048 [2024-11-20 07:23:22.405613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.048 [2024-11-20 07:23:22.405646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:18.048 qpair failed and we were unable to recover it. 
00:27:18.048 [2024-11-20 07:23:22.405839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.048 [2024-11-20 07:23:22.405871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:18.048 qpair failed and we were unable to recover it. 00:27:18.048 [2024-11-20 07:23:22.406133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.048 [2024-11-20 07:23:22.406167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:18.048 qpair failed and we were unable to recover it. 00:27:18.048 [2024-11-20 07:23:22.406430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.048 [2024-11-20 07:23:22.406462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:18.048 qpair failed and we were unable to recover it. 00:27:18.048 [2024-11-20 07:23:22.406702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.048 [2024-11-20 07:23:22.406734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:18.048 qpair failed and we were unable to recover it. 00:27:18.048 [2024-11-20 07:23:22.406996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.048 [2024-11-20 07:23:22.407030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:18.048 qpair failed and we were unable to recover it. 
00:27:18.048 [2024-11-20 07:23:22.407321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.048 [2024-11-20 07:23:22.407354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:18.048 qpair failed and we were unable to recover it. 00:27:18.048 [2024-11-20 07:23:22.407559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.048 [2024-11-20 07:23:22.407590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:18.048 qpair failed and we were unable to recover it. 00:27:18.048 [2024-11-20 07:23:22.407731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.048 [2024-11-20 07:23:22.407762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:18.048 qpair failed and we were unable to recover it. 00:27:18.048 [2024-11-20 07:23:22.407958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.048 [2024-11-20 07:23:22.407992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:18.048 qpair failed and we were unable to recover it. 00:27:18.048 [2024-11-20 07:23:22.408254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.048 [2024-11-20 07:23:22.408285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:18.048 qpair failed and we were unable to recover it. 
00:27:18.048 [2024-11-20 07:23:22.408462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.048 [2024-11-20 07:23:22.408494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:18.048 qpair failed and we were unable to recover it. 00:27:18.048 [2024-11-20 07:23:22.408663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.048 [2024-11-20 07:23:22.408695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:18.048 qpair failed and we were unable to recover it. 00:27:18.048 [2024-11-20 07:23:22.408829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.048 [2024-11-20 07:23:22.408867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:18.048 qpair failed and we were unable to recover it. 00:27:18.048 07:23:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:27:18.048 [2024-11-20 07:23:22.409068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.048 [2024-11-20 07:23:22.409102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:18.048 qpair failed and we were unable to recover it. 00:27:18.048 [2024-11-20 07:23:22.409289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.048 [2024-11-20 07:23:22.409322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:18.048 qpair failed and we were unable to recover it. 
00:27:18.048 07:23:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@866 -- # return 0 00:27:18.048 [2024-11-20 07:23:22.409587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.048 [2024-11-20 07:23:22.409619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:18.048 qpair failed and we were unable to recover it. 00:27:18.048 07:23:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:18.048 [2024-11-20 07:23:22.409908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.049 [2024-11-20 07:23:22.409941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:18.049 qpair failed and we were unable to recover it. 00:27:18.049 07:23:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:18.049 [2024-11-20 07:23:22.410231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.049 [2024-11-20 07:23:22.410264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:18.049 qpair failed and we were unable to recover it. 00:27:18.049 07:23:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:18.049 [2024-11-20 07:23:22.410502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.049 [2024-11-20 07:23:22.410534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:18.049 qpair failed and we were unable to recover it. 
00:27:18.049 [2024-11-20 07:23:22.410773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.049 [2024-11-20 07:23:22.410805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:18.049 qpair failed and we were unable to recover it. 00:27:18.049 [2024-11-20 07:23:22.411045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.049 [2024-11-20 07:23:22.411080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:18.049 qpair failed and we were unable to recover it. 00:27:18.049 [2024-11-20 07:23:22.411344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.049 [2024-11-20 07:23:22.411377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:18.049 qpair failed and we were unable to recover it. 00:27:18.049 [2024-11-20 07:23:22.411667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.049 [2024-11-20 07:23:22.411699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:18.049 qpair failed and we were unable to recover it. 00:27:18.049 [2024-11-20 07:23:22.411877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.049 [2024-11-20 07:23:22.411915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420 00:27:18.049 qpair failed and we were unable to recover it. 
00:27:18.049 [2024-11-20 07:23:22.412077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.049 [2024-11-20 07:23:22.412118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:18.049 qpair failed and we were unable to recover it.
00:27:18.049 [2024-11-20 07:23:22.412374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.049 [2024-11-20 07:23:22.412409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:18.049 qpair failed and we were unable to recover it.
00:27:18.049 [2024-11-20 07:23:22.412681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.049 [2024-11-20 07:23:22.412712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:18.049 qpair failed and we were unable to recover it.
00:27:18.049 [2024-11-20 07:23:22.412862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.049 [2024-11-20 07:23:22.412894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:18.049 qpair failed and we were unable to recover it.
00:27:18.049 [2024-11-20 07:23:22.413126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.049 [2024-11-20 07:23:22.413160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:18.049 qpair failed and we were unable to recover it.
00:27:18.049 [2024-11-20 07:23:22.413362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.049 [2024-11-20 07:23:22.413393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:18.049 qpair failed and we were unable to recover it.
00:27:18.049 [2024-11-20 07:23:22.413655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.049 [2024-11-20 07:23:22.413686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:18.049 qpair failed and we were unable to recover it.
00:27:18.049 [2024-11-20 07:23:22.413869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.049 [2024-11-20 07:23:22.413901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:18.049 qpair failed and we were unable to recover it.
00:27:18.049 [2024-11-20 07:23:22.414176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.049 [2024-11-20 07:23:22.414210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:18.049 qpair failed and we were unable to recover it.
00:27:18.049 [2024-11-20 07:23:22.414404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.049 [2024-11-20 07:23:22.414436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:18.049 qpair failed and we were unable to recover it.
00:27:18.049 [2024-11-20 07:23:22.414702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.049 [2024-11-20 07:23:22.414734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:18.049 qpair failed and we were unable to recover it.
00:27:18.049 [2024-11-20 07:23:22.414906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.049 [2024-11-20 07:23:22.414938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:18.049 qpair failed and we were unable to recover it.
00:27:18.049 [2024-11-20 07:23:22.415128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.049 [2024-11-20 07:23:22.415159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:18.049 qpair failed and we were unable to recover it.
00:27:18.049 [2024-11-20 07:23:22.415406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.049 [2024-11-20 07:23:22.415438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:18.049 qpair failed and we were unable to recover it.
00:27:18.049 [2024-11-20 07:23:22.415634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.049 [2024-11-20 07:23:22.415666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:18.049 qpair failed and we were unable to recover it.
00:27:18.049 [2024-11-20 07:23:22.415839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.049 [2024-11-20 07:23:22.415871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:18.049 qpair failed and we were unable to recover it.
00:27:18.049 [2024-11-20 07:23:22.416067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.049 [2024-11-20 07:23:22.416100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:18.049 qpair failed and we were unable to recover it.
00:27:18.049 [2024-11-20 07:23:22.416282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.049 [2024-11-20 07:23:22.416314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:18.049 qpair failed and we were unable to recover it.
00:27:18.049 [2024-11-20 07:23:22.416581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.049 [2024-11-20 07:23:22.416612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:18.049 qpair failed and we were unable to recover it.
00:27:18.049 [2024-11-20 07:23:22.416799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.049 [2024-11-20 07:23:22.416831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:18.049 qpair failed and we were unable to recover it.
00:27:18.049 [2024-11-20 07:23:22.417076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.049 [2024-11-20 07:23:22.417111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:18.049 qpair failed and we were unable to recover it.
00:27:18.049 [2024-11-20 07:23:22.417374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.049 [2024-11-20 07:23:22.417405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:18.049 qpair failed and we were unable to recover it.
00:27:18.050 [2024-11-20 07:23:22.417642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.050 [2024-11-20 07:23:22.417675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:18.050 qpair failed and we were unable to recover it.
00:27:18.050 [2024-11-20 07:23:22.417873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.050 [2024-11-20 07:23:22.417906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:18.050 qpair failed and we were unable to recover it.
00:27:18.050 [2024-11-20 07:23:22.418159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.050 [2024-11-20 07:23:22.418192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:18.050 qpair failed and we were unable to recover it.
00:27:18.050 [2024-11-20 07:23:22.418454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.050 [2024-11-20 07:23:22.418486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:18.050 qpair failed and we were unable to recover it.
00:27:18.050 [2024-11-20 07:23:22.418783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.050 [2024-11-20 07:23:22.418814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:18.050 qpair failed and we were unable to recover it.
00:27:18.050 [2024-11-20 07:23:22.419080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.050 [2024-11-20 07:23:22.419114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:18.050 qpair failed and we were unable to recover it.
00:27:18.050 [2024-11-20 07:23:22.419263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.050 [2024-11-20 07:23:22.419294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:18.050 qpair failed and we were unable to recover it.
00:27:18.050 [2024-11-20 07:23:22.419505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.050 [2024-11-20 07:23:22.419537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:18.050 qpair failed and we were unable to recover it.
00:27:18.050 [2024-11-20 07:23:22.419723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.050 [2024-11-20 07:23:22.419754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:18.050 qpair failed and we were unable to recover it.
00:27:18.050 [2024-11-20 07:23:22.420012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.050 [2024-11-20 07:23:22.420045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:18.050 qpair failed and we were unable to recover it.
00:27:18.050 [2024-11-20 07:23:22.420193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.050 [2024-11-20 07:23:22.420225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:18.050 qpair failed and we were unable to recover it.
00:27:18.050 [2024-11-20 07:23:22.420396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.050 [2024-11-20 07:23:22.420427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:18.050 qpair failed and we were unable to recover it.
00:27:18.050 [2024-11-20 07:23:22.420564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.050 [2024-11-20 07:23:22.420596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:18.050 qpair failed and we were unable to recover it.
00:27:18.050 [2024-11-20 07:23:22.420851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.050 [2024-11-20 07:23:22.420883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:18.050 qpair failed and we were unable to recover it.
00:27:18.050 [2024-11-20 07:23:22.421169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.050 [2024-11-20 07:23:22.421203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:18.050 qpair failed and we were unable to recover it.
00:27:18.050 [2024-11-20 07:23:22.421409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.050 [2024-11-20 07:23:22.421443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:18.050 qpair failed and we were unable to recover it.
00:27:18.050 [2024-11-20 07:23:22.421657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.050 [2024-11-20 07:23:22.421688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:18.050 qpair failed and we were unable to recover it.
00:27:18.050 [2024-11-20 07:23:22.421929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.050 [2024-11-20 07:23:22.421970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:18.050 qpair failed and we were unable to recover it.
00:27:18.050 [2024-11-20 07:23:22.422265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.050 [2024-11-20 07:23:22.422307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:18.050 qpair failed and we were unable to recover it.
00:27:18.050 [2024-11-20 07:23:22.422510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.050 [2024-11-20 07:23:22.422543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:18.050 qpair failed and we were unable to recover it.
00:27:18.050 [2024-11-20 07:23:22.422735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.050 [2024-11-20 07:23:22.422768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:18.050 qpair failed and we were unable to recover it.
00:27:18.050 [2024-11-20 07:23:22.422966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.050 [2024-11-20 07:23:22.423001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:18.050 qpair failed and we were unable to recover it.
00:27:18.050 [2024-11-20 07:23:22.423176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.050 [2024-11-20 07:23:22.423209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:18.050 qpair failed and we were unable to recover it.
00:27:18.050 [2024-11-20 07:23:22.423486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.050 [2024-11-20 07:23:22.423518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:18.050 qpair failed and we were unable to recover it.
00:27:18.050 [2024-11-20 07:23:22.423788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.050 [2024-11-20 07:23:22.423820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:18.050 qpair failed and we were unable to recover it.
00:27:18.050 [2024-11-20 07:23:22.424011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.050 [2024-11-20 07:23:22.424044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:18.050 qpair failed and we were unable to recover it.
00:27:18.050 [2024-11-20 07:23:22.424235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.050 [2024-11-20 07:23:22.424268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:18.050 qpair failed and we were unable to recover it.
00:27:18.050 [2024-11-20 07:23:22.424475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.050 [2024-11-20 07:23:22.424506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:18.050 qpair failed and we were unable to recover it.
00:27:18.050 [2024-11-20 07:23:22.424746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.050 [2024-11-20 07:23:22.424778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:18.050 qpair failed and we were unable to recover it.
00:27:18.050 [2024-11-20 07:23:22.424992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.050 [2024-11-20 07:23:22.425024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:18.050 qpair failed and we were unable to recover it.
00:27:18.050 [2024-11-20 07:23:22.425255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.050 [2024-11-20 07:23:22.425287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:18.050 qpair failed and we were unable to recover it.
00:27:18.050 [2024-11-20 07:23:22.425526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.051 [2024-11-20 07:23:22.425566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:18.051 qpair failed and we were unable to recover it.
00:27:18.051 [2024-11-20 07:23:22.425748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.051 [2024-11-20 07:23:22.425780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:18.051 qpair failed and we were unable to recover it.
00:27:18.051 [2024-11-20 07:23:22.426067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.051 [2024-11-20 07:23:22.426099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:18.051 qpair failed and we were unable to recover it.
00:27:18.051 [2024-11-20 07:23:22.426344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.051 [2024-11-20 07:23:22.426376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:18.051 qpair failed and we were unable to recover it.
00:27:18.051 [2024-11-20 07:23:22.426514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.051 [2024-11-20 07:23:22.426545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:18.051 qpair failed and we were unable to recover it.
00:27:18.051 [2024-11-20 07:23:22.426782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.051 [2024-11-20 07:23:22.426814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:18.051 qpair failed and we were unable to recover it.
00:27:18.051 [2024-11-20 07:23:22.427113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.051 [2024-11-20 07:23:22.427146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:18.051 qpair failed and we were unable to recover it.
00:27:18.051 [2024-11-20 07:23:22.427339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.051 [2024-11-20 07:23:22.427372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:18.051 qpair failed and we were unable to recover it.
00:27:18.051 [2024-11-20 07:23:22.427618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.051 [2024-11-20 07:23:22.427649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:18.051 qpair failed and we were unable to recover it.
00:27:18.051 [2024-11-20 07:23:22.427847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.051 [2024-11-20 07:23:22.427879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:18.051 qpair failed and we were unable to recover it.
00:27:18.051 [2024-11-20 07:23:22.428153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.051 [2024-11-20 07:23:22.428186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:18.051 qpair failed and we were unable to recover it.
00:27:18.051 [2024-11-20 07:23:22.428402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.051 [2024-11-20 07:23:22.428435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:18.051 qpair failed and we were unable to recover it.
00:27:18.051 [2024-11-20 07:23:22.428625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.051 [2024-11-20 07:23:22.428658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:18.051 qpair failed and we were unable to recover it.
00:27:18.051 [2024-11-20 07:23:22.428905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.051 [2024-11-20 07:23:22.428936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:18.051 qpair failed and we were unable to recover it.
00:27:18.051 [2024-11-20 07:23:22.429177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.051 [2024-11-20 07:23:22.429210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:18.051 qpair failed and we were unable to recover it.
00:27:18.051 [2024-11-20 07:23:22.429398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.051 [2024-11-20 07:23:22.429432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:18.051 qpair failed and we were unable to recover it.
00:27:18.051 [2024-11-20 07:23:22.429556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.051 [2024-11-20 07:23:22.429590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:18.051 qpair failed and we were unable to recover it.
00:27:18.051 [2024-11-20 07:23:22.429852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.051 [2024-11-20 07:23:22.429884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:18.051 qpair failed and we were unable to recover it.
00:27:18.051 [2024-11-20 07:23:22.430084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.051 [2024-11-20 07:23:22.430118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:18.051 qpair failed and we were unable to recover it.
00:27:18.051 [2024-11-20 07:23:22.430370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.051 [2024-11-20 07:23:22.430402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:18.051 qpair failed and we were unable to recover it.
00:27:18.051 [2024-11-20 07:23:22.430731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.051 [2024-11-20 07:23:22.430763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:18.051 qpair failed and we were unable to recover it.
00:27:18.051 [2024-11-20 07:23:22.430994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.051 [2024-11-20 07:23:22.431028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:18.051 qpair failed and we were unable to recover it.
00:27:18.051 [2024-11-20 07:23:22.431248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.051 [2024-11-20 07:23:22.431281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:18.051 qpair failed and we were unable to recover it.
00:27:18.051 [2024-11-20 07:23:22.431523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.051 [2024-11-20 07:23:22.431556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:18.051 qpair failed and we were unable to recover it.
00:27:18.051 [2024-11-20 07:23:22.431753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.051 [2024-11-20 07:23:22.431785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:18.051 qpair failed and we were unable to recover it.
00:27:18.051 [2024-11-20 07:23:22.431910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.051 [2024-11-20 07:23:22.431943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:18.051 qpair failed and we were unable to recover it.
00:27:18.051 [2024-11-20 07:23:22.432218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.051 [2024-11-20 07:23:22.432250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:18.051 qpair failed and we were unable to recover it.
00:27:18.051 [2024-11-20 07:23:22.432463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.051 [2024-11-20 07:23:22.432516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420
00:27:18.051 qpair failed and we were unable to recover it.
00:27:18.051 [2024-11-20 07:23:22.432837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.051 [2024-11-20 07:23:22.432871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420
00:27:18.051 qpair failed and we were unable to recover it.
00:27:18.051 [2024-11-20 07:23:22.433107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.051 [2024-11-20 07:23:22.433141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420
00:27:18.051 qpair failed and we were unable to recover it.
00:27:18.051 [2024-11-20 07:23:22.433382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.051 [2024-11-20 07:23:22.433413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420
00:27:18.051 qpair failed and we were unable to recover it.
00:27:18.051 [2024-11-20 07:23:22.433656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.051 [2024-11-20 07:23:22.433688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420
00:27:18.052 qpair failed and we were unable to recover it.
00:27:18.052 [2024-11-20 07:23:22.433882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.052 [2024-11-20 07:23:22.433913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420
00:27:18.052 qpair failed and we were unable to recover it.
00:27:18.052 [2024-11-20 07:23:22.434061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.052 [2024-11-20 07:23:22.434095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420
00:27:18.052 qpair failed and we were unable to recover it.
00:27:18.052 [2024-11-20 07:23:22.434241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.052 [2024-11-20 07:23:22.434273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420
00:27:18.052 qpair failed and we were unable to recover it.
00:27:18.052 [2024-11-20 07:23:22.434457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.052 [2024-11-20 07:23:22.434490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420
00:27:18.052 qpair failed and we were unable to recover it.
00:27:18.052 [2024-11-20 07:23:22.434750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.052 [2024-11-20 07:23:22.434783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420
00:27:18.052 qpair failed and we were unable to recover it.
00:27:18.052 [2024-11-20 07:23:22.434969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.052 [2024-11-20 07:23:22.435002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420
00:27:18.052 qpair failed and we were unable to recover it.
00:27:18.052 [2024-11-20 07:23:22.435204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.052 [2024-11-20 07:23:22.435237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420
00:27:18.052 qpair failed and we were unable to recover it.
00:27:18.052 [2024-11-20 07:23:22.435492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.052 [2024-11-20 07:23:22.435525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420
00:27:18.052 qpair failed and we were unable to recover it.
00:27:18.052 [2024-11-20 07:23:22.435657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.052 [2024-11-20 07:23:22.435690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420
00:27:18.052 qpair failed and we were unable to recover it.
00:27:18.052 [2024-11-20 07:23:22.435943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.052 [2024-11-20 07:23:22.435985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420
00:27:18.052 qpair failed and we were unable to recover it.
00:27:18.052 [2024-11-20 07:23:22.436135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.052 [2024-11-20 07:23:22.436168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:18.052 qpair failed and we were unable to recover it. 00:27:18.052 [2024-11-20 07:23:22.436406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.052 [2024-11-20 07:23:22.436438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:18.052 qpair failed and we were unable to recover it. 00:27:18.052 [2024-11-20 07:23:22.436665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.052 [2024-11-20 07:23:22.436697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:18.052 qpair failed and we were unable to recover it. 00:27:18.052 [2024-11-20 07:23:22.436956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.052 [2024-11-20 07:23:22.436990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:18.052 qpair failed and we were unable to recover it. 00:27:18.052 [2024-11-20 07:23:22.437186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.052 [2024-11-20 07:23:22.437218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:18.052 qpair failed and we were unable to recover it. 
00:27:18.052 [2024-11-20 07:23:22.437488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.052 [2024-11-20 07:23:22.437520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:18.052 qpair failed and we were unable to recover it. 00:27:18.052 [2024-11-20 07:23:22.437716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.052 [2024-11-20 07:23:22.437748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:18.052 qpair failed and we were unable to recover it. 00:27:18.052 [2024-11-20 07:23:22.438047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.052 [2024-11-20 07:23:22.438081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:18.052 qpair failed and we were unable to recover it. 00:27:18.052 [2024-11-20 07:23:22.438276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.052 [2024-11-20 07:23:22.438309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:18.052 qpair failed and we were unable to recover it. 00:27:18.052 [2024-11-20 07:23:22.438502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.052 [2024-11-20 07:23:22.438534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:18.052 qpair failed and we were unable to recover it. 
00:27:18.052 [2024-11-20 07:23:22.438730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.052 [2024-11-20 07:23:22.438762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:18.052 qpair failed and we were unable to recover it. 00:27:18.052 [2024-11-20 07:23:22.439031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.052 [2024-11-20 07:23:22.439065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:18.052 qpair failed and we were unable to recover it. 00:27:18.052 [2024-11-20 07:23:22.439194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.052 [2024-11-20 07:23:22.439226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:18.052 qpair failed and we were unable to recover it. 00:27:18.052 [2024-11-20 07:23:22.439490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.052 [2024-11-20 07:23:22.439522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:18.052 qpair failed and we were unable to recover it. 00:27:18.052 [2024-11-20 07:23:22.439653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.052 [2024-11-20 07:23:22.439685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:18.052 qpair failed and we were unable to recover it. 
00:27:18.052 [2024-11-20 07:23:22.439962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.052 [2024-11-20 07:23:22.439996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:18.052 qpair failed and we were unable to recover it. 00:27:18.052 [2024-11-20 07:23:22.440169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.052 [2024-11-20 07:23:22.440202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:18.052 qpair failed and we were unable to recover it. 00:27:18.053 [2024-11-20 07:23:22.440382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.053 [2024-11-20 07:23:22.440414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:18.053 qpair failed and we were unable to recover it. 00:27:18.053 [2024-11-20 07:23:22.440697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.053 [2024-11-20 07:23:22.440730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:18.053 qpair failed and we were unable to recover it. 00:27:18.053 [2024-11-20 07:23:22.440998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.053 [2024-11-20 07:23:22.441032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:18.053 qpair failed and we were unable to recover it. 
00:27:18.053 [2024-11-20 07:23:22.441159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.053 [2024-11-20 07:23:22.441192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:18.053 qpair failed and we were unable to recover it. 00:27:18.053 [2024-11-20 07:23:22.441321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.053 [2024-11-20 07:23:22.441355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:18.053 qpair failed and we were unable to recover it. 00:27:18.053 [2024-11-20 07:23:22.441542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.053 [2024-11-20 07:23:22.441575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:18.053 qpair failed and we were unable to recover it. 00:27:18.053 [2024-11-20 07:23:22.441836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.053 [2024-11-20 07:23:22.441869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:18.053 qpair failed and we were unable to recover it. 00:27:18.053 [2024-11-20 07:23:22.441993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.053 [2024-11-20 07:23:22.442026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:18.053 qpair failed and we were unable to recover it. 
00:27:18.053 [2024-11-20 07:23:22.442161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.053 [2024-11-20 07:23:22.442199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:18.053 qpair failed and we were unable to recover it. 00:27:18.053 [2024-11-20 07:23:22.442316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.053 [2024-11-20 07:23:22.442350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:18.053 qpair failed and we were unable to recover it. 00:27:18.053 [2024-11-20 07:23:22.442463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.053 [2024-11-20 07:23:22.442495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:18.053 qpair failed and we were unable to recover it. 00:27:18.053 [2024-11-20 07:23:22.442687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.053 [2024-11-20 07:23:22.442720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:18.053 qpair failed and we were unable to recover it. 00:27:18.053 [2024-11-20 07:23:22.442985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.053 [2024-11-20 07:23:22.443019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:18.053 qpair failed and we were unable to recover it. 
00:27:18.053 [2024-11-20 07:23:22.443146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.053 [2024-11-20 07:23:22.443178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:18.053 qpair failed and we were unable to recover it. 00:27:18.053 [2024-11-20 07:23:22.443303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.053 [2024-11-20 07:23:22.443335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:18.053 qpair failed and we were unable to recover it. 00:27:18.053 [2024-11-20 07:23:22.443538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.053 [2024-11-20 07:23:22.443570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:18.053 qpair failed and we were unable to recover it. 00:27:18.053 [2024-11-20 07:23:22.443754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.053 [2024-11-20 07:23:22.443786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:18.053 qpair failed and we were unable to recover it. 00:27:18.053 [2024-11-20 07:23:22.444075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.053 [2024-11-20 07:23:22.444108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:18.053 qpair failed and we were unable to recover it. 
00:27:18.053 [2024-11-20 07:23:22.444234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.053 [2024-11-20 07:23:22.444266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:18.053 qpair failed and we were unable to recover it. 00:27:18.053 [2024-11-20 07:23:22.444530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.053 [2024-11-20 07:23:22.444562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:18.053 qpair failed and we were unable to recover it. 00:27:18.053 [2024-11-20 07:23:22.444672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.053 [2024-11-20 07:23:22.444704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:18.053 qpair failed and we were unable to recover it. 00:27:18.053 [2024-11-20 07:23:22.444967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.053 [2024-11-20 07:23:22.445001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:18.053 qpair failed and we were unable to recover it. 00:27:18.053 [2024-11-20 07:23:22.445193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.053 [2024-11-20 07:23:22.445226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:18.053 qpair failed and we were unable to recover it. 
00:27:18.053 07:23:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:27:18.053 07:23:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:27:18.053 07:23:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:18.054 07:23:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:18.054 [2024-11-20 07:23:22.451134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.054 [2024-11-20 07:23:22.451175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101bba0 with addr=10.0.0.2, port=4420
00:27:18.054 qpair failed and we were unable to recover it.
00:27:18.054 [2024-11-20 07:23:22.454202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.054 [2024-11-20 07:23:22.454244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:18.054 qpair failed and we were unable to recover it.
00:27:18.055 [2024-11-20 07:23:22.458961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.055 [2024-11-20 07:23:22.458998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:18.055 qpair failed and we were unable to recover it.
00:27:18.055 [2024-11-20 07:23:22.459623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.055 [2024-11-20 07:23:22.459655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:18.055 qpair failed and we were unable to recover it.
00:27:18.055 [2024-11-20 07:23:22.459911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.055 [2024-11-20 07:23:22.459943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:18.055 qpair failed and we were unable to recover it.
00:27:18.055 [2024-11-20 07:23:22.460098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.055 [2024-11-20 07:23:22.460129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:18.055 qpair failed and we were unable to recover it.
00:27:18.055 [2024-11-20 07:23:22.460389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.055 [2024-11-20 07:23:22.460421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:18.055 qpair failed and we were unable to recover it.
00:27:18.055 [2024-11-20 07:23:22.460665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.055 [2024-11-20 07:23:22.460697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:18.055 qpair failed and we were unable to recover it.
00:27:18.055 [2024-11-20 07:23:22.460970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.055 [2024-11-20 07:23:22.461005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:18.055 qpair failed and we were unable to recover it.
00:27:18.055 [2024-11-20 07:23:22.461198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.055 [2024-11-20 07:23:22.461230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:18.055 qpair failed and we were unable to recover it.
00:27:18.055 [2024-11-20 07:23:22.461412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.055 [2024-11-20 07:23:22.461443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:18.055 qpair failed and we were unable to recover it.
00:27:18.055 [2024-11-20 07:23:22.461690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.055 [2024-11-20 07:23:22.461722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:18.055 qpair failed and we were unable to recover it.
00:27:18.055 [2024-11-20 07:23:22.461989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.055 [2024-11-20 07:23:22.462021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:18.055 qpair failed and we were unable to recover it.
00:27:18.055 [2024-11-20 07:23:22.462231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.055 [2024-11-20 07:23:22.462270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:18.055 qpair failed and we were unable to recover it.
00:27:18.055 [2024-11-20 07:23:22.462401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.055 [2024-11-20 07:23:22.462433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:18.055 qpair failed and we were unable to recover it.
00:27:18.055 [2024-11-20 07:23:22.462615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.055 [2024-11-20 07:23:22.462647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:18.055 qpair failed and we were unable to recover it.
00:27:18.055 [2024-11-20 07:23:22.462819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.056 [2024-11-20 07:23:22.462850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:18.056 qpair failed and we were unable to recover it.
00:27:18.056 [2024-11-20 07:23:22.463051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.056 [2024-11-20 07:23:22.463085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:18.056 qpair failed and we were unable to recover it.
00:27:18.056 [2024-11-20 07:23:22.463346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.056 [2024-11-20 07:23:22.463377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:18.056 qpair failed and we were unable to recover it.
00:27:18.056 [2024-11-20 07:23:22.463582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.056 [2024-11-20 07:23:22.463614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:18.056 qpair failed and we were unable to recover it.
00:27:18.056 [2024-11-20 07:23:22.463757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.056 [2024-11-20 07:23:22.463788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:18.056 qpair failed and we were unable to recover it.
00:27:18.056 [2024-11-20 07:23:22.463965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.056 [2024-11-20 07:23:22.463997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:18.056 qpair failed and we were unable to recover it.
00:27:18.056 [2024-11-20 07:23:22.464118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.056 [2024-11-20 07:23:22.464150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:18.056 qpair failed and we were unable to recover it.
00:27:18.056 [2024-11-20 07:23:22.464286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.056 [2024-11-20 07:23:22.464318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:18.056 qpair failed and we were unable to recover it.
00:27:18.056 [2024-11-20 07:23:22.464571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.056 [2024-11-20 07:23:22.464603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:18.056 qpair failed and we were unable to recover it.
00:27:18.056 [2024-11-20 07:23:22.464869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.056 [2024-11-20 07:23:22.464900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:18.056 qpair failed and we were unable to recover it.
00:27:18.056 [2024-11-20 07:23:22.465101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.056 [2024-11-20 07:23:22.465133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:18.056 qpair failed and we were unable to recover it.
00:27:18.056 [2024-11-20 07:23:22.465264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.056 [2024-11-20 07:23:22.465296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:18.056 qpair failed and we were unable to recover it.
00:27:18.056 [2024-11-20 07:23:22.465496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.056 [2024-11-20 07:23:22.465528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:18.056 qpair failed and we were unable to recover it.
00:27:18.056 [2024-11-20 07:23:22.465717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.056 [2024-11-20 07:23:22.465749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:18.056 qpair failed and we were unable to recover it.
00:27:18.056 [2024-11-20 07:23:22.465918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.056 [2024-11-20 07:23:22.465958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:18.056 qpair failed and we were unable to recover it.
00:27:18.056 [2024-11-20 07:23:22.466198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.056 [2024-11-20 07:23:22.466229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:18.056 qpair failed and we were unable to recover it.
00:27:18.056 [2024-11-20 07:23:22.466441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.056 [2024-11-20 07:23:22.466472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:18.056 qpair failed and we were unable to recover it.
00:27:18.056 [2024-11-20 07:23:22.466741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.056 [2024-11-20 07:23:22.466773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:18.056 qpair failed and we were unable to recover it.
00:27:18.056 [2024-11-20 07:23:22.467019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.056 [2024-11-20 07:23:22.467051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:18.056 qpair failed and we were unable to recover it.
00:27:18.056 [2024-11-20 07:23:22.467229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.056 [2024-11-20 07:23:22.467261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:18.056 qpair failed and we were unable to recover it.
00:27:18.056 [2024-11-20 07:23:22.467449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.056 [2024-11-20 07:23:22.467480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:18.056 qpair failed and we were unable to recover it.
00:27:18.056 [2024-11-20 07:23:22.467668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.056 [2024-11-20 07:23:22.467700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:18.056 qpair failed and we were unable to recover it.
00:27:18.056 [2024-11-20 07:23:22.467906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.056 [2024-11-20 07:23:22.467937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:18.056 qpair failed and we were unable to recover it.
00:27:18.056 [2024-11-20 07:23:22.468136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.056 [2024-11-20 07:23:22.468168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:18.056 qpair failed and we were unable to recover it.
00:27:18.056 [2024-11-20 07:23:22.468382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.056 [2024-11-20 07:23:22.468422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:18.056 qpair failed and we were unable to recover it.
00:27:18.056 [2024-11-20 07:23:22.468696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.056 [2024-11-20 07:23:22.468728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:18.056 qpair failed and we were unable to recover it.
00:27:18.056 [2024-11-20 07:23:22.468990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.056 [2024-11-20 07:23:22.469023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:18.056 qpair failed and we were unable to recover it.
00:27:18.056 [2024-11-20 07:23:22.469271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.056 [2024-11-20 07:23:22.469303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:18.056 qpair failed and we were unable to recover it.
00:27:18.056 [2024-11-20 07:23:22.469442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.056 [2024-11-20 07:23:22.469474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:18.056 qpair failed and we were unable to recover it.
00:27:18.056 [2024-11-20 07:23:22.469747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.057 [2024-11-20 07:23:22.469779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:18.057 qpair failed and we were unable to recover it.
00:27:18.057 [2024-11-20 07:23:22.469965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.057 [2024-11-20 07:23:22.469997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:18.057 qpair failed and we were unable to recover it.
00:27:18.057 [2024-11-20 07:23:22.470180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.057 [2024-11-20 07:23:22.470212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:18.057 qpair failed and we were unable to recover it.
00:27:18.057 [2024-11-20 07:23:22.470400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.057 [2024-11-20 07:23:22.470432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:18.057 qpair failed and we were unable to recover it.
00:27:18.057 [2024-11-20 07:23:22.470684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.057 [2024-11-20 07:23:22.470717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:18.057 qpair failed and we were unable to recover it.
00:27:18.057 [2024-11-20 07:23:22.470900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.057 [2024-11-20 07:23:22.470933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:18.057 qpair failed and we were unable to recover it.
00:27:18.057 [2024-11-20 07:23:22.471132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.057 [2024-11-20 07:23:22.471165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:18.057 qpair failed and we were unable to recover it.
00:27:18.057 [2024-11-20 07:23:22.471349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.057 [2024-11-20 07:23:22.471381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:18.057 qpair failed and we were unable to recover it.
00:27:18.057 [2024-11-20 07:23:22.471658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.057 [2024-11-20 07:23:22.471699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:18.057 qpair failed and we were unable to recover it.
00:27:18.057 [2024-11-20 07:23:22.471959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.057 [2024-11-20 07:23:22.471993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:18.057 qpair failed and we were unable to recover it.
00:27:18.057 [2024-11-20 07:23:22.472134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.057 [2024-11-20 07:23:22.472166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:18.057 qpair failed and we were unable to recover it.
00:27:18.057 [2024-11-20 07:23:22.472435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.057 [2024-11-20 07:23:22.472468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:18.057 qpair failed and we were unable to recover it.
00:27:18.057 [2024-11-20 07:23:22.472654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.057 [2024-11-20 07:23:22.472688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:18.057 qpair failed and we were unable to recover it.
00:27:18.057 [2024-11-20 07:23:22.472959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.057 [2024-11-20 07:23:22.472995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:18.057 qpair failed and we were unable to recover it.
00:27:18.057 [2024-11-20 07:23:22.473239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.057 [2024-11-20 07:23:22.473272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:18.057 qpair failed and we were unable to recover it.
00:27:18.057 [2024-11-20 07:23:22.473469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.057 [2024-11-20 07:23:22.473502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:18.057 qpair failed and we were unable to recover it.
00:27:18.057 [2024-11-20 07:23:22.473695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.057 [2024-11-20 07:23:22.473728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:18.057 qpair failed and we were unable to recover it.
00:27:18.057 [2024-11-20 07:23:22.473913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.057 [2024-11-20 07:23:22.473955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:18.057 qpair failed and we were unable to recover it.
00:27:18.057 [2024-11-20 07:23:22.474138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.057 [2024-11-20 07:23:22.474171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:18.057 qpair failed and we were unable to recover it.
00:27:18.057 [2024-11-20 07:23:22.474361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.057 [2024-11-20 07:23:22.474394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:18.057 qpair failed and we were unable to recover it.
00:27:18.057 [2024-11-20 07:23:22.474633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.057 [2024-11-20 07:23:22.474669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:18.057 qpair failed and we were unable to recover it.
00:27:18.057 [2024-11-20 07:23:22.474937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.057 [2024-11-20 07:23:22.474981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:18.057 qpair failed and we were unable to recover it.
00:27:18.057 [2024-11-20 07:23:22.475257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.057 [2024-11-20 07:23:22.475291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:18.057 qpair failed and we were unable to recover it.
00:27:18.057 [2024-11-20 07:23:22.475477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.057 [2024-11-20 07:23:22.475510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:18.057 qpair failed and we were unable to recover it.
00:27:18.057 [2024-11-20 07:23:22.475773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.057 [2024-11-20 07:23:22.475808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:18.057 qpair failed and we were unable to recover it.
00:27:18.057 [2024-11-20 07:23:22.476077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.057 [2024-11-20 07:23:22.476113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:18.058 qpair failed and we were unable to recover it.
00:27:18.058 [2024-11-20 07:23:22.476379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.058 [2024-11-20 07:23:22.476413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:18.058 qpair failed and we were unable to recover it.
00:27:18.058 [2024-11-20 07:23:22.476701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.058 [2024-11-20 07:23:22.476734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:18.058 qpair failed and we were unable to recover it.
00:27:18.058 [2024-11-20 07:23:22.476865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.058 [2024-11-20 07:23:22.476897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:18.058 qpair failed and we were unable to recover it.
00:27:18.058 [2024-11-20 07:23:22.477096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.058 [2024-11-20 07:23:22.477130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:18.058 qpair failed and we were unable to recover it.
00:27:18.058 [2024-11-20 07:23:22.477399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.058 [2024-11-20 07:23:22.477432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:18.058 qpair failed and we were unable to recover it.
00:27:18.058 [2024-11-20 07:23:22.477618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.058 [2024-11-20 07:23:22.477650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:18.058 qpair failed and we were unable to recover it.
00:27:18.058 [2024-11-20 07:23:22.477912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.058 [2024-11-20 07:23:22.477944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:18.058 qpair failed and we were unable to recover it.
00:27:18.058 [2024-11-20 07:23:22.478161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.058 [2024-11-20 07:23:22.478194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:18.058 qpair failed and we were unable to recover it.
00:27:18.058 [2024-11-20 07:23:22.478379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.058 [2024-11-20 07:23:22.478412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e44000b90 with addr=10.0.0.2, port=4420
00:27:18.058 qpair failed and we were unable to recover it.
00:27:18.058 [2024-11-20 07:23:22.478552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.058 [2024-11-20 07:23:22.478598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:18.058 qpair failed and we were unable to recover it.
00:27:18.058 [2024-11-20 07:23:22.478840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.058 [2024-11-20 07:23:22.478872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:18.058 qpair failed and we were unable to recover it.
00:27:18.058 [2024-11-20 07:23:22.479114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.058 [2024-11-20 07:23:22.479148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:18.058 qpair failed and we were unable to recover it.
00:27:18.058 [2024-11-20 07:23:22.479404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.058 [2024-11-20 07:23:22.479437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:18.058 qpair failed and we were unable to recover it.
00:27:18.058 [2024-11-20 07:23:22.479615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.058 [2024-11-20 07:23:22.479646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:18.058 qpair failed and we were unable to recover it.
00:27:18.058 [2024-11-20 07:23:22.479884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.058 [2024-11-20 07:23:22.479915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:18.058 qpair failed and we were unable to recover it.
00:27:18.058 [2024-11-20 07:23:22.480212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.058 [2024-11-20 07:23:22.480248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:18.058 qpair failed and we were unable to recover it.
00:27:18.058 [2024-11-20 07:23:22.480510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.058 [2024-11-20 07:23:22.480542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:18.058 qpair failed and we were unable to recover it.
00:27:18.058 Malloc0
00:27:18.058 [2024-11-20 07:23:22.480786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.058 [2024-11-20 07:23:22.480817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:18.058 qpair failed and we were unable to recover it.
00:27:18.058 [2024-11-20 07:23:22.481078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.058 [2024-11-20 07:23:22.481112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:18.058 qpair failed and we were unable to recover it.
00:27:18.058 [2024-11-20 07:23:22.481296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.058 07:23:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:18.058 [2024-11-20 07:23:22.481328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:18.058 qpair failed and we were unable to recover it.
00:27:18.058 [2024-11-20 07:23:22.481511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.058 [2024-11-20 07:23:22.481543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:18.058 qpair failed and we were unable to recover it.
00:27:18.058 07:23:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:27:18.058 [2024-11-20 07:23:22.481726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.058 [2024-11-20 07:23:22.481758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:18.058 qpair failed and we were unable to recover it.
00:27:18.058 [2024-11-20 07:23:22.481959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.058 [2024-11-20 07:23:22.481993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:18.058 qpair failed and we were unable to recover it.
00:27:18.058 07:23:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:18.058 [2024-11-20 07:23:22.482257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.058 [2024-11-20 07:23:22.482290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:18.058 qpair failed and we were unable to recover it.
00:27:18.058 07:23:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:18.058 [2024-11-20 07:23:22.482557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:18.058 [2024-11-20 07:23:22.482588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420
00:27:18.058 qpair failed and we were unable to recover it.
00:27:18.058 [2024-11-20 07:23:22.482854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.059 [2024-11-20 07:23:22.482885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:18.059 qpair failed and we were unable to recover it. 00:27:18.059 [2024-11-20 07:23:22.483158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.059 [2024-11-20 07:23:22.483191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:18.059 qpair failed and we were unable to recover it. 00:27:18.059 [2024-11-20 07:23:22.483364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.059 [2024-11-20 07:23:22.483396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:18.059 qpair failed and we were unable to recover it. 00:27:18.059 [2024-11-20 07:23:22.483569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.059 [2024-11-20 07:23:22.483600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:18.059 qpair failed and we were unable to recover it. 00:27:18.059 [2024-11-20 07:23:22.483887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.059 [2024-11-20 07:23:22.483919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:18.059 qpair failed and we were unable to recover it. 
00:27:18.059 [2024-11-20 07:23:22.484125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.059 [2024-11-20 07:23:22.484167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:18.059 qpair failed and we were unable to recover it. 00:27:18.059 [2024-11-20 07:23:22.484438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.059 [2024-11-20 07:23:22.484471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:18.059 qpair failed and we were unable to recover it. 00:27:18.059 [2024-11-20 07:23:22.484665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.059 [2024-11-20 07:23:22.484696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:18.059 qpair failed and we were unable to recover it. 00:27:18.059 [2024-11-20 07:23:22.484962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.059 [2024-11-20 07:23:22.484995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e3c000b90 with addr=10.0.0.2, port=4420 00:27:18.059 qpair failed and we were unable to recover it. 00:27:18.059 [2024-11-20 07:23:22.485207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.059 [2024-11-20 07:23:22.485241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:18.059 qpair failed and we were unable to recover it. 
00:27:18.059 [2024-11-20 07:23:22.485535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.059 [2024-11-20 07:23:22.485568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:18.059 qpair failed and we were unable to recover it. 00:27:18.059 [2024-11-20 07:23:22.485783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.059 [2024-11-20 07:23:22.485814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:18.059 qpair failed and we were unable to recover it. 00:27:18.059 [2024-11-20 07:23:22.486080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.059 [2024-11-20 07:23:22.486113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:18.059 qpair failed and we were unable to recover it. 00:27:18.059 [2024-11-20 07:23:22.486332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.059 [2024-11-20 07:23:22.486369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:18.059 qpair failed and we were unable to recover it. 00:27:18.059 [2024-11-20 07:23:22.486631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.059 [2024-11-20 07:23:22.486662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:18.059 qpair failed and we were unable to recover it. 
00:27:18.059 [2024-11-20 07:23:22.486942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.059 [2024-11-20 07:23:22.486981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:18.059 qpair failed and we were unable to recover it. 00:27:18.059 [2024-11-20 07:23:22.487193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.059 [2024-11-20 07:23:22.487225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:18.059 qpair failed and we were unable to recover it. 00:27:18.059 [2024-11-20 07:23:22.487483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.059 [2024-11-20 07:23:22.487514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:18.059 qpair failed and we were unable to recover it. 00:27:18.059 [2024-11-20 07:23:22.487721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.059 [2024-11-20 07:23:22.487752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:18.059 qpair failed and we were unable to recover it. 00:27:18.059 [2024-11-20 07:23:22.488006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.059 [2024-11-20 07:23:22.488039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:18.059 qpair failed and we were unable to recover it. 
00:27:18.059 [2024-11-20 07:23:22.488136] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:18.059 [2024-11-20 07:23:22.488223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.059 [2024-11-20 07:23:22.488254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:18.059 qpair failed and we were unable to recover it. 00:27:18.059 [2024-11-20 07:23:22.488438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.059 [2024-11-20 07:23:22.488469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0e38000b90 with addr=10.0.0.2, port=4420 00:27:18.059 qpair failed and we were unable to recover it. 00:27:18.059 A controller has encountered a failure and is being reset. 00:27:18.059 [2024-11-20 07:23:22.488848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.059 [2024-11-20 07:23:22.488903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1029af0 with addr=10.0.0.2, port=4420 00:27:18.059 [2024-11-20 07:23:22.488930] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1029af0 is same with the state(6) to be set 00:27:18.059 [2024-11-20 07:23:22.488981] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1029af0 (9): Bad file descriptor 00:27:18.059 [2024-11-20 07:23:22.489014] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:27:18.059 [2024-11-20 07:23:22.489036] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:27:18.059 [2024-11-20 07:23:22.489066] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:27:18.059 Unable to reset the controller. 
00:27:18.059 07:23:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.059 07:23:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:18.059 07:23:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.059 07:23:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:18.059 07:23:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.060 07:23:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:18.060 07:23:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.060 07:23:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:18.060 07:23:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.060 07:23:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:18.060 07:23:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.060 07:23:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:18.060 [2024-11-20 07:23:22.516376] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:27:18.060 07:23:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.060 07:23:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:18.060 07:23:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.060 07:23:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:18.060 07:23:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.060 07:23:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 1351122 00:27:18.993 Controller properly reset. 00:27:24.301 Initializing NVMe Controllers 00:27:24.301 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:24.301 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:24.301 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:27:24.301 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:27:24.301 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:27:24.301 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:27:24.301 Initialization complete. Launching workers. 
00:27:24.301 Starting thread on core 1 00:27:24.301 Starting thread on core 2 00:27:24.301 Starting thread on core 3 00:27:24.301 Starting thread on core 0 00:27:24.301 07:23:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:27:24.301 00:27:24.301 real 0m10.651s 00:27:24.301 user 0m34.794s 00:27:24.301 sys 0m5.992s 00:27:24.301 07:23:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:27:24.301 07:23:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:24.301 ************************************ 00:27:24.301 END TEST nvmf_target_disconnect_tc2 00:27:24.301 ************************************ 00:27:24.301 07:23:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:27:24.301 07:23:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:27:24.301 07:23:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:27:24.301 07:23:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:24.301 07:23:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync 00:27:24.301 07:23:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:24.301 07:23:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e 00:27:24.301 07:23:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:24.301 07:23:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:24.301 rmmod nvme_tcp 00:27:24.301 rmmod nvme_fabrics 00:27:24.301 rmmod nvme_keyring 00:27:24.301 07:23:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r 
nvme-fabrics 00:27:24.301 07:23:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e 00:27:24.301 07:23:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0 00:27:24.301 07:23:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@517 -- # '[' -n 1351745 ']' 00:27:24.301 07:23:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # killprocess 1351745 00:27:24.301 07:23:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # '[' -z 1351745 ']' 00:27:24.301 07:23:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # kill -0 1351745 00:27:24.301 07:23:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@957 -- # uname 00:27:24.301 07:23:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:27:24.301 07:23:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1351745 00:27:24.301 07:23:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # process_name=reactor_4 00:27:24.301 07:23:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@962 -- # '[' reactor_4 = sudo ']' 00:27:24.301 07:23:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1351745' 00:27:24.301 killing process with pid 1351745 00:27:24.301 07:23:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@971 -- # kill 1351745 00:27:24.301 07:23:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@976 -- # wait 1351745 00:27:24.301 07:23:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:24.301 07:23:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:24.301 07:23:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:24.301 07:23:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr 00:27:24.301 07:23:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:27:24.301 07:23:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:24.301 07:23:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:27:24.301 07:23:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:24.301 07:23:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:24.301 07:23:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:24.301 07:23:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:24.301 07:23:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:26.842 07:23:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:26.842 00:27:26.842 real 0m19.442s 00:27:26.842 user 1m1.718s 00:27:26.842 sys 0m11.119s 00:27:26.842 07:23:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1128 -- # xtrace_disable 00:27:26.842 07:23:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:27:26.842 ************************************ 00:27:26.842 END TEST nvmf_target_disconnect 00:27:26.842 ************************************ 00:27:26.842 07:23:30 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:27:26.842 00:27:26.842 real 5m52.416s 00:27:26.842 user 10m49.478s 00:27:26.842 sys 1m59.954s 00:27:26.842 07:23:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1128 -- # xtrace_disable 00:27:26.842 07:23:30 
nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.842 ************************************ 00:27:26.842 END TEST nvmf_host 00:27:26.842 ************************************ 00:27:26.842 07:23:30 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:27:26.842 07:23:30 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]] 00:27:26.842 07:23:30 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:27:26.842 07:23:30 nvmf_tcp -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:27:26.842 07:23:30 nvmf_tcp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:27:26.842 07:23:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:26.842 ************************************ 00:27:26.842 START TEST nvmf_target_core_interrupt_mode 00:27:26.842 ************************************ 00:27:26.842 07:23:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:27:26.842 * Looking for test storage... 
00:27:26.842 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:27:26.842 07:23:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:27:26.842 07:23:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1691 -- # lcov --version 00:27:26.842 07:23:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:27:26.842 07:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:27:26.842 07:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:26.842 07:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:26.842 07:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:26.842 07:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:27:26.842 07:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:27:26.842 07:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:27:26.842 07:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:27:26.842 07:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:27:26.842 07:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:27:26.842 07:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:27:26.842 07:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:26.842 07:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:27:26.842 07:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:27:26.842 07:23:31 
nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:26.842 07:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:26.842 07:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:27:26.842 07:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:27:26.842 07:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:26.842 07:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:27:26.842 07:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:27:26.842 07:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:27:26.842 07:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:27:26.842 07:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:26.842 07:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:27:26.842 07:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:27:26.842 07:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:26.842 07:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:26.842 07:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:27:26.842 07:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:26.842 07:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:27:26.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:26.842 --rc 
genhtml_branch_coverage=1 00:27:26.842 --rc genhtml_function_coverage=1 00:27:26.842 --rc genhtml_legend=1 00:27:26.842 --rc geninfo_all_blocks=1 00:27:26.842 --rc geninfo_unexecuted_blocks=1 00:27:26.842 00:27:26.842 ' 00:27:26.842 07:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:27:26.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:26.842 --rc genhtml_branch_coverage=1 00:27:26.842 --rc genhtml_function_coverage=1 00:27:26.842 --rc genhtml_legend=1 00:27:26.842 --rc geninfo_all_blocks=1 00:27:26.842 --rc geninfo_unexecuted_blocks=1 00:27:26.842 00:27:26.842 ' 00:27:26.842 07:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:27:26.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:26.842 --rc genhtml_branch_coverage=1 00:27:26.842 --rc genhtml_function_coverage=1 00:27:26.842 --rc genhtml_legend=1 00:27:26.842 --rc geninfo_all_blocks=1 00:27:26.842 --rc geninfo_unexecuted_blocks=1 00:27:26.842 00:27:26.842 ' 00:27:26.842 07:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:27:26.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:26.842 --rc genhtml_branch_coverage=1 00:27:26.842 --rc genhtml_function_coverage=1 00:27:26.842 --rc genhtml_legend=1 00:27:26.842 --rc geninfo_all_blocks=1 00:27:26.842 --rc geninfo_unexecuted_blocks=1 00:27:26.842 00:27:26.842 ' 00:27:26.842 07:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:27:26.842 07:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:27:26.842 07:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:26.842 07:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:27:26.842 07:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:26.842 07:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:26.842 07:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:26.842 07:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:26.842 07:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:26.842 07:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:26.842 07:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:26.842 07:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:26.842 07:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:26.842 07:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:26.842 07:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:27:26.842 07:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:27:26.843 07:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:26.843 07:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:26.843 
07:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:26.843 07:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:26.843 07:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:26.843 07:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:27:26.843 07:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:26.843 07:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:26.843 07:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:26.843 07:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:26.843 07:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:26.843 07:23:31 
nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:26.843 07:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:27:26.843 07:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:26.843 07:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:27:26.843 07:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:26.843 07:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:26.843 07:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:26.843 07:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:26.843 07:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:26.843 07:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:27:26.843 
07:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:27:26.843 07:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:26.843 07:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:26.843 07:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:26.843 07:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:27:26.843 07:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:27:26.843 07:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:27:26.843 07:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:27:26.843 07:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:27:26.843 07:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:27:26.843 07:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:27:26.843 ************************************ 00:27:26.843 START TEST nvmf_abort 00:27:26.843 ************************************ 00:27:26.843 07:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:27:26.843 * Looking for test storage... 
00:27:26.843 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:26.843 07:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:27:26.843 07:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1691 -- # lcov --version 00:27:26.843 07:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:27:26.843 07:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:27:26.843 07:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:26.843 07:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:26.843 07:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:26.843 07:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:27:26.843 07:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:27:26.843 07:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:27:26.843 07:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:27:26.843 07:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:27:26.843 07:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:27:26.843 07:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:27:26.843 07:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:26.843 07:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
scripts/common.sh@344 -- # case "$op" in 00:27:26.843 07:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:27:26.843 07:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:26.843 07:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:26.843 07:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:27:26.843 07:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:27:26.843 07:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:26.843 07:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:27:26.843 07:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:27:26.843 07:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:27:26.843 07:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:27:26.843 07:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:26.843 07:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:27:26.843 07:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:27:26.843 07:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:26.843 07:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:26.843 07:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:27:26.843 07:23:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:26.843 07:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:27:26.843 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:26.843 --rc genhtml_branch_coverage=1 00:27:26.843 --rc genhtml_function_coverage=1 00:27:26.843 --rc genhtml_legend=1 00:27:26.843 --rc geninfo_all_blocks=1 00:27:26.843 --rc geninfo_unexecuted_blocks=1 00:27:26.843 00:27:26.843 ' 00:27:26.843 07:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:27:26.843 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:26.843 --rc genhtml_branch_coverage=1 00:27:26.844 --rc genhtml_function_coverage=1 00:27:26.844 --rc genhtml_legend=1 00:27:26.844 --rc geninfo_all_blocks=1 00:27:26.844 --rc geninfo_unexecuted_blocks=1 00:27:26.844 00:27:26.844 ' 00:27:26.844 07:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:27:26.844 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:26.844 --rc genhtml_branch_coverage=1 00:27:26.844 --rc genhtml_function_coverage=1 00:27:26.844 --rc genhtml_legend=1 00:27:26.844 --rc geninfo_all_blocks=1 00:27:26.844 --rc geninfo_unexecuted_blocks=1 00:27:26.844 00:27:26.844 ' 00:27:26.844 07:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:27:26.844 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:26.844 --rc genhtml_branch_coverage=1 00:27:26.844 --rc genhtml_function_coverage=1 00:27:26.844 --rc genhtml_legend=1 00:27:26.844 --rc geninfo_all_blocks=1 00:27:26.844 --rc geninfo_unexecuted_blocks=1 00:27:26.844 00:27:26.844 ' 00:27:26.844 07:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:26.844 07:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:27:26.844 07:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:26.844 07:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:26.844 07:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:26.844 07:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:26.844 07:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:26.844 07:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:26.844 07:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:26.844 07:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:26.844 07:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:26.844 07:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:26.844 07:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:27:26.844 07:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:27:26.844 07:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:26.844 07:23:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:26.844 07:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:26.844 07:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:26.844 07:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:26.844 07:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:27:26.844 07:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:26.844 07:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:26.844 07:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:26.844 07:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:26.844 07:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:26.844 07:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:26.844 07:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:27:26.844 07:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:26.844 07:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:27:26.844 07:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:26.844 07:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:26.844 07:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:26.844 07:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:26.844 07:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:26.844 07:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:27:26.844 07:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:27:26.844 07:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:26.844 07:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:26.844 07:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:26.844 07:23:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:26.844 07:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:27:26.844 07:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:27:26.844 07:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:26.844 07:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:26.844 07:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:26.844 07:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:26.844 07:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:26.844 07:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:26.844 07:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:26.844 07:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:26.844 07:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:26.844 07:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:26.844 07:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:27:26.844 07:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:33.414 07:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 
00:27:33.414 07:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:27:33.414 07:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:33.414 07:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:33.414 07:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:33.414 07:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:33.414 07:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:33.414 07:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:27:33.415 07:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:33.415 07:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:27:33.415 07:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:27:33.415 07:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:27:33.415 07:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:27:33.415 07:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:27:33.415 07:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:27:33.415 07:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:33.415 07:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:33.415 07:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:33.415 07:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:33.415 07:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:33.415 07:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:33.415 07:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:33.415 07:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:33.415 07:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:33.415 07:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:33.415 07:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:33.415 07:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:33.415 07:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:33.415 07:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:33.415 07:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:33.415 07:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:33.415 07:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:33.415 07:23:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:33.415 07:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:33.415 07:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:27:33.415 Found 0000:86:00.0 (0x8086 - 0x159b) 00:27:33.415 07:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:33.415 07:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:33.415 07:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:33.415 07:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:33.415 07:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:33.415 07:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:33.415 07:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:27:33.415 Found 0000:86:00.1 (0x8086 - 0x159b) 00:27:33.415 07:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:33.415 07:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:33.415 07:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:33.415 07:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:33.415 07:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:33.415 
07:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:33.415 07:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:33.415 07:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:33.415 07:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:33.415 07:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:33.415 07:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:33.415 07:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:33.415 07:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:33.415 07:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:33.415 07:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:33.415 07:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:27:33.415 Found net devices under 0000:86:00.0: cvl_0_0 00:27:33.415 07:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:33.415 07:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:33.415 07:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:33.415 07:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:27:33.415 07:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:33.415 07:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:33.415 07:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:33.415 07:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:33.415 07:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:27:33.415 Found net devices under 0000:86:00.1: cvl_0_1 00:27:33.415 07:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:33.415 07:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:33.415 07:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:27:33.415 07:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:33.415 07:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:33.415 07:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:33.415 07:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:33.415 07:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:33.415 07:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:33.415 07:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:33.415 07:23:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:33.415 07:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:33.415 07:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:33.415 07:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:33.415 07:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:33.415 07:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:33.415 07:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:33.415 07:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:33.415 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:33.415 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:33.415 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:33.415 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:33.415 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:33.416 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:33.416 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set cvl_0_0 up 00:27:33.416 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:33.416 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:33.416 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:33.416 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:33.416 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:33.416 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.432 ms 00:27:33.416 00:27:33.416 --- 10.0.0.2 ping statistics --- 00:27:33.416 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:33.416 rtt min/avg/max/mdev = 0.432/0.432/0.432/0.000 ms 00:27:33.416 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:33.416 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:33.416 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.212 ms 00:27:33.416 00:27:33.416 --- 10.0.0.1 ping statistics --- 00:27:33.416 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:33.416 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:27:33.416 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:33.416 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:27:33.416 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:33.416 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:33.416 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:33.416 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:33.416 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:33.416 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:33.416 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:33.416 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:27:33.416 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:33.416 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:33.416 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:33.416 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # 
nvmfpid=1356350 00:27:33.416 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:27:33.416 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 1356350 00:27:33.416 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@833 -- # '[' -z 1356350 ']' 00:27:33.416 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:33.416 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@838 -- # local max_retries=100 00:27:33.416 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:33.416 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:33.416 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@842 -- # xtrace_disable 00:27:33.416 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:33.416 [2024-11-20 07:23:37.312495] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:27:33.416 [2024-11-20 07:23:37.313436] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 
00:27:33.416 [2024-11-20 07:23:37.313470] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:33.416 [2024-11-20 07:23:37.394018] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:33.416 [2024-11-20 07:23:37.436188] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:33.416 [2024-11-20 07:23:37.436227] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:33.416 [2024-11-20 07:23:37.436234] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:33.416 [2024-11-20 07:23:37.436239] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:33.416 [2024-11-20 07:23:37.436245] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:33.416 [2024-11-20 07:23:37.437631] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:33.416 [2024-11-20 07:23:37.437736] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:33.416 [2024-11-20 07:23:37.437737] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:33.416 [2024-11-20 07:23:37.506693] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:27:33.416 [2024-11-20 07:23:37.507459] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:27:33.416 [2024-11-20 07:23:37.507670] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:27:33.416 [2024-11-20 07:23:37.507818] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:27:33.416 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:27:33.416 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@866 -- # return 0 00:27:33.416 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:33.416 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:33.416 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:33.416 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:33.416 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:27:33.416 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.416 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:33.416 [2024-11-20 07:23:37.574514] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:33.416 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.416 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:27:33.416 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.416 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 
00:27:33.416 Malloc0 00:27:33.416 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.416 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:27:33.416 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.416 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:33.416 Delay0 00:27:33.416 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.416 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:27:33.416 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.416 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:33.416 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.416 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:27:33.416 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.416 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:33.416 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.417 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
00:27:33.417 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.417 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:33.417 [2024-11-20 07:23:37.666485] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:33.417 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.417 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:33.417 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.417 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:33.417 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.417 07:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:27:33.417 [2024-11-20 07:23:37.755912] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:27:35.317 Initializing NVMe Controllers 00:27:35.317 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:27:35.317 controller IO queue size 128 less than required 00:27:35.317 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:27:35.317 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:27:35.317 Initialization complete. Launching workers. 
00:27:35.317 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 127, failed: 36655 00:27:35.317 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 36716, failed to submit 66 00:27:35.317 success 36655, unsuccessful 61, failed 0 00:27:35.317 07:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:35.317 07:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:35.317 07:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:35.317 07:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:35.317 07:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:27:35.317 07:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:27:35.317 07:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:35.317 07:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:27:35.317 07:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:35.317 07:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:27:35.317 07:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:35.317 07:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:35.317 rmmod nvme_tcp 00:27:35.317 rmmod nvme_fabrics 00:27:35.577 rmmod nvme_keyring 00:27:35.577 07:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:35.577 07:23:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:27:35.577 07:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:27:35.577 07:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 1356350 ']' 00:27:35.577 07:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 1356350 00:27:35.577 07:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@952 -- # '[' -z 1356350 ']' 00:27:35.577 07:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@956 -- # kill -0 1356350 00:27:35.577 07:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@957 -- # uname 00:27:35.577 07:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:27:35.577 07:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1356350 00:27:35.577 07:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:27:35.577 07:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:27:35.577 07:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1356350' 00:27:35.577 killing process with pid 1356350 00:27:35.577 07:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@971 -- # kill 1356350 00:27:35.577 07:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@976 -- # wait 1356350 00:27:35.837 07:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:35.837 07:23:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:35.837 07:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:35.837 07:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:27:35.837 07:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:27:35.837 07:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:35.837 07:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:27:35.837 07:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:35.837 07:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:35.837 07:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:35.837 07:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:35.837 07:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:37.742 07:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:37.742 00:27:37.742 real 0m11.082s 00:27:37.742 user 0m10.164s 00:27:37.742 sys 0m5.701s 00:27:37.742 07:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1128 -- # xtrace_disable 00:27:37.742 07:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:37.742 ************************************ 00:27:37.742 END TEST nvmf_abort 00:27:37.742 ************************************ 00:27:37.742 07:23:42 
nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:27:37.742 07:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:27:37.742 07:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:27:37.742 07:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:27:38.002 ************************************ 00:27:38.002 START TEST nvmf_ns_hotplug_stress 00:27:38.002 ************************************ 00:27:38.002 07:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:27:38.002 * Looking for test storage... 
00:27:38.002 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:38.002 07:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:27:38.002 07:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lcov --version 00:27:38.002 07:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:27:38.002 07:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:27:38.002 07:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:38.003 07:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:38.003 07:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:38.003 07:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:27:38.003 07:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:27:38.003 07:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:27:38.003 07:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:27:38.003 07:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:27:38.003 07:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:27:38.003 07:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:27:38.003 07:23:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:38.003 07:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:27:38.003 07:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:27:38.003 07:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:38.003 07:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:38.003 07:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:27:38.003 07:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:27:38.003 07:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:38.003 07:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:27:38.003 07:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:27:38.003 07:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:27:38.003 07:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:27:38.003 07:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:38.003 07:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:27:38.003 07:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:27:38.003 07:23:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:38.003 07:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:38.003 07:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:27:38.003 07:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:38.003 07:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:27:38.003 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:38.003 --rc genhtml_branch_coverage=1 00:27:38.003 --rc genhtml_function_coverage=1 00:27:38.003 --rc genhtml_legend=1 00:27:38.003 --rc geninfo_all_blocks=1 00:27:38.003 --rc geninfo_unexecuted_blocks=1 00:27:38.003 00:27:38.003 ' 00:27:38.003 07:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:27:38.003 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:38.003 --rc genhtml_branch_coverage=1 00:27:38.003 --rc genhtml_function_coverage=1 00:27:38.003 --rc genhtml_legend=1 00:27:38.003 --rc geninfo_all_blocks=1 00:27:38.003 --rc geninfo_unexecuted_blocks=1 00:27:38.003 00:27:38.003 ' 00:27:38.003 07:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:27:38.003 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:38.003 --rc genhtml_branch_coverage=1 00:27:38.003 --rc genhtml_function_coverage=1 00:27:38.003 --rc genhtml_legend=1 00:27:38.003 --rc geninfo_all_blocks=1 00:27:38.003 --rc geninfo_unexecuted_blocks=1 00:27:38.003 00:27:38.003 ' 00:27:38.003 07:23:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:27:38.003 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:38.003 --rc genhtml_branch_coverage=1 00:27:38.003 --rc genhtml_function_coverage=1 00:27:38.003 --rc genhtml_legend=1 00:27:38.003 --rc geninfo_all_blocks=1 00:27:38.003 --rc geninfo_unexecuted_blocks=1 00:27:38.003 00:27:38.003 ' 00:27:38.003 07:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:38.003 07:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:27:38.003 07:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:38.003 07:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:38.003 07:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:38.003 07:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:38.003 07:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:38.003 07:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:38.003 07:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:38.003 07:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:38.003 07:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:38.003 07:23:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:38.003 07:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:27:38.003 07:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:27:38.003 07:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:38.003 07:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:38.003 07:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:38.003 07:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:38.003 07:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:38.003 07:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:27:38.003 07:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:38.003 07:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:38.003 07:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:38.003 07:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:38.003 07:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:38.004 07:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:38.004 
07:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:27:38.004 07:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:38.004 07:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:27:38.004 07:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:38.004 07:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:38.004 07:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:38.004 07:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:38.004 07:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:38.004 07:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:27:38.004 07:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:27:38.004 07:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:38.004 07:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:38.004 07:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:38.004 07:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:27:38.004 07:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:27:38.004 07:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:38.004 07:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:38.004 07:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:38.004 07:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:38.004 07:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:38.004 07:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:38.004 07:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:38.004 07:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:38.004 07:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:38.004 07:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:27:38.004 07:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:27:38.004 07:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:27:44.574 07:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:44.574 07:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:27:44.574 07:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:44.574 07:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:44.574 07:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:44.574 07:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:44.574 07:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:44.574 07:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:27:44.574 07:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:44.574 07:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:27:44.574 07:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:27:44.574 07:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:27:44.574 07:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:27:44.574 
07:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:27:44.574 07:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:27:44.574 07:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:44.574 07:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:44.574 07:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:44.574 07:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:44.574 07:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:44.574 07:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:44.574 07:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:44.574 07:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:44.574 07:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:44.574 07:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:44.574 07:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:44.574 07:23:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:44.574 07:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:44.574 07:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:44.574 07:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:44.574 07:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:44.574 07:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:44.574 07:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:44.574 07:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:44.574 07:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:27:44.574 Found 0000:86:00.0 (0x8086 - 0x159b) 00:27:44.574 07:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:44.574 07:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:44.574 07:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:44.574 07:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:44.574 07:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:44.574 07:23:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:44.574 07:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:27:44.574 Found 0000:86:00.1 (0x8086 - 0x159b) 00:27:44.574 07:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:44.574 07:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:44.574 07:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:44.574 07:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:44.574 07:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:44.574 07:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:44.574 07:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:44.574 07:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:44.574 07:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:44.574 07:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:44.574 07:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:44.574 07:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:44.574 
07:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:44.575 07:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:44.575 07:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:44.575 07:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:27:44.575 Found net devices under 0000:86:00.0: cvl_0_0 00:27:44.575 07:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:44.575 07:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:44.575 07:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:44.575 07:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:44.575 07:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:44.575 07:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:44.575 07:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:44.575 07:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:44.575 07:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:27:44.575 Found net devices under 0000:86:00.1: cvl_0_1 00:27:44.575 
07:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:44.575 07:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:44.575 07:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:27:44.575 07:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:44.575 07:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:44.575 07:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:44.575 07:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:44.575 07:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:44.575 07:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:44.575 07:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:44.575 07:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:44.575 07:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:44.575 07:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:44.575 07:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:44.575 07:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:44.575 07:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:44.575 07:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:44.575 07:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:44.575 07:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:44.575 07:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:44.575 07:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:44.575 07:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:44.575 07:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:44.575 07:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:44.575 07:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:44.575 07:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:44.575 07:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:44.575 07:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:44.575 07:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:44.575 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:44.575 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.387 ms 00:27:44.575 00:27:44.575 --- 10.0.0.2 ping statistics --- 00:27:44.575 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:44.575 rtt min/avg/max/mdev = 0.387/0.387/0.387/0.000 ms 00:27:44.575 07:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:44.575 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:44.575 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.145 ms 00:27:44.575 00:27:44.575 --- 10.0.0.1 ping statistics --- 00:27:44.575 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:44.575 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms 00:27:44.575 07:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:44.575 07:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:27:44.575 07:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:44.575 07:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:44.575 07:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:44.575 07:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:44.575 07:23:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:44.575 07:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:44.575 07:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:44.575 07:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:27:44.575 07:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:44.575 07:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:44.575 07:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:27:44.575 07:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=1360343 00:27:44.575 07:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:27:44.575 07:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 1360343 00:27:44.575 07:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # '[' -z 1360343 ']' 00:27:44.575 07:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:44.575 07:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # local max_retries=100 00:27:44.575 07:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:44.575 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:44.575 07:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # xtrace_disable 00:27:44.575 07:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:27:44.575 [2024-11-20 07:23:48.463847] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:27:44.575 [2024-11-20 07:23:48.464835] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 00:27:44.575 [2024-11-20 07:23:48.464872] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:44.575 [2024-11-20 07:23:48.544978] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:44.576 [2024-11-20 07:23:48.587021] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:44.576 [2024-11-20 07:23:48.587057] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:44.576 [2024-11-20 07:23:48.587064] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:44.576 [2024-11-20 07:23:48.587070] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:44.576 [2024-11-20 07:23:48.587075] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:44.576 [2024-11-20 07:23:48.588532] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:44.576 [2024-11-20 07:23:48.588638] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:44.576 [2024-11-20 07:23:48.588640] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:44.576 [2024-11-20 07:23:48.657441] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:27:44.576 [2024-11-20 07:23:48.658177] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:27:44.576 [2024-11-20 07:23:48.658437] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:27:44.576 [2024-11-20 07:23:48.658548] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:27:44.576 07:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:27:44.576 07:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@866 -- # return 0 00:27:44.576 07:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:44.576 07:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:44.576 07:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:27:44.576 07:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:44.576 07:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 
00:27:44.576 07:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:27:44.576 [2024-11-20 07:23:48.893331] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:44.576 07:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:27:44.576 07:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:44.847 [2024-11-20 07:23:49.297688] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:44.847 07:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:45.118 07:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:27:45.377 Malloc0 00:27:45.377 07:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:27:45.377 Delay0 00:27:45.635 07:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:45.635 07:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:27:45.893 NULL1 00:27:45.893 07:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:27:46.151 07:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=1360610 00:27:46.151 07:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:27:46.151 07:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1360610 00:27:46.151 07:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:47.523 Read completed with error (sct=0, sc=11) 00:27:47.523 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:47.523 07:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:47.523 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:47.523 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 
00:27:47.523 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:47.523 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:47.523 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:47.523 07:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:27:47.523 07:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:27:47.781 true 00:27:47.781 07:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1360610 00:27:47.781 07:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:48.715 07:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:48.715 07:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:27:48.715 07:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:27:48.973 true 00:27:48.973 07:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1360610 00:27:48.973 07:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:27:49.231 07:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:49.488 07:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:27:49.488 07:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:27:49.488 true 00:27:49.488 07:23:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1360610 00:27:49.488 07:23:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:50.862 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:50.862 07:23:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:50.862 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:50.862 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:50.862 07:23:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:27:50.862 07:23:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:27:51.119 true 00:27:51.119 07:23:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@44 -- # kill -0 1360610 00:27:51.119 07:23:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:51.119 07:23:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:51.377 07:23:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:27:51.377 07:23:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:27:51.634 true 00:27:51.634 07:23:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1360610 00:27:51.634 07:23:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:53.006 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:53.006 07:23:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:53.006 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:53.006 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:53.006 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:53.006 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:53.006 Message 
suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:53.006 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:53.006 07:23:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:27:53.006 07:23:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:27:53.264 true 00:27:53.264 07:23:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1360610 00:27:53.264 07:23:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:54.197 07:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:54.197 07:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:27:54.197 07:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:27:54.455 true 00:27:54.455 07:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1360610 00:27:54.455 07:23:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:54.712 07:23:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:27:54.969 07:23:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008
00:27:54.969 07:23:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008
00:27:54.969 true
00:27:54.969 07:23:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1360610
00:27:54.969 07:23:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:27:56.340 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:27:56.340 07:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:27:56.340 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:27:56.340 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:27:56.340 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:27:56.340 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:27:56.340 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:27:56.340 07:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009
00:27:56.340 07:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009
00:27:56.596 true
00:27:56.596 07:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1360610
00:27:56.596 07:24:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:27:57.527 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:27:57.527 07:24:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:27:57.527 07:24:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010
00:27:57.527 07:24:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010
00:27:57.785 true
00:27:57.785 07:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1360610
00:27:57.785 07:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:27:58.041 07:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:27:58.041 07:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011
00:27:58.041 07:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011
00:27:58.298 true
00:27:58.298 07:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1360610
00:27:58.298 07:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:27:59.669 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:27:59.669 07:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:27:59.669 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:27:59.669 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:27:59.669 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:27:59.669 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:27:59.669 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:27:59.669 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:27:59.669 07:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012
00:27:59.669 07:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012
00:27:59.932 true
00:27:59.932 07:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1360610
00:27:59.932 07:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:28:00.865 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:28:00.865 07:24:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:28:00.865 07:24:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013
00:28:00.865 07:24:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013
00:28:01.122 true
00:28:01.122 07:24:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1360610
00:28:01.122 07:24:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:28:01.380 07:24:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:28:01.380 07:24:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014
00:28:01.380 07:24:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014
00:28:01.636 true
00:28:01.636 07:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1360610
00:28:01.636 07:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:28:03.008 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:28:03.008 07:24:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:28:03.008 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:28:03.008 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:28:03.008 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:28:03.008 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:28:03.008 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:28:03.008 07:24:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015
00:28:03.008 07:24:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015
00:28:03.265 true
00:28:03.265 07:24:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1360610
00:28:03.265 07:24:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:28:04.198 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:28:04.198 07:24:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:28:04.199 07:24:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016
00:28:04.199 07:24:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016
00:28:04.456 true
00:28:04.456 07:24:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1360610
00:28:04.456 07:24:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:28:04.713 07:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:28:04.970 07:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017
00:28:04.970 07:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017
00:28:04.970 true
00:28:04.970 07:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1360610
00:28:04.970 07:24:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:28:06.344 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:28:06.344 07:24:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:28:06.344 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:28:06.344 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:28:06.344 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:28:06.344 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:28:06.344 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:28:06.344 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:28:06.344 07:24:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018
00:28:06.344 07:24:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018
00:28:06.602 true
00:28:06.602 07:24:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1360610
00:28:06.602 07:24:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:28:07.535 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:28:07.535 07:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:28:07.535 07:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019
00:28:07.535 07:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019
00:28:07.792 true
00:28:07.792 07:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1360610
00:28:07.792 07:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:28:08.050 07:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:28:08.308 07:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020
00:28:08.308 07:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020
00:28:08.308 true
00:28:08.308 07:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1360610
00:28:08.308 07:24:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:28:09.680 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:28:09.680 07:24:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:28:09.680 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:28:09.680 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:28:09.680 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:28:09.680 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:28:09.680 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:28:09.680 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:28:09.680 07:24:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021
00:28:09.680 07:24:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021
00:28:09.938 true
00:28:09.938 07:24:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1360610
00:28:09.938 07:24:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:28:10.871 07:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:28:10.871 07:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022
00:28:10.871 07:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022
00:28:11.129 true
00:28:11.129 07:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1360610
00:28:11.129 07:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:28:11.387 07:24:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:28:11.644 07:24:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023
00:28:11.644 07:24:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023
00:28:11.645 true
00:28:11.902 07:24:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1360610
00:28:11.902 07:24:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:28:12.834 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:28:12.834 07:24:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:28:13.092 07:24:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024
00:28:13.092 07:24:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024
00:28:13.092 true
00:28:13.092 07:24:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1360610
00:28:13.092 07:24:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:28:13.349 07:24:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:28:13.607 07:24:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025
00:28:13.607 07:24:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025
00:28:13.863 true
00:28:13.863 07:24:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1360610
00:28:13.864 07:24:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:28:14.796 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:28:15.054 07:24:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:28:15.054 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:28:15.054 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:28:15.054 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:28:15.054 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:28:15.054 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:28:15.054 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:28:15.054 07:24:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026
00:28:15.054 07:24:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026
00:28:15.312 true
00:28:15.312 07:24:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1360610
00:28:15.312 07:24:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:28:16.244 07:24:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:28:16.502 07:24:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027
00:28:16.502 07:24:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027
00:28:16.502 Initializing NVMe Controllers
00:28:16.502 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:28:16.502 Controller IO queue size 128, less than required.
00:28:16.502 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:28:16.502 Controller IO queue size 128, less than required.
00:28:16.502 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:28:16.502 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:28:16.502 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:28:16.502 Initialization complete. Launching workers.
00:28:16.502 ========================================================
00:28:16.502 Latency(us)
00:28:16.502 Device Information : IOPS MiB/s Average min max
00:28:16.502 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2120.24 1.04 42029.47 2748.20 1019802.34
00:28:16.502 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 17766.76 8.68 7204.92 1589.52 305498.33
00:28:16.502 ========================================================
00:28:16.502 Total : 19887.00 9.71 10917.71 1589.52 1019802.34
00:28:16.502
00:28:16.502 true
00:28:16.502 07:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1360610
00:28:16.502 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (1360610) - No such process
00:28:16.502 07:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 1360610
00:28:16.502 07:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:28:16.761 07:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:28:17.019 07:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:28:17.019 07:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:28:17.019 07:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:28:17.019 07:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:28:17.019 07:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:28:17.277 null0
00:28:17.277 07:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:28:17.277 07:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:28:17.277 07:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:28:17.277 null1
00:28:17.277 07:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:28:17.277 07:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:28:17.277 07:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096
00:28:17.536 null2
00:28:17.536 07:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:28:17.536 07:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:28:17.536 07:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096
00:28:17.795 null3
00:28:17.795 07:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:28:17.795 07:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:28:17.795 07:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096
00:28:17.795 null4
00:28:18.053 07:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:28:18.053 07:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:28:18.053 07:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096
00:28:18.053 null5
00:28:18.053 07:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:28:18.053 07:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:28:18.053 07:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096
00:28:18.312 null6
00:28:18.312 07:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:28:18.312 07:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:28:18.312 07:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096
00:28:18.571 null7
00:28:18.571 07:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:28:18.571 07:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:28:18.571 07:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 ))
00:28:18.571 07:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:28:18.571 07:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0
00:28:18.571 07:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:28:18.571 07:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0
00:28:18.571 07:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:28:18.571 07:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:18.571 07:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:28:18.571 07:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:28:18.571 07:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:28:18.571 07:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:28:18.571 07:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:28:18.571 07:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1
00:28:18.571 07:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:28:18.572 07:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1
00:28:18.572 07:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:28:18.572 07:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:18.572 07:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:28:18.572 07:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:28:18.572 07:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:28:18.572 07:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2
00:28:18.572 07:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:28:18.572 07:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2
00:28:18.572 07:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:28:18.572 07:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:18.572 07:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:28:18.572 07:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:28:18.572 07:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:28:18.572 07:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3
00:28:18.572 07:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:28:18.572 07:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3
00:28:18.572 07:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:28:18.572 07:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:18.572 07:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:28:18.572 07:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:28:18.572 07:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:28:18.572 07:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4
00:28:18.572 07:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:28:18.572 07:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4
00:28:18.572 07:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:28:18.572 07:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:28:18.572 07:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:18.572 07:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:28:18.572 07:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:28:18.572 07:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:28:18.572 07:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5
00:28:18.572 07:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5
00:28:18.572 07:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:28:18.572 07:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6
00:28:18.572 07:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:28:18.572 07:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:28:18.572 07:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:28:18.572 07:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6
00:28:18.572 07:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:18.572 07:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:28:18.572 07:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:18.572 07:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:28:18.572 07:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:28:18.572 07:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:28:18.572 07:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:28:18.572 07:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:28:18.572 07:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 1366451 1366453 1366454 1366456 1366458 1366460 1366461 1366463
00:28:18.572 07:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7
00:28:18.572 07:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7
00:28:18.572 07:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:28:18.572 07:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:28:18.572 07:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:28:18.829 07:24:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:28:18.829 07:24:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:28:18.829 07:24:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns
nqn.2016-06.io.spdk:cnode1 6 00:28:18.829 07:24:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:18.829 07:24:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:18.829 07:24:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:18.829 07:24:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:18.829 07:24:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:18.829 07:24:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:18.829 07:24:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:18.829 07:24:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:18.829 07:24:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:18.829 07:24:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:18.829 07:24:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:18.829 07:24:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:18.829 07:24:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:18.829 07:24:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:18.829 07:24:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:18.829 07:24:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:18.829 07:24:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:18.829 07:24:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:18.829 07:24:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:18.829 07:24:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:18.829 07:24:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i 
)) 00:28:18.829 07:24:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:18.829 07:24:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:18.829 07:24:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:18.829 07:24:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:18.829 07:24:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:19.085 07:24:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:19.085 07:24:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:19.085 07:24:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:19.085 07:24:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:19.085 07:24:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:19.085 07:24:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:19.085 07:24:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:19.085 07:24:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:19.085 07:24:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:19.085 07:24:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:19.085 07:24:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:19.342 07:24:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:19.342 07:24:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:19.342 07:24:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:19.342 07:24:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:19.342 07:24:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:19.342 07:24:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:19.342 07:24:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:19.342 07:24:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:19.342 07:24:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:19.342 07:24:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:19.342 07:24:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:19.342 07:24:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:19.342 07:24:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:19.342 07:24:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:19.342 07:24:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:19.342 07:24:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:19.342 07:24:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:19.342 07:24:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:19.342 07:24:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:19.342 07:24:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:19.342 07:24:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:19.342 07:24:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:19.342 07:24:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:19.342 07:24:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:19.600 07:24:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:19.600 07:24:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:19.600 07:24:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:19.600 07:24:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:19.600 07:24:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:19.600 07:24:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:19.600 07:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:19.600 07:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:19.920 07:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:19.920 07:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:19.920 07:24:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:19.920 07:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:19.920 07:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:19.920 07:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:19.920 07:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:19.920 07:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:19.920 07:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:19.920 07:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:19.920 07:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:19.920 07:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:19.920 07:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:19.920 07:24:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:19.920 07:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:19.920 07:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:19.920 07:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:19.920 07:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:19.920 07:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:19.920 07:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:19.920 07:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:19.920 07:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:19.920 07:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:19.920 07:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:19.920 07:24:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:19.920 07:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:19.920 07:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:19.920 07:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:19.920 07:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:19.920 07:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:19.920 07:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:19.920 07:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:20.220 07:24:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:20.220 07:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:20.220 07:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:20.220 07:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:20.220 07:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:20.220 07:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:20.220 07:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:20.220 07:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:20.220 07:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:20.220 07:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:20.220 07:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:20.220 07:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:20.220 07:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:20.220 07:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:20.220 07:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:20.220 07:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:20.220 07:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:20.220 07:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:20.220 07:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:20.220 07:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:20.220 07:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:20.220 07:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:20.220 07:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:20.220 07:24:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:20.526 07:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:20.526 07:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:20.526 07:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:20.526 07:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:20.526 07:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:20.526 07:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:20.526 07:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:20.526 07:24:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:20.526 07:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:20.526 07:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:20.526 07:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:20.526 07:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:20.526 07:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:20.526 07:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:20.526 07:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:20.526 07:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:20.526 07:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:20.526 07:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:20.526 07:24:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:20.526 07:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:20.526 07:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:20.526 07:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:20.526 07:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:20.526 07:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:20.526 07:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:20.526 07:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:20.526 07:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:20.526 07:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:20.526 07:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:20.526 07:24:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:20.526 07:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:20.526 07:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:20.785 07:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:20.785 07:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:20.785 07:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:20.785 07:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:20.785 07:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:20.785 07:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:20.785 07:24:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:20.785 07:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:21.044 07:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:21.044 07:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:21.044 07:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:21.044 07:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:21.044 07:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:21.044 07:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:21.044 07:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:21.044 07:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:21.044 07:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 
4 nqn.2016-06.io.spdk:cnode1 null3 00:28:21.044 07:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:21.044 07:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:21.044 07:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:21.044 07:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:21.044 07:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:21.044 07:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:21.044 07:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:21.044 07:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:21.044 07:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:21.044 07:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:21.044 07:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:21.044 07:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:21.044 07:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:21.044 07:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:21.044 07:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:21.302 07:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:21.302 07:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:21.302 07:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:21.302 07:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:21.302 07:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:21.302 07:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:21.303 07:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:21.303 07:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:21.560 07:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:21.560 07:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:21.561 07:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:21.561 07:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:21.561 07:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:21.561 07:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:21.561 07:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:21.561 07:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:21.561 07:24:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:21.561 07:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:21.561 07:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:21.561 07:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:21.561 07:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:21.561 07:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:21.561 07:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:21.561 07:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:21.561 07:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:21.561 07:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:21.561 07:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:21.561 07:24:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:21.561 07:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:21.561 07:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:21.561 07:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:21.561 07:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:21.561 07:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:21.561 07:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:21.561 07:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:21.561 07:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:21.561 07:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:21.561 07:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:21.561 07:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:21.820 07:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:21.820 07:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:21.820 07:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:21.820 07:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:21.820 07:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:21.820 07:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:21.820 07:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:21.820 07:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:21.820 07:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:21.820 07:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:21.820 07:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:21.820 07:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:21.820 07:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:21.820 07:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:21.820 07:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:21.820 07:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:21.820 07:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:21.820 07:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:21.820 07:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:21.820 07:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:21.820 07:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:21.820 07:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:21.820 07:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:21.820 07:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:21.820 07:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:22.078 07:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:22.078 07:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:22.078 07:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:22.078 07:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:22.078 07:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:22.078 07:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:22.078 07:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:22.078 07:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:22.337 07:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:22.337 07:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:22.337 07:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:22.337 07:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:22.337 07:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:22.337 07:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:22.337 07:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:22.337 07:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:22.337 07:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:22.337 07:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:22.337 07:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:22.337 07:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:22.337 07:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:22.337 07:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:22.337 07:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:22.337 07:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:22.337 07:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:22.337 07:24:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:22.337 07:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:22.337 07:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:22.337 07:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:22.337 07:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:22.338 07:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:22.338 07:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:22.596 07:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:22.596 07:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:22.596 07:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 2 00:28:22.596 07:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:22.596 07:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:22.596 07:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:22.596 07:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:22.596 07:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:22.596 07:24:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:22.596 07:24:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:22.596 07:24:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:22.596 07:24:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:22.596 07:24:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:22.596 07:24:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:22.596 07:24:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:22.596 07:24:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:22.596 07:24:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:22.596 07:24:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:22.596 07:24:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:22.596 07:24:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:22.855 07:24:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:22.855 07:24:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:22.855 07:24:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:22.855 07:24:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:22.855 07:24:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:28:22.855 07:24:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:28:22.855 07:24:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:22.855 07:24:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@121 -- # sync 00:28:22.855 07:24:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:22.855 07:24:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:28:22.855 07:24:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:22.855 07:24:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:22.855 rmmod nvme_tcp 00:28:22.855 rmmod nvme_fabrics 00:28:22.855 rmmod nvme_keyring 00:28:22.855 07:24:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:22.855 07:24:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:28:22.855 07:24:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:28:22.855 07:24:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 1360343 ']' 00:28:22.855 07:24:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 1360343 00:28:22.855 07:24:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # '[' -z 1360343 ']' 00:28:22.855 07:24:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # kill -0 1360343 00:28:22.855 07:24:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@957 -- # uname 00:28:22.855 07:24:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:28:22.855 07:24:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 
1360343 00:28:22.855 07:24:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:28:22.855 07:24:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:28:22.856 07:24:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1360343' 00:28:22.856 killing process with pid 1360343 00:28:22.856 07:24:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@971 -- # kill 1360343 00:28:22.856 07:24:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@976 -- # wait 1360343 00:28:23.114 07:24:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:23.114 07:24:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:23.114 07:24:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:23.114 07:24:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:28:23.114 07:24:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:28:23.114 07:24:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:23.114 07:24:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:28:23.114 07:24:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:23.114 07:24:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:23.114 
07:24:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:23.114 07:24:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:23.114 07:24:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:25.017 07:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:25.017 00:28:25.017 real 0m47.239s 00:28:25.017 user 2m56.192s 00:28:25.017 sys 0m20.131s 00:28:25.017 07:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1128 -- # xtrace_disable 00:28:25.017 07:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:28:25.017 ************************************ 00:28:25.017 END TEST nvmf_ns_hotplug_stress 00:28:25.017 ************************************ 00:28:25.277 07:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:28:25.277 07:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:28:25.277 07:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:28:25.277 07:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:28:25.277 ************************************ 00:28:25.277 START TEST nvmf_delete_subsystem 00:28:25.277 ************************************ 00:28:25.277 07:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1127 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:28:25.277 * Looking for test storage... 00:28:25.277 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:25.277 07:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:28:25.277 07:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lcov --version 00:28:25.277 07:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:28:25.277 07:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:28:25.277 07:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:25.277 07:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:25.277 07:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:25.277 07:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:28:25.277 07:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:28:25.277 07:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:28:25.277 07:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:28:25.277 07:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:28:25.277 07:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:28:25.277 
07:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:28:25.277 07:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:25.277 07:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:28:25.277 07:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:28:25.277 07:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:25.277 07:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:25.277 07:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:28:25.277 07:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:28:25.277 07:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:25.277 07:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:28:25.277 07:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:28:25.277 07:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:28:25.277 07:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:28:25.277 07:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:25.277 07:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:28:25.277 07:24:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:28:25.277 07:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:25.277 07:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:25.277 07:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:28:25.277 07:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:25.277 07:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:28:25.277 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:25.277 --rc genhtml_branch_coverage=1 00:28:25.277 --rc genhtml_function_coverage=1 00:28:25.277 --rc genhtml_legend=1 00:28:25.277 --rc geninfo_all_blocks=1 00:28:25.277 --rc geninfo_unexecuted_blocks=1 00:28:25.277 00:28:25.277 ' 00:28:25.277 07:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:28:25.277 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:25.277 --rc genhtml_branch_coverage=1 00:28:25.277 --rc genhtml_function_coverage=1 00:28:25.277 --rc genhtml_legend=1 00:28:25.277 --rc geninfo_all_blocks=1 00:28:25.277 --rc geninfo_unexecuted_blocks=1 00:28:25.277 00:28:25.277 ' 00:28:25.277 07:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:28:25.278 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:25.278 --rc genhtml_branch_coverage=1 00:28:25.278 --rc genhtml_function_coverage=1 00:28:25.278 --rc genhtml_legend=1 00:28:25.278 --rc geninfo_all_blocks=1 00:28:25.278 --rc 
geninfo_unexecuted_blocks=1 00:28:25.278 00:28:25.278 ' 00:28:25.278 07:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:28:25.278 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:25.278 --rc genhtml_branch_coverage=1 00:28:25.278 --rc genhtml_function_coverage=1 00:28:25.278 --rc genhtml_legend=1 00:28:25.278 --rc geninfo_all_blocks=1 00:28:25.278 --rc geninfo_unexecuted_blocks=1 00:28:25.278 00:28:25.278 ' 00:28:25.278 07:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:25.278 07:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:28:25.278 07:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:25.278 07:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:25.278 07:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:25.278 07:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:25.278 07:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:25.278 07:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:25.278 07:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:25.278 07:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:25.278 07:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:25.278 07:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:25.278 07:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:28:25.278 07:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:28:25.278 07:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:25.278 07:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:25.278 07:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:25.278 07:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:25.278 07:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:25.278 07:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:28:25.278 07:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:25.278 07:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:25.278 07:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:25.278 07:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:25.278 07:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:25.278 07:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:25.278 
07:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:28:25.278 07:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:25.278 07:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:28:25.278 07:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:25.278 07:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:25.278 07:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:25.278 07:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:25.278 07:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:25.278 07:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:28:25.278 07:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:28:25.278 07:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:25.278 07:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:25.278 07:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:25.278 07:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:28:25.278 07:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:25.278 07:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:25.278 07:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:25.278 07:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:25.278 07:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:25.278 07:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:25.278 07:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:25.278 07:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:25.278 07:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:25.278 07:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:25.278 07:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:28:25.278 07:24:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:31.848 07:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:31.848 07:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:28:31.848 07:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:31.848 07:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:31.848 07:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:31.848 07:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:31.848 07:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:31.848 07:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:28:31.848 07:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:31.848 07:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:28:31.848 07:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:28:31.848 07:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:28:31.848 07:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:28:31.848 07:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:28:31.848 07:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@322 -- # local -ga mlx 00:28:31.848 07:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:31.848 07:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:31.848 07:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:31.848 07:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:31.848 07:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:31.848 07:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:31.848 07:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:31.848 07:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:31.848 07:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:31.848 07:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:31.848 07:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:31.848 07:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:31.848 07:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:31.848 07:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:31.848 07:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:31.848 07:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:31.848 07:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:31.848 07:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:31.848 07:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:31.848 07:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:28:31.848 Found 0000:86:00.0 (0x8086 - 0x159b) 00:28:31.848 07:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:31.848 07:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:31.848 07:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:31.848 07:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:31.848 07:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:31.848 07:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:31.848 07:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 
0000:86:00.1 (0x8086 - 0x159b)' 00:28:31.848 Found 0000:86:00.1 (0x8086 - 0x159b) 00:28:31.848 07:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:31.848 07:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:31.848 07:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:31.848 07:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:31.848 07:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:31.848 07:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:31.848 07:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:31.848 07:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:31.848 07:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:31.848 07:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:31.848 07:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:31.848 07:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:31.848 07:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:31.848 07:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:31.848 07:24:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:31.848 07:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:28:31.848 Found net devices under 0000:86:00.0: cvl_0_0 00:28:31.849 07:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:31.849 07:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:31.849 07:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:31.849 07:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:31.849 07:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:31.849 07:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:31.849 07:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:31.849 07:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:31.849 07:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:28:31.849 Found net devices under 0000:86:00.1: cvl_0_1 00:28:31.849 07:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:31.849 07:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:31.849 07:24:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:28:31.849 07:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:31.849 07:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:31.849 07:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:31.849 07:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:31.849 07:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:31.849 07:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:31.849 07:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:31.849 07:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:31.849 07:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:31.849 07:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:31.849 07:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:31.849 07:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:31.849 07:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:31.849 07:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:28:31.849 07:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:28:31.849 07:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:28:31.849 07:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:28:31.849 07:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:28:31.849 07:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:28:31.849 07:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:28:31.849 07:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:28:31.849 07:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:28:31.849 07:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:28:31.849 07:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:28:31.849 07:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:28:31.849 07:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:28:31.849 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:28:31.849 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.353 ms
00:28:31.849
00:28:31.849 --- 10.0.0.2 ping statistics ---
00:28:31.849 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:28:31.849 rtt min/avg/max/mdev = 0.353/0.353/0.353/0.000 ms
00:28:31.849 07:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:28:31.849 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:28:31.849 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.125 ms
00:28:31.849
00:28:31.849 --- 10.0.0.1 ping statistics ---
00:28:31.849 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:28:31.849 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms
00:28:31.849 07:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:28:31.849 07:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0
00:28:31.849 07:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:28:31.849 07:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:28:31.849 07:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:28:31.849 07:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:28:31.849 07:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:28:31.849 07:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:28:31.849 07:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:28:31.849 07:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3
00:28:31.849 07:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:28:31.849 07:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable
00:28:31.849 07:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:28:31.849 07:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=1370784
00:28:31.849 07:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 1370784
00:28:31.849 07:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3
00:28:31.849 07:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@833 -- # '[' -z 1370784 ']'
00:28:31.849 07:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:28:31.849 07:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # local max_retries=100
00:28:31.849 07:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:28:31.849 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
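[Editor's note] The `ip netns` / `ip addr` sequence traced above builds the test topology: one interface (`cvl_0_0`, 10.0.0.2) is moved into a namespace for the target, while its peer (`cvl_0_1`, 10.0.0.1) stays in the root namespace for the initiator, with an iptables rule opening the NVMe/TCP port. A hedged dry-run reconstruction follows; `setup_ns` and `DRY_RUN` are illustrative names, not SPDK's, and `DRY_RUN=echo` keeps the sketch runnable without root.

```shell
# Dry-run sketch of the namespace topology nvmf/common.sh sets up above.
# Set DRY_RUN="" (as root) to actually apply it.
DRY_RUN=echo
NS=cvl_0_0_ns_spdk
TGT_IF=cvl_0_0   # target-side interface, moved into the namespace
INI_IF=cvl_0_1   # initiator-side peer, left in the root namespace

setup_ns() {
    $DRY_RUN ip netns add "$NS"
    $DRY_RUN ip link set "$TGT_IF" netns "$NS"
    $DRY_RUN ip addr add 10.0.0.1/24 dev "$INI_IF"
    $DRY_RUN ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
    $DRY_RUN ip link set "$INI_IF" up
    $DRY_RUN ip netns exec "$NS" ip link set "$TGT_IF" up
    $DRY_RUN ip netns exec "$NS" ip link set lo up
    # open the NVMe/TCP listener port (4420) on the initiator-side interface
    $DRY_RUN iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
}

setup_ns
```

The two `ping -c 1` probes in the log then verify both directions of this link before the target is started.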
00:28:31.849 07:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # xtrace_disable
00:28:31.849 07:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:28:31.849 [2024-11-20 07:24:35.775329] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode.
00:28:31.849 [2024-11-20 07:24:35.776263] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization...
00:28:31.849 [2024-11-20 07:24:35.776297] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:28:31.849 [2024-11-20 07:24:35.855940] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:28:31.849 [2024-11-20 07:24:35.897318] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:28:31.849 [2024-11-20 07:24:35.897355] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:28:31.849 [2024-11-20 07:24:35.897363] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:28:31.849 [2024-11-20 07:24:35.897369] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:28:31.849 [2024-11-20 07:24:35.897374] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:28:31.849 [2024-11-20 07:24:35.898566] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:28:31.849 [2024-11-20 07:24:35.898569] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:28:31.849 [2024-11-20 07:24:35.966446] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode.
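[Editor's note] While those startup notices scroll by, the `waitforlisten 1370784` call from the trace is polling for the target's RPC socket (`/var/tmp/spdk.sock`) with a retry cap (`max_retries=100` above). A minimal sketch of that polling pattern, under stated assumptions: `wait_for_path` is an illustrative name, and it checks plain path existence where SPDK's helper additionally probes the live RPC socket.

```shell
# Illustrative poll loop in the spirit of waitforlisten: retry until the
# given path appears, bounded by a retry count, 0.1 s per attempt.
wait_for_path() {
    local path=$1 max=${2:-100} i=0
    while [ "$i" -lt "$max" ]; do
        [ -e "$path" ] && return 0
        sleep 0.1
        i=$((i + 1))
    done
    return 1
}
```

In the real script, only once this wait succeeds does the test proceed to issue RPCs against the target.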
00:28:31.849 [2024-11-20 07:24:35.966848] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode.
00:28:31.849 [2024-11-20 07:24:35.967155] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode.
00:28:31.849 07:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:28:31.849 07:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@866 -- # return 0
00:28:31.849 07:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:28:31.849 07:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable
00:28:31.850 07:24:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:28:31.850 07:24:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:28:31.850 07:24:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:28:31.850 07:24:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:31.850 07:24:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:28:31.850 [2024-11-20 07:24:36.035407] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:28:31.850 07:24:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:31.850 07:24:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:28:31.850 07:24:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:31.850 07:24:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:28:31.850 07:24:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:31.850 07:24:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:28:31.850 07:24:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:31.850 07:24:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:28:31.850 [2024-11-20 07:24:36.059672] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:28:31.850 07:24:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:31.850 07:24:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512
00:28:31.850 07:24:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:31.850 07:24:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:28:31.850 NULL1
00:28:31.850 07:24:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:31.850 07:24:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:28:31.850 07:24:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:31.850 07:24:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:28:31.850 Delay0
00:28:31.850 07:24:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:31.850 07:24:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:28:31.850 07:24:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:31.850 07:24:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:28:31.850 07:24:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:31.850 07:24:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=1370848
00:28:31.850 07:24:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2
00:28:31.850 07:24:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4
00:28:31.850 [2024-11-20 07:24:36.173008] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
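[Editor's note] The `rpc_cmd` sequence traced above provisions the whole target: a TCP transport, a subsystem capped at 10 namespaces, a listener on 10.0.0.2:4420, and a null bdev wrapped in a delay bdev that is attached as the subsystem's namespace. A dry-run summary of that sequence follows; the `echo` prefix and `emit_rpcs` helper are illustrative, so this runs without a live target (drop the `echo` and point at SPDK's real `scripts/rpc.py` to drive one).

```shell
# Dry-run of the RPC sequence from delete_subsystem.sh; arguments mirror
# the trace above (null bdev: 1000 MiB, 512-byte blocks; delay bdev adds
# 1 s average/p99 latency on both reads and writes).
rpc="echo scripts/rpc.py"

emit_rpcs() {
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc bdev_null_create NULL1 1000 512
    $rpc bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
}

emit_rpcs
```

The delay bdev matters here: with ~1 s per I/O, the `spdk_nvme_perf` run launched next is guaranteed to still have I/O in flight when the subsystem is deleted two seconds later, which is exactly the race this test exercises.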
00:28:33.749 07:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:33.749 07:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:33.750 07:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:34.008 Read completed with error (sct=0, sc=8) 00:28:34.008 Write completed with error (sct=0, sc=8) 00:28:34.008 Read completed with error (sct=0, sc=8) 00:28:34.008 Read completed with error (sct=0, sc=8) 00:28:34.008 starting I/O failed: -6 00:28:34.008 Read completed with error (sct=0, sc=8) 00:28:34.008 Write completed with error (sct=0, sc=8) 00:28:34.008 Read completed with error (sct=0, sc=8) 00:28:34.008 Read completed with error (sct=0, sc=8) 00:28:34.008 starting I/O failed: -6 00:28:34.008 Read completed with error (sct=0, sc=8) 00:28:34.008 Read completed with error (sct=0, sc=8) 00:28:34.008 Read completed with error (sct=0, sc=8) 00:28:34.008 Write completed with error (sct=0, sc=8) 00:28:34.008 starting I/O failed: -6 00:28:34.008 Write completed with error (sct=0, sc=8) 00:28:34.008 Read completed with error (sct=0, sc=8) 00:28:34.008 Read completed with error (sct=0, sc=8) 00:28:34.008 Write completed with error (sct=0, sc=8) 00:28:34.008 starting I/O failed: -6 00:28:34.008 Read completed with error (sct=0, sc=8) 00:28:34.008 Read completed with error (sct=0, sc=8) 00:28:34.008 Read completed with error (sct=0, sc=8) 00:28:34.008 Read completed with error (sct=0, sc=8) 00:28:34.008 starting I/O failed: -6 00:28:34.008 Read completed with error (sct=0, sc=8) 00:28:34.008 Read completed with error (sct=0, sc=8) 00:28:34.008 Write completed with error (sct=0, sc=8) 00:28:34.008 Read completed with error (sct=0, sc=8) 00:28:34.008 starting I/O failed: -6 00:28:34.008 Read completed with error (sct=0, sc=8) 
00:28:34.008 Read completed with error (sct=0, sc=8) 00:28:34.008 Read completed with error (sct=0, sc=8) 00:28:34.008 Read completed with error (sct=0, sc=8) 00:28:34.008 starting I/O failed: -6 00:28:34.008 Write completed with error (sct=0, sc=8) 00:28:34.008 Read completed with error (sct=0, sc=8) 00:28:34.008 Read completed with error (sct=0, sc=8) 00:28:34.008 Read completed with error (sct=0, sc=8) 00:28:34.008 starting I/O failed: -6 00:28:34.008 Write completed with error (sct=0, sc=8) 00:28:34.008 Read completed with error (sct=0, sc=8) 00:28:34.008 Write completed with error (sct=0, sc=8) 00:28:34.008 Read completed with error (sct=0, sc=8) 00:28:34.008 starting I/O failed: -6 00:28:34.008 Read completed with error (sct=0, sc=8) 00:28:34.008 Read completed with error (sct=0, sc=8) 00:28:34.008 [2024-11-20 07:24:38.304929] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12b12c0 is same with the state(6) to be set 00:28:34.008 Write completed with error (sct=0, sc=8) 00:28:34.008 Read completed with error (sct=0, sc=8) 00:28:34.008 Write completed with error (sct=0, sc=8) 00:28:34.008 Read completed with error (sct=0, sc=8) 00:28:34.008 Read completed with error (sct=0, sc=8) 00:28:34.008 Write completed with error (sct=0, sc=8) 00:28:34.008 Read completed with error (sct=0, sc=8) 00:28:34.008 Read completed with error (sct=0, sc=8) 00:28:34.008 Read completed with error (sct=0, sc=8) 00:28:34.008 Read completed with error (sct=0, sc=8) 00:28:34.008 Read completed with error (sct=0, sc=8) 00:28:34.008 Read completed with error (sct=0, sc=8) 00:28:34.008 Read completed with error (sct=0, sc=8) 00:28:34.008 Write completed with error (sct=0, sc=8) 00:28:34.008 Write completed with error (sct=0, sc=8) 00:28:34.008 Read completed with error (sct=0, sc=8) 00:28:34.008 Read completed with error (sct=0, sc=8) 00:28:34.008 Read completed with error (sct=0, sc=8) 00:28:34.008 Read completed with error (sct=0, sc=8) 00:28:34.008 
Read completed with error (sct=0, sc=8) 00:28:34.008 Read completed with error (sct=0, sc=8) 00:28:34.008 Write completed with error (sct=0, sc=8) 00:28:34.008 Write completed with error (sct=0, sc=8) 00:28:34.008 Read completed with error (sct=0, sc=8) 00:28:34.008 Write completed with error (sct=0, sc=8) 00:28:34.008 Write completed with error (sct=0, sc=8) 00:28:34.008 Write completed with error (sct=0, sc=8) 00:28:34.008 Write completed with error (sct=0, sc=8) 00:28:34.008 Write completed with error (sct=0, sc=8) 00:28:34.008 Read completed with error (sct=0, sc=8) 00:28:34.008 Read completed with error (sct=0, sc=8) 00:28:34.008 Read completed with error (sct=0, sc=8) 00:28:34.009 Read completed with error (sct=0, sc=8) 00:28:34.009 Read completed with error (sct=0, sc=8) 00:28:34.009 Read completed with error (sct=0, sc=8) 00:28:34.009 Write completed with error (sct=0, sc=8) 00:28:34.009 Read completed with error (sct=0, sc=8) 00:28:34.009 Read completed with error (sct=0, sc=8) 00:28:34.009 Read completed with error (sct=0, sc=8) 00:28:34.009 Write completed with error (sct=0, sc=8) 00:28:34.009 Read completed with error (sct=0, sc=8) 00:28:34.009 Write completed with error (sct=0, sc=8) 00:28:34.009 Read completed with error (sct=0, sc=8) 00:28:34.009 Read completed with error (sct=0, sc=8) 00:28:34.009 Read completed with error (sct=0, sc=8) 00:28:34.009 Read completed with error (sct=0, sc=8) 00:28:34.009 Write completed with error (sct=0, sc=8) 00:28:34.009 Read completed with error (sct=0, sc=8) 00:28:34.009 Read completed with error (sct=0, sc=8) 00:28:34.009 Read completed with error (sct=0, sc=8) 00:28:34.009 Read completed with error (sct=0, sc=8) 00:28:34.009 Read completed with error (sct=0, sc=8) 00:28:34.009 Write completed with error (sct=0, sc=8) 00:28:34.009 Read completed with error (sct=0, sc=8) 00:28:34.009 Read completed with error (sct=0, sc=8) 00:28:34.009 Read completed with error (sct=0, sc=8) 00:28:34.009 Write completed with error 
(sct=0, sc=8) 00:28:34.009 Read completed with error (sct=0, sc=8) 00:28:34.009 Write completed with error (sct=0, sc=8) 00:28:34.009 Read completed with error (sct=0, sc=8) 00:28:34.009 Read completed with error (sct=0, sc=8) 00:28:34.009 Write completed with error (sct=0, sc=8) 00:28:34.009 Write completed with error (sct=0, sc=8) 00:28:34.009 Read completed with error (sct=0, sc=8) 00:28:34.009 Write completed with error (sct=0, sc=8) 00:28:34.009 Read completed with error (sct=0, sc=8) 00:28:34.009 Read completed with error (sct=0, sc=8) 00:28:34.009 Write completed with error (sct=0, sc=8) 00:28:34.009 Read completed with error (sct=0, sc=8) 00:28:34.009 Write completed with error (sct=0, sc=8) 00:28:34.009 Write completed with error (sct=0, sc=8) 00:28:34.009 Write completed with error (sct=0, sc=8) 00:28:34.009 Read completed with error (sct=0, sc=8) 00:28:34.009 Write completed with error (sct=0, sc=8) 00:28:34.009 Read completed with error (sct=0, sc=8) 00:28:34.009 Write completed with error (sct=0, sc=8) 00:28:34.009 Read completed with error (sct=0, sc=8) 00:28:34.009 Read completed with error (sct=0, sc=8) 00:28:34.009 Read completed with error (sct=0, sc=8) 00:28:34.009 Read completed with error (sct=0, sc=8) 00:28:34.009 Write completed with error (sct=0, sc=8) 00:28:34.009 Read completed with error (sct=0, sc=8) 00:28:34.009 Read completed with error (sct=0, sc=8) 00:28:34.009 Read completed with error (sct=0, sc=8) 00:28:34.009 Read completed with error (sct=0, sc=8) 00:28:34.009 Read completed with error (sct=0, sc=8) 00:28:34.009 Read completed with error (sct=0, sc=8) 00:28:34.009 Write completed with error (sct=0, sc=8) 00:28:34.009 Write completed with error (sct=0, sc=8) 00:28:34.009 Read completed with error (sct=0, sc=8) 00:28:34.009 Read completed with error (sct=0, sc=8) 00:28:34.009 Read completed with error (sct=0, sc=8) 00:28:34.009 Write completed with error (sct=0, sc=8) 00:28:34.009 Read completed with error (sct=0, sc=8) 
00:28:34.009 Write completed with error (sct=0, sc=8) 00:28:34.009 starting I/O failed: -6 00:28:34.009 Read completed with error (sct=0, sc=8) 00:28:34.009 Read completed with error (sct=0, sc=8) 00:28:34.009 Write completed with error (sct=0, sc=8) 00:28:34.009 Write completed with error (sct=0, sc=8) 00:28:34.009 starting I/O failed: -6 00:28:34.009 Read completed with error (sct=0, sc=8) 00:28:34.009 Read completed with error (sct=0, sc=8) 00:28:34.009 Read completed with error (sct=0, sc=8) 00:28:34.009 Write completed with error (sct=0, sc=8) 00:28:34.009 starting I/O failed: -6 00:28:34.009 Read completed with error (sct=0, sc=8) 00:28:34.009 Write completed with error (sct=0, sc=8) 00:28:34.009 Read completed with error (sct=0, sc=8) 00:28:34.009 Write completed with error (sct=0, sc=8) 00:28:34.009 starting I/O failed: -6 00:28:34.009 Write completed with error (sct=0, sc=8) 00:28:34.009 Read completed with error (sct=0, sc=8) 00:28:34.009 Read completed with error (sct=0, sc=8) 00:28:34.009 Read completed with error (sct=0, sc=8) 00:28:34.009 starting I/O failed: -6 00:28:34.009 Read completed with error (sct=0, sc=8) 00:28:34.009 Read completed with error (sct=0, sc=8) 00:28:34.009 Write completed with error (sct=0, sc=8) 00:28:34.009 Write completed with error (sct=0, sc=8) 00:28:34.009 starting I/O failed: -6 00:28:34.009 Read completed with error (sct=0, sc=8) 00:28:34.009 Read completed with error (sct=0, sc=8) 00:28:34.009 Read completed with error (sct=0, sc=8) 00:28:34.009 Write completed with error (sct=0, sc=8) 00:28:34.009 starting I/O failed: -6 00:28:34.009 Read completed with error (sct=0, sc=8) 00:28:34.009 Read completed with error (sct=0, sc=8) 00:28:34.009 Write completed with error (sct=0, sc=8) 00:28:34.009 Read completed with error (sct=0, sc=8) 00:28:34.009 starting I/O failed: -6 00:28:34.009 Read completed with error (sct=0, sc=8) 00:28:34.009 Read completed with error (sct=0, sc=8) 00:28:34.009 Read completed with error (sct=0, 
sc=8) 00:28:34.009 Write completed with error (sct=0, sc=8) 00:28:34.009 starting I/O failed: -6 00:28:34.009 Read completed with error (sct=0, sc=8) 00:28:34.009 Read completed with error (sct=0, sc=8) 00:28:34.009 Read completed with error (sct=0, sc=8) 00:28:34.009 Read completed with error (sct=0, sc=8) 00:28:34.009 starting I/O failed: -6 00:28:34.009 Write completed with error (sct=0, sc=8) 00:28:34.009 Read completed with error (sct=0, sc=8) 00:28:34.009 Read completed with error (sct=0, sc=8) 00:28:34.009 Read completed with error (sct=0, sc=8) 00:28:34.009 starting I/O failed: -6 00:28:34.009 Read completed with error (sct=0, sc=8) 00:28:34.009 [2024-11-20 07:24:38.305697] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7faba800d4d0 is same with the state(6) to be set 00:28:34.944 [2024-11-20 07:24:39.268913] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12b29a0 is same with the state(6) to be set 00:28:34.944 Read completed with error (sct=0, sc=8) 00:28:34.944 Read completed with error (sct=0, sc=8) 00:28:34.944 Read completed with error (sct=0, sc=8) 00:28:34.944 Read completed with error (sct=0, sc=8) 00:28:34.944 Read completed with error (sct=0, sc=8) 00:28:34.944 Write completed with error (sct=0, sc=8) 00:28:34.944 Read completed with error (sct=0, sc=8) 00:28:34.944 Read completed with error (sct=0, sc=8) 00:28:34.944 Read completed with error (sct=0, sc=8) 00:28:34.944 Read completed with error (sct=0, sc=8) 00:28:34.944 Write completed with error (sct=0, sc=8) 00:28:34.944 Read completed with error (sct=0, sc=8) 00:28:34.944 Read completed with error (sct=0, sc=8) 00:28:34.944 Read completed with error (sct=0, sc=8) 00:28:34.944 Read completed with error (sct=0, sc=8) 00:28:34.944 Write completed with error (sct=0, sc=8) 00:28:34.944 Write completed with error (sct=0, sc=8) 00:28:34.944 Read completed with error (sct=0, sc=8) 00:28:34.944 Read completed with error (sct=0, 
sc=8) 00:28:34.944 Read completed with error (sct=0, sc=8) 00:28:34.944 Read completed with error (sct=0, sc=8) 00:28:34.944 Read completed with error (sct=0, sc=8) 00:28:34.944 Read completed with error (sct=0, sc=8) 00:28:34.944 Read completed with error (sct=0, sc=8) 00:28:34.944 Read completed with error (sct=0, sc=8) 00:28:34.944 Write completed with error (sct=0, sc=8) 00:28:34.944 Read completed with error (sct=0, sc=8) 00:28:34.944 Write completed with error (sct=0, sc=8) 00:28:34.944 Read completed with error (sct=0, sc=8) 00:28:34.944 Read completed with error (sct=0, sc=8) 00:28:34.944 Read completed with error (sct=0, sc=8) 00:28:34.944 Read completed with error (sct=0, sc=8) 00:28:34.944 Read completed with error (sct=0, sc=8) 00:28:34.944 Read completed with error (sct=0, sc=8) 00:28:34.944 Write completed with error (sct=0, sc=8) 00:28:34.944 Read completed with error (sct=0, sc=8) 00:28:34.944 Read completed with error (sct=0, sc=8) 00:28:34.944 Read completed with error (sct=0, sc=8) 00:28:34.944 Read completed with error (sct=0, sc=8) 00:28:34.944 [2024-11-20 07:24:39.308478] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7faba800d020 is same with the state(6) to be set 00:28:34.944 Read completed with error (sct=0, sc=8) 00:28:34.945 Read completed with error (sct=0, sc=8) 00:28:34.945 Read completed with error (sct=0, sc=8) 00:28:34.945 Read completed with error (sct=0, sc=8) 00:28:34.945 Read completed with error (sct=0, sc=8) 00:28:34.945 Read completed with error (sct=0, sc=8) 00:28:34.945 Write completed with error (sct=0, sc=8) 00:28:34.945 Read completed with error (sct=0, sc=8) 00:28:34.945 Read completed with error (sct=0, sc=8) 00:28:34.945 Read completed with error (sct=0, sc=8) 00:28:34.945 Read completed with error (sct=0, sc=8) 00:28:34.945 Read completed with error (sct=0, sc=8) 00:28:34.945 Read completed with error (sct=0, sc=8) 00:28:34.945 Write completed with error (sct=0, sc=8) 00:28:34.945 
Write completed with error (sct=0, sc=8) 00:28:34.945 Write completed with error (sct=0, sc=8) 00:28:34.945 Read completed with error (sct=0, sc=8) 00:28:34.945 Read completed with error (sct=0, sc=8) 00:28:34.945 Read completed with error (sct=0, sc=8) 00:28:34.945 Read completed with error (sct=0, sc=8) 00:28:34.945 Read completed with error (sct=0, sc=8) 00:28:34.945 Read completed with error (sct=0, sc=8) 00:28:34.945 Read completed with error (sct=0, sc=8) 00:28:34.945 Read completed with error (sct=0, sc=8) 00:28:34.945 Read completed with error (sct=0, sc=8) 00:28:34.945 Write completed with error (sct=0, sc=8) 00:28:34.945 Read completed with error (sct=0, sc=8) 00:28:34.945 Read completed with error (sct=0, sc=8) 00:28:34.945 Read completed with error (sct=0, sc=8) 00:28:34.945 Write completed with error (sct=0, sc=8) 00:28:34.945 Read completed with error (sct=0, sc=8) 00:28:34.945 Read completed with error (sct=0, sc=8) 00:28:34.945 Read completed with error (sct=0, sc=8) 00:28:34.945 Read completed with error (sct=0, sc=8) 00:28:34.945 Read completed with error (sct=0, sc=8) 00:28:34.945 Write completed with error (sct=0, sc=8) 00:28:34.945 Read completed with error (sct=0, sc=8) 00:28:34.945 Read completed with error (sct=0, sc=8) 00:28:34.945 Write completed with error (sct=0, sc=8) 00:28:34.945 [2024-11-20 07:24:39.308690] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7faba800d800 is same with the state(6) to be set 00:28:34.945 Read completed with error (sct=0, sc=8) 00:28:34.945 Read completed with error (sct=0, sc=8) 00:28:34.945 Write completed with error (sct=0, sc=8) 00:28:34.945 Write completed with error (sct=0, sc=8) 00:28:34.945 Write completed with error (sct=0, sc=8) 00:28:34.945 Write completed with error (sct=0, sc=8) 00:28:34.945 Read completed with error (sct=0, sc=8) 00:28:34.945 Read completed with error (sct=0, sc=8) 00:28:34.945 Read completed with error (sct=0, sc=8) 00:28:34.945 Read 
completed with error (sct=0, sc=8) 00:28:34.945 Read completed with error (sct=0, sc=8) 00:28:34.945 Write completed with error (sct=0, sc=8) 00:28:34.945 Read completed with error (sct=0, sc=8) 00:28:34.945 Read completed with error (sct=0, sc=8) 00:28:34.945 Read completed with error (sct=0, sc=8) 00:28:34.945 Read completed with error (sct=0, sc=8) 00:28:34.945 Read completed with error (sct=0, sc=8) 00:28:34.945 Write completed with error (sct=0, sc=8) 00:28:34.945 Write completed with error (sct=0, sc=8) 00:28:34.945 Read completed with error (sct=0, sc=8) 00:28:34.945 Read completed with error (sct=0, sc=8) 00:28:34.945 Write completed with error (sct=0, sc=8) 00:28:34.945 Read completed with error (sct=0, sc=8) 00:28:34.945 [2024-11-20 07:24:39.308807] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12b14a0 is same with the state(6) to be set 00:28:34.945 Read completed with error (sct=0, sc=8) 00:28:34.945 Read completed with error (sct=0, sc=8) 00:28:34.945 Write completed with error (sct=0, sc=8) 00:28:34.945 Read completed with error (sct=0, sc=8) 00:28:34.945 Read completed with error (sct=0, sc=8) 00:28:34.945 Read completed with error (sct=0, sc=8) 00:28:34.945 Write completed with error (sct=0, sc=8) 00:28:34.945 Write completed with error (sct=0, sc=8) 00:28:34.945 Read completed with error (sct=0, sc=8) 00:28:34.945 Read completed with error (sct=0, sc=8) 00:28:34.945 Write completed with error (sct=0, sc=8) 00:28:34.945 Write completed with error (sct=0, sc=8) 00:28:34.945 Write completed with error (sct=0, sc=8) 00:28:34.945 Write completed with error (sct=0, sc=8) 00:28:34.945 Write completed with error (sct=0, sc=8) 00:28:34.945 Read completed with error (sct=0, sc=8) 00:28:34.945 Write completed with error (sct=0, sc=8) 00:28:34.945 Write completed with error (sct=0, sc=8) 00:28:34.945 Read completed with error (sct=0, sc=8) 00:28:34.945 Read completed with error (sct=0, sc=8) 00:28:34.945 Write completed 
with error (sct=0, sc=8) 00:28:34.945 Write completed with error (sct=0, sc=8) 00:28:34.945 Read completed with error (sct=0, sc=8) 00:28:34.945 Read completed with error (sct=0, sc=8) 00:28:34.945 Read completed with error (sct=0, sc=8) 00:28:34.945 Write completed with error (sct=0, sc=8) 00:28:34.945 Read completed with error (sct=0, sc=8) 00:28:34.945 Write completed with error (sct=0, sc=8) 00:28:34.945 Write completed with error (sct=0, sc=8) 00:28:34.945 Write completed with error (sct=0, sc=8) 00:28:34.945 Read completed with error (sct=0, sc=8) 00:28:34.945 Read completed with error (sct=0, sc=8) 00:28:34.945 Read completed with error (sct=0, sc=8) 00:28:34.945 Read completed with error (sct=0, sc=8) 00:28:34.945 Read completed with error (sct=0, sc=8) 00:28:34.945 Read completed with error (sct=0, sc=8) 00:28:34.945 Write completed with error (sct=0, sc=8) 00:28:34.945 Read completed with error (sct=0, sc=8) 00:28:34.945 Write completed with error (sct=0, sc=8) 00:28:34.945 [2024-11-20 07:24:39.309656] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7faba8000c40 is same with the state(6) to be set 00:28:34.945 Initializing NVMe Controllers 00:28:34.945 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:34.945 Controller IO queue size 128, less than required. 00:28:34.945 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:34.945 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:28:34.945 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:28:34.945 Initialization complete. Launching workers. 
00:28:34.945 ========================================================
00:28:34.945 Latency(us)
00:28:34.945 Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:28:34.945 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2:     151.87       0.07  917646.13     282.15 2003975.97
00:28:34.945 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3:     165.27       0.08 1125549.15     587.32 2003398.66
00:28:34.945 ========================================================
00:28:34.945 Total                                                                    :     317.13       0.15 1025989.96     282.15 2003975.97
00:28:34.945
00:28:34.945 [2024-11-20 07:24:39.310253] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12b29a0 (9): Bad file descriptor
00:28:34.945 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:28:34.945 07:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:34.945 07:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:28:34.945 07:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1370848
00:28:34.945 07:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:28:35.514 07:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:28:35.514 07:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1370848
00:28:35.514 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (1370848) - No such process
00:28:35.514 07:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 1370848
00:28:35.514 07:24:39
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0
00:28:35.514 07:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 1370848
00:28:35.514 07:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait
00:28:35.514 07:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:28:35.514 07:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait
00:28:35.514 07:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:28:35.514 07:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 1370848
00:28:35.514 07:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1
00:28:35.514 07:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:28:35.514 07:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:28:35.514 07:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:28:35.514 07:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:28:35.514 07:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:35.514 07:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
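[Editor's note] The `delay` / `kill -0` / `sleep 0.5` lines in this part of the trace form the script's wait loop: after `nvmf_delete_subsystem` tears down the target subsystem, the test polls the perf process until it exits (it should fail fast once its controller is gone), giving up after roughly 30 half-second tries. A hedged reconstruction follows; `wait_for_exit` is an illustrative name, not the script's.

```shell
# Poll a PID with kill -0 until the process is gone, mirroring the
# delay loop at delete_subsystem.sh lines 34-38 in the trace above.
# Returns non-zero if the process is still alive after the bound.
wait_for_exit() {
    local pid=$1 delay=0
    while kill -0 "$pid" 2>/dev/null; do
        if [ "$delay" -gt 30 ]; then
            return 1
        fi
        delay=$((delay + 1))
        sleep 0.5
    done
    return 0
}
```

The subsequent `NOT wait 1370848` check then asserts the opposite sign: `wait` on the reaped perf PID must return a non-zero status, confirming the I/O workload really did fail because the subsystem was deleted underneath it rather than completing cleanly.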
00:28:35.514 07:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:35.514 07:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:35.514 07:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:35.514 07:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:35.514 [2024-11-20 07:24:39.839577] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:35.514 07:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:35.514 07:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:35.514 07:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:35.514 07:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:35.514 07:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:35.514 07:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=1371389 00:28:35.514 07:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:28:35.514 07:24:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:28:35.514 07:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1371389 00:28:35.514 07:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:28:35.514 [2024-11-20 07:24:39.923664] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:28:36.080 07:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:28:36.080 07:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1371389 00:28:36.080 07:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:28:36.339 07:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:28:36.339 07:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1371389 00:28:36.339 07:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:28:36.905 07:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:28:36.905 07:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1371389 00:28:36.905 07:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:28:37.471 07:24:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:28:37.471 07:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1371389 00:28:37.471 07:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:28:38.038 07:24:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:28:38.038 07:24:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1371389 00:28:38.038 07:24:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:28:38.605 07:24:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:28:38.605 07:24:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1371389 00:28:38.605 07:24:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:28:38.605 Initializing NVMe Controllers 00:28:38.605 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:38.605 Controller IO queue size 128, less than required. 00:28:38.605 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:38.605 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:28:38.605 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:28:38.605 Initialization complete. Launching workers. 
00:28:38.605 ======================================================== 00:28:38.605 Latency(us) 00:28:38.605 Device Information : IOPS MiB/s Average min max 00:28:38.605 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002761.18 1000136.56 1009016.29 00:28:38.605 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004474.09 1000260.72 1043732.29 00:28:38.605 ======================================================== 00:28:38.605 Total : 256.00 0.12 1003617.63 1000136.56 1043732.29 00:28:38.605 00:28:38.864 07:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:28:38.864 07:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1371389 00:28:38.864 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (1371389) - No such process 00:28:38.864 07:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 1371389 00:28:38.864 07:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:28:38.864 07:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:28:38.864 07:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:38.864 07:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:28:38.865 07:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:38.865 07:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:28:38.865 07:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@125 -- # for i in {1..20} 00:28:38.865 07:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:38.865 rmmod nvme_tcp 00:28:38.865 rmmod nvme_fabrics 00:28:39.124 rmmod nvme_keyring 00:28:39.124 07:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:39.124 07:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:28:39.124 07:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:28:39.124 07:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 1370784 ']' 00:28:39.124 07:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 1370784 00:28:39.124 07:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # '[' -z 1370784 ']' 00:28:39.124 07:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # kill -0 1370784 00:28:39.124 07:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@957 -- # uname 00:28:39.124 07:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:28:39.124 07:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1370784 00:28:39.124 07:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:28:39.124 07:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:28:39.124 07:24:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1370784' 00:28:39.124 killing process with pid 1370784 00:28:39.124 07:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@971 -- # kill 1370784 00:28:39.124 07:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@976 -- # wait 1370784 00:28:39.124 07:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:39.124 07:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:39.124 07:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:39.124 07:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:28:39.383 07:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:28:39.383 07:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:39.383 07:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:28:39.383 07:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:39.383 07:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:39.383 07:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:39.383 07:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:39.383 07:24:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:41.291 07:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:41.291 00:28:41.291 real 0m16.138s 00:28:41.291 user 0m26.133s 00:28:41.291 sys 0m6.170s 00:28:41.291 07:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1128 -- # xtrace_disable 00:28:41.291 07:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:41.291 ************************************ 00:28:41.291 END TEST nvmf_delete_subsystem 00:28:41.291 ************************************ 00:28:41.291 07:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:28:41.291 07:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:28:41.292 07:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:28:41.292 07:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:28:41.292 ************************************ 00:28:41.292 START TEST nvmf_host_management 00:28:41.292 ************************************ 00:28:41.292 07:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:28:41.551 * Looking for test storage... 
00:28:41.551 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:41.551 07:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:28:41.551 07:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1691 -- # lcov --version 00:28:41.551 07:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:28:41.551 07:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:28:41.551 07:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:41.551 07:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:41.551 07:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:41.551 07:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:28:41.551 07:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:28:41.551 07:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:28:41.551 07:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:28:41.551 07:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:28:41.551 07:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:28:41.551 07:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:28:41.551 07:24:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:41.551 07:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:28:41.551 07:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:28:41.551 07:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:41.551 07:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:41.551 07:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:28:41.551 07:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:28:41.551 07:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:41.551 07:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:28:41.551 07:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:28:41.551 07:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:28:41.551 07:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:28:41.551 07:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:41.551 07:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:28:41.551 07:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:28:41.551 07:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:41.551 07:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:41.551 07:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:28:41.551 07:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:41.551 07:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:28:41.551 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:41.551 --rc genhtml_branch_coverage=1 00:28:41.551 --rc genhtml_function_coverage=1 00:28:41.551 --rc genhtml_legend=1 00:28:41.551 --rc geninfo_all_blocks=1 00:28:41.551 --rc geninfo_unexecuted_blocks=1 00:28:41.551 00:28:41.551 ' 00:28:41.551 07:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:28:41.551 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:41.551 --rc genhtml_branch_coverage=1 00:28:41.551 --rc genhtml_function_coverage=1 00:28:41.551 --rc genhtml_legend=1 00:28:41.551 --rc geninfo_all_blocks=1 00:28:41.551 --rc geninfo_unexecuted_blocks=1 00:28:41.551 00:28:41.551 ' 00:28:41.552 07:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:28:41.552 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:41.552 --rc genhtml_branch_coverage=1 00:28:41.552 --rc genhtml_function_coverage=1 00:28:41.552 --rc genhtml_legend=1 00:28:41.552 --rc geninfo_all_blocks=1 00:28:41.552 --rc geninfo_unexecuted_blocks=1 00:28:41.552 00:28:41.552 ' 00:28:41.552 07:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:28:41.552 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:41.552 --rc genhtml_branch_coverage=1 00:28:41.552 --rc genhtml_function_coverage=1 00:28:41.552 --rc genhtml_legend=1 00:28:41.552 --rc geninfo_all_blocks=1 00:28:41.552 --rc geninfo_unexecuted_blocks=1 00:28:41.552 00:28:41.552 ' 00:28:41.552 07:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:41.552 07:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:28:41.552 07:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:41.552 07:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:41.552 07:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:41.552 07:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:41.552 07:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:41.552 07:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:41.552 07:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:41.552 07:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:41.552 07:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:41.552 07:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:41.552 07:24:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:28:41.552 07:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:28:41.552 07:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:41.552 07:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:41.552 07:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:41.552 07:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:41.552 07:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:41.552 07:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:28:41.552 07:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:41.552 07:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:41.552 07:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:41.552 07:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:41.552 07:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:41.552 07:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:41.552 
07:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:28:41.552 07:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:41.552 07:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:28:41.552 07:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:41.552 07:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:41.552 07:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:41.552 07:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:41.552 07:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:41.552 07:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:28:41.552 07:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:28:41.552 07:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' 
-n '' ']' 00:28:41.552 07:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:41.552 07:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:41.552 07:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:41.552 07:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:41.552 07:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:28:41.552 07:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:41.552 07:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:41.552 07:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:41.552 07:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:41.552 07:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:41.552 07:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:41.552 07:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:41.552 07:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:41.552 07:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:41.552 07:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:41.552 07:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:28:41.552 07:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:48.120 07:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:48.120 07:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:28:48.120 07:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:48.120 07:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:48.120 07:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:48.120 07:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:48.120 07:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:48.120 07:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:28:48.120 07:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:48.120 07:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:28:48.120 07:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:28:48.120 07:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:28:48.120 07:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:28:48.120 
07:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:28:48.120 07:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:28:48.120 07:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:48.120 07:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:48.120 07:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:48.120 07:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:48.120 07:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:48.120 07:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:48.120 07:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:48.120 07:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:48.121 07:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:48.121 07:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:48.121 07:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:48.121 07:24:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:48.121 07:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:48.121 07:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:48.121 07:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:48.121 07:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:48.121 07:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:48.121 07:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:48.121 07:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:48.121 07:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:28:48.121 Found 0000:86:00.0 (0x8086 - 0x159b) 00:28:48.121 07:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:48.121 07:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:48.121 07:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:48.121 07:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:48.121 07:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:48.121 07:24:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:48.121 07:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:28:48.121 Found 0000:86:00.1 (0x8086 - 0x159b) 00:28:48.121 07:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:48.121 07:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:48.121 07:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:48.121 07:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:48.121 07:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:48.121 07:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:48.121 07:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:48.121 07:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:48.121 07:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:48.121 07:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:48.121 07:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:48.121 07:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:48.121 07:24:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:48.121 07:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:48.121 07:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:48.121 07:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:28:48.121 Found net devices under 0000:86:00.0: cvl_0_0 00:28:48.121 07:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:48.121 07:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:48.121 07:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:48.121 07:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:48.121 07:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:48.121 07:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:48.121 07:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:48.121 07:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:48.121 07:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:28:48.121 Found net devices under 0000:86:00.1: cvl_0_1 00:28:48.121 07:24:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:48.121 07:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:48.121 07:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:28:48.121 07:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:48.121 07:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:48.121 07:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:48.121 07:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:48.121 07:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:48.121 07:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:48.121 07:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:48.121 07:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:48.121 07:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:48.121 07:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:48.121 07:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:48.121 07:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
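The trace above shows `gather_supported_nvmf_pci_devs` matching PCI vendor:device pairs against known E810/X722/mlx parts and collecting the netdev behind each hit (here two ice/E810 ports, `cvl_0_0` and `cvl_0_1`). A hedged condensation of that scan, run against a fake sysfs tree so it works without hardware (the device-ID list is abbreviated from the trace; `scan_nvmf_pci_devs` is an illustrative name, not the script's real function):

```shell
#!/usr/bin/env bash
# Sketch of the discovery loop: walk a /sys/bus/pci/devices-style tree,
# keep devices whose vendor:device pair is a supported NIC, and report
# the net interface under each. IDs mirror the trace (0x8086:0x159b = E810).
scan_nvmf_pci_devs() {
    local root=$1 intel=0x8086 mellanox=0x15b3 pci id net
    local supported=" $intel:0x1592 $intel:0x159b $intel:0x37d2 $mellanox:0x1017 $mellanox:0x1019 "
    for pci in "$root"/*; do
        id="$(<"$pci/vendor"):$(<"$pci/device")"
        case "$supported" in *" $id "*)
            for net in "$pci"/net/*; do
                echo "Found ${pci##*/} ($id): ${net##*/}"
            done ;;
        esac
    done
}

# fake sysfs tree with one E810 port exposing netdev cvl_0_0
root=$(mktemp -d)
mkdir -p "$root/0000:86:00.0/net/cvl_0_0"
echo 0x8086 > "$root/0000:86:00.0/vendor"
echo 0x159b > "$root/0000:86:00.0/device"
scan_nvmf_pci_devs "$root"
rm -rf "$root"
```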
00:28:48.121 07:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:48.121 07:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:48.121 07:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:48.121 07:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:48.121 07:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:48.121 07:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:48.121 07:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:48.121 07:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:48.121 07:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:48.121 07:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:48.121 07:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:48.121 07:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:48.121 07:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:48.121 07:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:48.121 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:48.121 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.464 ms 00:28:48.121 00:28:48.121 --- 10.0.0.2 ping statistics --- 00:28:48.121 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:48.121 rtt min/avg/max/mdev = 0.464/0.464/0.464/0.000 ms 00:28:48.121 07:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:48.121 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:48.121 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.207 ms 00:28:48.121 00:28:48.121 --- 10.0.0.1 ping statistics --- 00:28:48.121 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:48.121 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:28:48.121 07:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:48.121 07:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:28:48.121 07:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:48.121 07:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:48.121 07:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:48.121 07:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:48.121 07:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 
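The `nvmf_tcp_init` entries above build the test topology: the target port `cvl_0_0` moves into namespace `cvl_0_0_ns_spdk` at 10.0.0.2, the initiator port `cvl_0_1` stays in the root namespace at 10.0.0.1, TCP/4420 is opened, and both directions are ping-verified. A dry-run condensation of that sequence (the `run` wrapper echoes instead of executing, so it is inspectable without root; interface names and addresses are taken from the log):

```shell
# Dry-run sketch of the namespace setup traced above (nvmf/common.sh).
run() { echo "+ $*"; CMDS="${CMDS-}$* ; "; }

NS=cvl_0_0_ns_spdk
run ip -4 addr flush cvl_0_0
run ip -4 addr flush cvl_0_1
run ip netns add "$NS"
run ip link set cvl_0_0 netns "$NS"                          # target iface into ns
run ip addr add 10.0.0.1/24 dev cvl_0_1                      # initiator IP
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0  # target IP
run ip link set cvl_0_1 up
run ip netns exec "$NS" ip link set cvl_0_0 up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                      # initiator -> target
run ip netns exec "$NS" ping -c 1 10.0.0.1  # target -> initiator
```

With the target confined to its own namespace, the subsequent `nvmf_tgt` launch is prefixed with `ip netns exec cvl_0_0_ns_spdk`, which is exactly what the `NVMF_TARGET_NS_CMD` entry in the trace arranges.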
00:28:48.121 07:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:48.121 07:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:48.121 07:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:28:48.122 07:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:28:48.122 07:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:28:48.122 07:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:48.122 07:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:48.122 07:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:48.122 07:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=1375526 00:28:48.122 07:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 1375526 00:28:48.122 07:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:28:48.122 07:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@833 -- # '[' -z 1375526 ']' 00:28:48.122 07:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:48.122 07:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
common/autotest_common.sh@838 -- # local max_retries=100 00:28:48.122 07:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:48.122 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:48.122 07:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:48.122 07:24:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:48.122 [2024-11-20 07:24:51.974119] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:28:48.122 [2024-11-20 07:24:51.975108] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 00:28:48.122 [2024-11-20 07:24:51.975146] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:48.122 [2024-11-20 07:24:52.053664] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:48.122 [2024-11-20 07:24:52.097397] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:48.122 [2024-11-20 07:24:52.097437] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:48.122 [2024-11-20 07:24:52.097444] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:48.122 [2024-11-20 07:24:52.097450] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:48.122 [2024-11-20 07:24:52.097456] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:48.122 [2024-11-20 07:24:52.099113] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:48.122 [2024-11-20 07:24:52.099221] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:48.122 [2024-11-20 07:24:52.099327] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:48.122 [2024-11-20 07:24:52.099329] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:28:48.122 [2024-11-20 07:24:52.168739] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:28:48.122 [2024-11-20 07:24:52.169402] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:28:48.122 [2024-11-20 07:24:52.169626] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:28:48.122 [2024-11-20 07:24:52.169956] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:28:48.122 [2024-11-20 07:24:52.170009] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:28:48.122 07:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:48.122 07:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@866 -- # return 0 00:28:48.122 07:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:48.122 07:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:48.122 07:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:48.122 07:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:48.122 07:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:48.122 07:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:48.122 07:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:48.122 [2024-11-20 07:24:52.236014] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:48.122 07:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:48.122 07:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:28:48.122 07:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:48.122 07:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:48.122 07:24:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:48.122 07:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:28:48.122 07:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:28:48.122 07:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:48.122 07:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:48.122 Malloc0 00:28:48.122 [2024-11-20 07:24:52.328306] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:48.122 07:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:48.122 07:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:28:48.122 07:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:48.122 07:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:48.122 07:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=1375567 00:28:48.122 07:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 1375567 /var/tmp/bdevperf.sock 00:28:48.122 07:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@833 -- # '[' -z 1375567 ']' 00:28:48.122 07:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@837 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:28:48.122 07:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:28:48.122 07:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:28:48.122 07:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:48.122 07:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:48.122 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:28:48.122 07:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:28:48.122 07:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:48.122 07:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:28:48.122 07:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:48.122 07:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:48.122 07:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:48.122 { 00:28:48.122 "params": { 00:28:48.122 "name": "Nvme$subsystem", 00:28:48.122 "trtype": "$TEST_TRANSPORT", 00:28:48.122 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:48.122 "adrfam": "ipv4", 00:28:48.122 "trsvcid": "$NVMF_PORT", 00:28:48.122 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:28:48.122 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:48.122 "hdgst": ${hdgst:-false}, 00:28:48.122 "ddgst": ${ddgst:-false} 00:28:48.122 }, 00:28:48.122 "method": "bdev_nvme_attach_controller" 00:28:48.122 } 00:28:48.122 EOF 00:28:48.122 )") 00:28:48.122 07:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:28:48.122 07:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:28:48.122 07:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:28:48.122 07:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:48.122 "params": { 00:28:48.122 "name": "Nvme0", 00:28:48.122 "trtype": "tcp", 00:28:48.122 "traddr": "10.0.0.2", 00:28:48.122 "adrfam": "ipv4", 00:28:48.122 "trsvcid": "4420", 00:28:48.122 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:48.122 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:48.122 "hdgst": false, 00:28:48.122 "ddgst": false 00:28:48.122 }, 00:28:48.122 "method": "bdev_nvme_attach_controller" 00:28:48.122 }' 00:28:48.122 [2024-11-20 07:24:52.427609] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 00:28:48.122 [2024-11-20 07:24:52.427657] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1375567 ] 00:28:48.122 [2024-11-20 07:24:52.505602] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:48.122 [2024-11-20 07:24:52.547084] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:48.381 Running I/O for 10 seconds... 
00:28:48.381 07:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:48.381 07:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@866 -- # return 0 00:28:48.381 07:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:28:48.381 07:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:48.381 07:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:48.381 07:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:48.381 07:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:48.381 07:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:28:48.381 07:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:28:48.381 07:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:28:48.381 07:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:28:48.381 07:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:28:48.381 07:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:28:48.381 07:24:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:28:48.381 07:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:28:48.381 07:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:28:48.381 07:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:48.381 07:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:48.381 07:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:48.381 07:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=78 00:28:48.381 07:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 78 -ge 100 ']' 00:28:48.381 07:24:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:28:48.641 07:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:28:48.641 07:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:28:48.641 07:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:28:48.641 07:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:28:48.641 07:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 
00:28:48.641 07:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:48.641 07:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:48.641 07:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=671 00:28:48.641 07:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 671 -ge 100 ']' 00:28:48.641 07:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:28:48.641 07:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:28:48.641 07:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:28:48.641 07:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:28:48.641 07:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:48.641 07:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:48.641 [2024-11-20 07:24:53.147728] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d5d70 is same with the state(6) to be set 00:28:48.641 [2024-11-20 07:24:53.147766] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7d5d70 is same with the state(6) to be set 00:28:48.641 07:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:48.641 07:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:28:48.641 07:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:48.641 07:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:48.641 [2024-11-20 07:24:53.154583] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:48.641 [2024-11-20 07:24:53.154613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.641 [2024-11-20 07:24:53.154623] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:48.641 [2024-11-20 07:24:53.154631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.641 [2024-11-20 07:24:53.154639] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:48.641 [2024-11-20 07:24:53.154645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.641 [2024-11-20 07:24:53.154653] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:48.641 [2024-11-20 07:24:53.154660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.641 [2024-11-20 07:24:53.154666] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a3500 is same with the state(6) to be set 00:28:48.641 [2024-11-20 07:24:53.154702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:98304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.641 [2024-11-20 07:24:53.154711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [... identical WRITE command / "ABORTED - SQ DELETION (00/08)" completion pairs repeat for cid:1 through cid:61 (lba:98432 through lba:106112, len:128 each), elided for brevity ...] 00:28:48.643 [2024-11-20 07:24:53.155638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:28:48.643 [2024-11-20 07:24:53.155646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:106240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.643 [2024-11-20 07:24:53.155653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.643 [2024-11-20 07:24:53.155661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:106368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.643 [2024-11-20 07:24:53.155667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.643 [2024-11-20 07:24:53.156629] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:28:48.643 task offset: 98304 on job bdev=Nvme0n1 fails 00:28:48.643 00:28:48.643 Latency(us) 00:28:48.643 [2024-11-20T06:24:53.199Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:48.643 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:48.643 Job: Nvme0n1 ended in about 0.41 seconds with error 00:28:48.643 Verification LBA range: start 0x0 length 0x400 00:28:48.643 Nvme0n1 : 0.41 1876.94 117.31 156.41 0.00 30630.11 1617.03 27696.08 00:28:48.643 [2024-11-20T06:24:53.199Z] =================================================================================================================== 00:28:48.643 [2024-11-20T06:24:53.199Z] Total : 1876.94 117.31 156.41 0.00 30630.11 1617.03 27696.08 00:28:48.643 [2024-11-20 07:24:53.159017] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:28:48.643 [2024-11-20 07:24:53.159039] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a3500 (9): Bad file descriptor 00:28:48.643 [2024-11-20 07:24:53.161960] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: 
[nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:28:48.643 07:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:48.643 07:24:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:28:50.018 07:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 1375567 00:28:50.018 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (1375567) - No such process 00:28:50.018 07:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:28:50.018 07:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:28:50.018 07:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:28:50.018 07:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:28:50.018 07:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:28:50.018 07:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:28:50.018 07:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:50.018 07:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:50.018 { 00:28:50.018 "params": { 00:28:50.018 "name": "Nvme$subsystem", 00:28:50.018 "trtype": 
"$TEST_TRANSPORT", 00:28:50.018 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:50.018 "adrfam": "ipv4", 00:28:50.018 "trsvcid": "$NVMF_PORT", 00:28:50.018 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:50.018 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:50.018 "hdgst": ${hdgst:-false}, 00:28:50.018 "ddgst": ${ddgst:-false} 00:28:50.018 }, 00:28:50.018 "method": "bdev_nvme_attach_controller" 00:28:50.018 } 00:28:50.018 EOF 00:28:50.018 )") 00:28:50.018 07:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:28:50.018 07:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:28:50.018 07:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:28:50.018 07:24:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:50.018 "params": { 00:28:50.018 "name": "Nvme0", 00:28:50.018 "trtype": "tcp", 00:28:50.018 "traddr": "10.0.0.2", 00:28:50.018 "adrfam": "ipv4", 00:28:50.018 "trsvcid": "4420", 00:28:50.018 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:50.018 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:50.018 "hdgst": false, 00:28:50.018 "ddgst": false 00:28:50.018 }, 00:28:50.018 "method": "bdev_nvme_attach_controller" 00:28:50.018 }' 00:28:50.018 [2024-11-20 07:24:54.221811] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 
00:28:50.019 [2024-11-20 07:24:54.221863] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1375822 ] 00:28:50.019 [2024-11-20 07:24:54.299762] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:50.019 [2024-11-20 07:24:54.339707] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:50.277 Running I/O for 1 seconds... 00:28:51.213 1984.00 IOPS, 124.00 MiB/s 00:28:51.213 Latency(us) 00:28:51.213 [2024-11-20T06:24:55.769Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:51.213 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:51.213 Verification LBA range: start 0x0 length 0x400 00:28:51.213 Nvme0n1 : 1.02 2016.42 126.03 0.00 0.00 31237.35 5271.37 27354.16 00:28:51.213 [2024-11-20T06:24:55.769Z] =================================================================================================================== 00:28:51.213 [2024-11-20T06:24:55.769Z] Total : 2016.42 126.03 0.00 0.00 31237.35 5271.37 27354.16 00:28:51.471 07:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:28:51.471 07:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:28:51.471 07:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:28:51.471 07:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:51.471 07:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
target/host_management.sh@40 -- # nvmftestfini 00:28:51.471 07:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:51.471 07:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:28:51.471 07:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:51.471 07:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:28:51.471 07:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:51.471 07:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:51.471 rmmod nvme_tcp 00:28:51.471 rmmod nvme_fabrics 00:28:51.471 rmmod nvme_keyring 00:28:51.471 07:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:51.471 07:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:28:51.471 07:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:28:51.471 07:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 1375526 ']' 00:28:51.471 07:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 1375526 00:28:51.471 07:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@952 -- # '[' -z 1375526 ']' 00:28:51.471 07:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@956 -- # kill -0 1375526 00:28:51.471 07:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@957 -- # uname 00:28:51.471 07:24:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:28:51.471 07:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1375526 00:28:51.471 07:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:28:51.471 07:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:28:51.471 07:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1375526' 00:28:51.471 killing process with pid 1375526 00:28:51.471 07:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@971 -- # kill 1375526 00:28:51.471 07:24:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@976 -- # wait 1375526 00:28:51.731 [2024-11-20 07:24:56.080983] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:28:51.731 07:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:51.731 07:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:51.731 07:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:51.731 07:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:28:51.731 07:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:28:51.731 07:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:51.731 07:24:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:28:51.731 07:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:51.731 07:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:51.731 07:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:51.731 07:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:51.731 07:24:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:53.635 07:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:53.635 07:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:28:53.635 00:28:53.635 real 0m12.362s 00:28:53.635 user 0m17.963s 00:28:53.635 sys 0m6.427s 00:28:53.635 07:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1128 -- # xtrace_disable 00:28:53.635 07:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:53.635 ************************************ 00:28:53.635 END TEST nvmf_host_management 00:28:53.635 ************************************ 00:28:53.894 07:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:28:53.894 07:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:28:53.894 
07:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:28:53.894 07:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:28:53.894 ************************************ 00:28:53.894 START TEST nvmf_lvol 00:28:53.894 ************************************ 00:28:53.894 07:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:28:53.894 * Looking for test storage... 00:28:53.894 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:53.894 07:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:28:53.894 07:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:28:53.894 07:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1691 -- # lcov --version 00:28:53.894 07:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:28:53.894 07:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:53.894 07:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:53.894 07:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:53.894 07:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:28:53.894 07:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:28:53.894 07:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:28:53.894 07:24:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:28:53.894 07:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:28:53.894 07:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:28:53.894 07:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:28:53.894 07:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:53.894 07:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:28:53.894 07:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:28:53.894 07:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:53.894 07:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:53.894 07:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:28:53.894 07:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:28:53.894 07:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:53.894 07:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:28:53.894 07:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:28:53.894 07:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:28:53.894 07:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:28:53.894 07:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:53.894 07:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:28:53.894 07:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:28:53.894 07:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:53.894 07:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:53.894 07:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:28:53.894 07:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:53.894 07:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:28:53.894 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:53.894 --rc genhtml_branch_coverage=1 00:28:53.894 --rc 
genhtml_function_coverage=1 00:28:53.894 --rc genhtml_legend=1 00:28:53.894 --rc geninfo_all_blocks=1 00:28:53.894 --rc geninfo_unexecuted_blocks=1 00:28:53.894 00:28:53.894 ' 00:28:53.894 07:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:28:53.894 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:53.894 --rc genhtml_branch_coverage=1 00:28:53.894 --rc genhtml_function_coverage=1 00:28:53.894 --rc genhtml_legend=1 00:28:53.894 --rc geninfo_all_blocks=1 00:28:53.894 --rc geninfo_unexecuted_blocks=1 00:28:53.894 00:28:53.894 ' 00:28:53.894 07:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:28:53.894 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:53.894 --rc genhtml_branch_coverage=1 00:28:53.894 --rc genhtml_function_coverage=1 00:28:53.894 --rc genhtml_legend=1 00:28:53.894 --rc geninfo_all_blocks=1 00:28:53.894 --rc geninfo_unexecuted_blocks=1 00:28:53.894 00:28:53.894 ' 00:28:53.894 07:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:28:53.894 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:53.894 --rc genhtml_branch_coverage=1 00:28:53.894 --rc genhtml_function_coverage=1 00:28:53.894 --rc genhtml_legend=1 00:28:53.894 --rc geninfo_all_blocks=1 00:28:53.894 --rc geninfo_unexecuted_blocks=1 00:28:53.894 00:28:53.894 ' 00:28:53.894 07:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:53.894 07:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:28:53.894 07:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:53.895 07:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:28:53.895 07:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:53.895 07:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:53.895 07:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:53.895 07:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:53.895 07:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:53.895 07:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:53.895 07:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:53.895 07:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:54.154 07:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:28:54.154 07:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:28:54.154 07:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:54.154 07:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:54.154 07:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:54.154 07:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:54.154 07:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:54.154 07:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:28:54.154 07:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:54.154 07:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:54.154 07:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:54.154 07:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:54.154 07:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:54.154 07:24:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:54.154 07:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:28:54.154 07:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:54.154 07:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:28:54.154 07:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:54.154 07:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:54.154 07:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:54.154 07:24:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:54.154 07:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:54.154 07:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:28:54.154 07:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:28:54.154 07:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:54.154 07:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:54.154 07:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:54.154 07:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:54.154 07:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:54.154 07:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:28:54.154 07:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:28:54.154 07:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:54.154 07:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:28:54.154 07:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:54.155 07:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:54.155 07:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # 
prepare_net_devs 00:28:54.155 07:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:54.155 07:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:54.155 07:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:54.155 07:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:54.155 07:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:54.155 07:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:54.155 07:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:54.155 07:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:28:54.155 07:24:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:29:00.726 07:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:00.726 07:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:29:00.726 07:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:00.726 07:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:00.726 07:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:00.726 07:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:00.726 07:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 
00:29:00.726 07:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:29:00.726 07:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:00.726 07:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:29:00.726 07:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:29:00.726 07:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:29:00.726 07:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:29:00.726 07:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:29:00.726 07:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:29:00.726 07:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:00.726 07:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:00.726 07:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:00.726 07:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:00.726 07:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:00.726 07:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:00.726 07:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:00.726 07:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:00.726 07:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:00.726 07:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:00.726 07:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:00.726 07:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:00.726 07:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:00.726 07:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:00.726 07:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:00.726 07:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:00.726 07:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:00.726 07:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:00.726 07:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:00.726 07:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:29:00.726 Found 0000:86:00.0 (0x8086 - 0x159b) 00:29:00.726 07:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:00.726 07:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:00.726 07:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- 
# [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:00.726 07:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:00.726 07:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:00.726 07:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:00.726 07:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:29:00.726 Found 0000:86:00.1 (0x8086 - 0x159b) 00:29:00.726 07:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:00.726 07:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:00.726 07:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:00.726 07:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:00.726 07:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:00.726 07:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:00.726 07:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:00.726 07:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:00.726 07:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:00.726 07:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:00.726 07:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:00.726 07:25:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:00.726 07:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:00.726 07:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:00.726 07:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:00.726 07:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:29:00.726 Found net devices under 0000:86:00.0: cvl_0_0 00:29:00.726 07:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:00.726 07:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:00.726 07:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:00.726 07:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:00.726 07:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:00.726 07:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:00.726 07:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:00.726 07:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:00.726 07:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:29:00.726 Found net devices under 0000:86:00.1: cvl_0_1 00:29:00.726 07:25:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:00.726 07:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:00.726 07:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:29:00.727 07:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:00.727 07:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:00.727 07:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:00.727 07:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:00.727 07:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:00.727 07:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:00.727 07:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:00.727 07:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:00.727 07:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:00.727 07:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:00.727 07:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:00.727 07:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:00.727 07:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:00.727 07:25:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:00.727 07:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:00.727 07:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:00.727 07:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:00.727 07:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:00.727 07:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:00.727 07:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:00.727 07:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:00.727 07:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:00.727 07:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:00.727 07:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:00.727 07:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:00.727 07:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:00.727 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:29:00.727 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.445 ms 00:29:00.727 00:29:00.727 --- 10.0.0.2 ping statistics --- 00:29:00.727 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:00.727 rtt min/avg/max/mdev = 0.445/0.445/0.445/0.000 ms 00:29:00.727 07:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:00.727 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:00.727 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.222 ms 00:29:00.727 00:29:00.727 --- 10.0.0.1 ping statistics --- 00:29:00.727 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:00.727 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:29:00.727 07:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:00.727 07:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:29:00.727 07:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:00.727 07:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:00.727 07:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:00.727 07:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:00.727 07:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:00.727 07:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:00.727 07:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:00.727 07:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:29:00.727 
07:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:00.727 07:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:00.727 07:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:29:00.727 07:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=1379578 00:29:00.727 07:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:29:00.727 07:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 1379578 00:29:00.727 07:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@833 -- # '[' -z 1379578 ']' 00:29:00.727 07:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:00.727 07:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@838 -- # local max_retries=100 00:29:00.727 07:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:00.727 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:00.727 07:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@842 -- # xtrace_disable 00:29:00.727 07:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:29:00.727 [2024-11-20 07:25:04.417281] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
00:29:00.727 [2024-11-20 07:25:04.418218] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 00:29:00.727 [2024-11-20 07:25:04.418251] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:00.727 [2024-11-20 07:25:04.496084] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:00.727 [2024-11-20 07:25:04.538464] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:00.727 [2024-11-20 07:25:04.538502] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:00.727 [2024-11-20 07:25:04.538510] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:00.727 [2024-11-20 07:25:04.538516] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:00.727 [2024-11-20 07:25:04.538521] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:00.727 [2024-11-20 07:25:04.539903] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:00.727 [2024-11-20 07:25:04.540015] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:00.727 [2024-11-20 07:25:04.540015] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:00.727 [2024-11-20 07:25:04.608660] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:00.727 [2024-11-20 07:25:04.609378] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:29:00.727 [2024-11-20 07:25:04.609464] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:29:00.727 [2024-11-20 07:25:04.609654] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:29:00.727 07:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:29:00.727 07:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@866 -- # return 0 00:29:00.727 07:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:00.727 07:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:00.727 07:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:29:00.727 07:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:00.727 07:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:29:00.727 [2024-11-20 07:25:04.848795] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:00.727 07:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:29:00.727 07:25:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:29:00.727 07:25:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:29:00.986 07:25:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:29:00.986 07:25:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:29:00.986 07:25:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:29:01.245 07:25:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=88338364-d6ab-4aeb-8da3-2892fcfe3ef9 00:29:01.245 07:25:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 88338364-d6ab-4aeb-8da3-2892fcfe3ef9 lvol 20 00:29:01.503 07:25:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=b7ab64ed-9e3a-46ff-b8af-e3b550d8ee79 00:29:01.503 07:25:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:29:01.762 07:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 b7ab64ed-9e3a-46ff-b8af-e3b550d8ee79 00:29:02.021 07:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:02.021 [2024-11-20 07:25:06.488660] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:02.021 07:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:02.280 
07:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=1380061 00:29:02.280 07:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:29:02.280 07:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:29:03.218 07:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot b7ab64ed-9e3a-46ff-b8af-e3b550d8ee79 MY_SNAPSHOT 00:29:03.477 07:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=aecf5ffb-36a1-444a-9e5f-9dece5c977c3 00:29:03.477 07:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize b7ab64ed-9e3a-46ff-b8af-e3b550d8ee79 30 00:29:03.735 07:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone aecf5ffb-36a1-444a-9e5f-9dece5c977c3 MY_CLONE 00:29:03.994 07:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=d478a1eb-900e-41f5-8433-6d2d9f8914ff 00:29:03.994 07:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate d478a1eb-900e-41f5-8433-6d2d9f8914ff 00:29:04.562 07:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 1380061 00:29:12.677 Initializing NVMe Controllers 00:29:12.677 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:29:12.677 
Controller IO queue size 128, less than required. 00:29:12.677 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:12.677 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:29:12.677 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:29:12.677 Initialization complete. Launching workers. 00:29:12.677 ======================================================== 00:29:12.677 Latency(us) 00:29:12.677 Device Information : IOPS MiB/s Average min max 00:29:12.677 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12139.60 47.42 10548.05 2152.45 74400.39 00:29:12.677 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 12338.90 48.20 10373.33 631.46 68991.08 00:29:12.677 ======================================================== 00:29:12.677 Total : 24478.50 95.62 10459.98 631.46 74400.39 00:29:12.677 00:29:12.677 07:25:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:12.934 07:25:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete b7ab64ed-9e3a-46ff-b8af-e3b550d8ee79 00:29:12.934 07:25:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 88338364-d6ab-4aeb-8da3-2892fcfe3ef9 00:29:13.193 07:25:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:29:13.193 07:25:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:29:13.193 07:25:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # 
nvmftestfini 00:29:13.193 07:25:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:13.193 07:25:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:29:13.193 07:25:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:13.193 07:25:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:29:13.193 07:25:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:13.193 07:25:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:13.193 rmmod nvme_tcp 00:29:13.193 rmmod nvme_fabrics 00:29:13.193 rmmod nvme_keyring 00:29:13.193 07:25:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:13.193 07:25:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:29:13.193 07:25:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:29:13.193 07:25:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 1379578 ']' 00:29:13.193 07:25:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 1379578 00:29:13.193 07:25:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@952 -- # '[' -z 1379578 ']' 00:29:13.193 07:25:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@956 -- # kill -0 1379578 00:29:13.193 07:25:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@957 -- # uname 00:29:13.193 07:25:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:29:13.193 07:25:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # ps 
--no-headers -o comm= 1379578 00:29:13.452 07:25:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:29:13.452 07:25:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:29:13.452 07:25:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1379578' 00:29:13.452 killing process with pid 1379578 00:29:13.452 07:25:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@971 -- # kill 1379578 00:29:13.452 07:25:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@976 -- # wait 1379578 00:29:13.452 07:25:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:13.452 07:25:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:13.452 07:25:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:13.452 07:25:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:29:13.452 07:25:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:29:13.452 07:25:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:13.452 07:25:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:29:13.452 07:25:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:13.452 07:25:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:13.452 07:25:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:13.452 07:25:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:13.452 07:25:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:16.034 07:25:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:16.034 00:29:16.034 real 0m21.771s 00:29:16.034 user 0m55.344s 00:29:16.034 sys 0m9.932s 00:29:16.034 07:25:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1128 -- # xtrace_disable 00:29:16.034 07:25:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:29:16.034 ************************************ 00:29:16.034 END TEST nvmf_lvol 00:29:16.034 ************************************ 00:29:16.034 07:25:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:29:16.034 07:25:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:29:16.034 07:25:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:29:16.034 07:25:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:16.034 ************************************ 00:29:16.034 START TEST nvmf_lvs_grow 00:29:16.034 ************************************ 00:29:16.034 07:25:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:29:16.034 * Looking for test storage... 
00:29:16.034 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:16.034 07:25:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:29:16.034 07:25:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lcov --version 00:29:16.034 07:25:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:29:16.034 07:25:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:29:16.034 07:25:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:16.034 07:25:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:16.034 07:25:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:16.034 07:25:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:29:16.034 07:25:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:29:16.034 07:25:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:29:16.034 07:25:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:29:16.034 07:25:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:29:16.034 07:25:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:29:16.034 07:25:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:29:16.034 07:25:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:16.034 07:25:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:29:16.034 07:25:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:29:16.034 07:25:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:16.034 07:25:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:16.034 07:25:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:29:16.034 07:25:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:29:16.034 07:25:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:16.034 07:25:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:29:16.034 07:25:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:29:16.034 07:25:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:29:16.034 07:25:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:29:16.034 07:25:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:16.034 07:25:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:29:16.034 07:25:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:29:16.034 07:25:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:16.034 07:25:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:16.034 07:25:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:29:16.034 07:25:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:16.034 07:25:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:29:16.034 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:16.034 --rc genhtml_branch_coverage=1 00:29:16.034 --rc genhtml_function_coverage=1 00:29:16.034 --rc genhtml_legend=1 00:29:16.034 --rc geninfo_all_blocks=1 00:29:16.034 --rc geninfo_unexecuted_blocks=1 00:29:16.034 00:29:16.034 ' 00:29:16.034 07:25:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:29:16.034 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:16.034 --rc genhtml_branch_coverage=1 00:29:16.034 --rc genhtml_function_coverage=1 00:29:16.034 --rc genhtml_legend=1 00:29:16.034 --rc geninfo_all_blocks=1 00:29:16.034 --rc geninfo_unexecuted_blocks=1 00:29:16.034 00:29:16.034 ' 00:29:16.034 07:25:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:29:16.034 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:16.034 --rc genhtml_branch_coverage=1 00:29:16.035 --rc genhtml_function_coverage=1 00:29:16.035 --rc genhtml_legend=1 00:29:16.035 --rc geninfo_all_blocks=1 00:29:16.035 --rc geninfo_unexecuted_blocks=1 00:29:16.035 00:29:16.035 ' 00:29:16.035 07:25:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:29:16.035 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:16.035 --rc genhtml_branch_coverage=1 00:29:16.035 --rc genhtml_function_coverage=1 00:29:16.035 --rc genhtml_legend=1 00:29:16.035 --rc geninfo_all_blocks=1 00:29:16.035 --rc 
geninfo_unexecuted_blocks=1 00:29:16.035 00:29:16.035 ' 00:29:16.035 07:25:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:16.035 07:25:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:29:16.035 07:25:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:16.035 07:25:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:16.035 07:25:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:16.035 07:25:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:16.035 07:25:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:16.035 07:25:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:16.035 07:25:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:16.035 07:25:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:16.035 07:25:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:16.035 07:25:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:16.035 07:25:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:29:16.035 07:25:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:29:16.035 07:25:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:16.035 07:25:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:16.035 07:25:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:16.035 07:25:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:16.035 07:25:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:16.035 07:25:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:29:16.035 07:25:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:16.035 07:25:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:16.035 07:25:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:16.035 07:25:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:16.035 07:25:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:16.035 07:25:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:16.035 07:25:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:29:16.035 07:25:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:16.035 07:25:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:29:16.035 07:25:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:16.035 07:25:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:16.035 07:25:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:16.035 07:25:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:16.035 07:25:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:16.035 07:25:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:16.035 07:25:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:16.035 07:25:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:16.035 07:25:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:16.035 07:25:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:16.035 07:25:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:16.035 07:25:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:29:16.035 07:25:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:29:16.035 07:25:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:16.035 07:25:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:16.035 07:25:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:16.035 07:25:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:16.035 07:25:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:16.035 07:25:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:16.035 07:25:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:16.035 07:25:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:16.035 07:25:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:16.035 07:25:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:16.035 07:25:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:29:16.035 07:25:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:22.669 
07:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:22.669 07:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:29:22.669 07:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:22.669 07:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:22.669 07:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:22.669 07:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:22.669 07:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:22.669 07:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:29:22.669 07:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:22.669 07:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:29:22.669 07:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:29:22.669 07:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:29:22.669 07:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:29:22.669 07:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:29:22.669 07:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:29:22.669 07:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:22.669 07:25:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:22.669 07:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:22.669 07:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:22.669 07:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:22.669 07:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:22.669 07:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:22.669 07:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:22.669 07:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:22.669 07:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:22.669 07:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:22.669 07:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:22.669 07:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:22.669 07:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:22.669 07:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:22.669 07:25:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:22.669 07:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:22.669 07:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:22.669 07:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:22.669 07:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:29:22.669 Found 0000:86:00.0 (0x8086 - 0x159b) 00:29:22.669 07:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:22.669 07:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:22.669 07:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:22.669 07:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:22.669 07:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:22.669 07:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:22.669 07:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:29:22.669 Found 0000:86:00.1 (0x8086 - 0x159b) 00:29:22.669 07:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:22.669 07:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:22.669 07:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:29:22.669 07:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:22.669 07:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:22.669 07:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:22.669 07:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:22.669 07:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:22.669 07:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:22.669 07:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:22.669 07:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:22.669 07:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:22.669 07:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:22.669 07:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:22.669 07:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:22.669 07:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:29:22.669 Found net devices under 0000:86:00.0: cvl_0_0 00:29:22.669 07:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:22.669 07:25:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:22.669 07:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:22.669 07:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:22.669 07:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:22.669 07:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:22.669 07:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:22.669 07:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:22.670 07:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:29:22.670 Found net devices under 0000:86:00.1: cvl_0_1 00:29:22.670 07:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:22.670 07:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:22.670 07:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:29:22.670 07:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:22.670 07:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:22.670 07:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:22.670 07:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:22.670 
07:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:22.670 07:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:22.670 07:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:22.670 07:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:22.670 07:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:22.670 07:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:22.670 07:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:22.670 07:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:22.670 07:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:22.670 07:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:22.670 07:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:22.670 07:25:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:22.670 07:25:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:22.670 07:25:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:22.670 07:25:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 
10.0.0.1/24 dev cvl_0_1 00:29:22.670 07:25:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:22.670 07:25:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:22.670 07:25:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:22.670 07:25:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:22.670 07:25:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:22.670 07:25:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:22.670 07:25:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:22.670 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:22.670 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.389 ms 00:29:22.670 00:29:22.670 --- 10.0.0.2 ping statistics --- 00:29:22.670 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:22.670 rtt min/avg/max/mdev = 0.389/0.389/0.389/0.000 ms 00:29:22.670 07:25:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:22.670 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:22.670 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.217 ms 00:29:22.670 00:29:22.670 --- 10.0.0.1 ping statistics --- 00:29:22.670 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:22.670 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:29:22.670 07:25:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:22.670 07:25:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:29:22.670 07:25:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:22.670 07:25:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:22.670 07:25:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:22.670 07:25:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:22.670 07:25:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:22.670 07:25:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:22.670 07:25:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:22.670 07:25:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:29:22.670 07:25:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:22.670 07:25:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:22.670 07:25:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:22.670 07:25:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=1385252 00:29:22.670 07:25:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:29:22.670 07:25:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 1385252 00:29:22.670 07:25:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # '[' -z 1385252 ']' 00:29:22.670 07:25:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:22.670 07:25:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # local max_retries=100 00:29:22.670 07:25:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:22.670 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:22.670 07:25:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # xtrace_disable 00:29:22.670 07:25:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:22.670 [2024-11-20 07:25:26.311190] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:22.670 [2024-11-20 07:25:26.312140] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 
00:29:22.670 [2024-11-20 07:25:26.312180] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:22.670 [2024-11-20 07:25:26.411408] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:22.670 [2024-11-20 07:25:26.453763] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:22.670 [2024-11-20 07:25:26.453799] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:22.670 [2024-11-20 07:25:26.453806] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:22.670 [2024-11-20 07:25:26.453812] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:22.670 [2024-11-20 07:25:26.453818] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:22.671 [2024-11-20 07:25:26.454375] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:22.671 [2024-11-20 07:25:26.522844] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:22.671 [2024-11-20 07:25:26.523068] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:29:22.671 07:25:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:29:22.671 07:25:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@866 -- # return 0 00:29:22.671 07:25:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:22.671 07:25:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:22.671 07:25:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:22.671 07:25:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:22.671 07:25:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:29:22.930 [2024-11-20 07:25:27.355044] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:22.930 07:25:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:29:22.930 07:25:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:29:22.930 07:25:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1109 -- # xtrace_disable 00:29:22.930 07:25:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:22.930 ************************************ 00:29:22.930 START TEST lvs_grow_clean 00:29:22.930 ************************************ 00:29:22.930 07:25:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1127 -- # lvs_grow 00:29:22.930 07:25:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:29:22.930 07:25:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:29:22.930 07:25:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:29:22.930 07:25:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:29:22.930 07:25:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:29:22.930 07:25:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:29:22.930 07:25:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:22.930 07:25:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:22.930 07:25:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:29:23.189 07:25:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:29:23.189 07:25:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:29:23.448 07:25:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=4312fd03-14ea-448d-b5a7-e94a0c2a1a32 00:29:23.448 07:25:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4312fd03-14ea-448d-b5a7-e94a0c2a1a32 00:29:23.448 07:25:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:29:23.707 07:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:29:23.707 07:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:29:23.707 07:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 4312fd03-14ea-448d-b5a7-e94a0c2a1a32 lvol 150 00:29:23.966 07:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=5ad6691d-8605-4662-b5a2-9c7d96f2d354 00:29:23.966 07:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:23.966 07:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:29:23.966 [2024-11-20 07:25:28.438741] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:29:23.966 [2024-11-20 07:25:28.438858] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:29:23.966 true 00:29:23.966 07:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4312fd03-14ea-448d-b5a7-e94a0c2a1a32 00:29:23.966 07:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:29:24.225 07:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:29:24.225 07:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:29:24.485 07:25:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 5ad6691d-8605-4662-b5a2-9c7d96f2d354 00:29:24.744 07:25:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:24.744 [2024-11-20 07:25:29.215209] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:24.744 07:25:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:25.004 07:25:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:29:25.004 07:25:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1385907 00:29:25.004 07:25:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:25.004 07:25:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1385907 /var/tmp/bdevperf.sock 00:29:25.004 07:25:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # '[' -z 1385907 ']' 00:29:25.004 07:25:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:25.004 07:25:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:29:25.004 07:25:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:25.004 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:29:25.004 07:25:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:29:25.004 07:25:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:29:25.004 [2024-11-20 07:25:29.466431] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 00:29:25.004 [2024-11-20 07:25:29.466482] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1385907 ] 00:29:25.004 [2024-11-20 07:25:29.542521] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:25.263 [2024-11-20 07:25:29.586187] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:25.263 07:25:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:29:25.263 07:25:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@866 -- # return 0 00:29:25.263 07:25:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:29:25.523 Nvme0n1 00:29:25.523 07:25:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:29:25.782 [ 00:29:25.782 { 00:29:25.782 "name": "Nvme0n1", 00:29:25.782 "aliases": [ 00:29:25.782 "5ad6691d-8605-4662-b5a2-9c7d96f2d354" 00:29:25.782 ], 00:29:25.782 "product_name": "NVMe disk", 00:29:25.782 
"block_size": 4096, 00:29:25.782 "num_blocks": 38912, 00:29:25.782 "uuid": "5ad6691d-8605-4662-b5a2-9c7d96f2d354", 00:29:25.782 "numa_id": 1, 00:29:25.782 "assigned_rate_limits": { 00:29:25.782 "rw_ios_per_sec": 0, 00:29:25.782 "rw_mbytes_per_sec": 0, 00:29:25.782 "r_mbytes_per_sec": 0, 00:29:25.782 "w_mbytes_per_sec": 0 00:29:25.782 }, 00:29:25.782 "claimed": false, 00:29:25.782 "zoned": false, 00:29:25.782 "supported_io_types": { 00:29:25.782 "read": true, 00:29:25.782 "write": true, 00:29:25.782 "unmap": true, 00:29:25.782 "flush": true, 00:29:25.782 "reset": true, 00:29:25.782 "nvme_admin": true, 00:29:25.782 "nvme_io": true, 00:29:25.782 "nvme_io_md": false, 00:29:25.782 "write_zeroes": true, 00:29:25.782 "zcopy": false, 00:29:25.782 "get_zone_info": false, 00:29:25.782 "zone_management": false, 00:29:25.782 "zone_append": false, 00:29:25.782 "compare": true, 00:29:25.782 "compare_and_write": true, 00:29:25.783 "abort": true, 00:29:25.783 "seek_hole": false, 00:29:25.783 "seek_data": false, 00:29:25.783 "copy": true, 00:29:25.783 "nvme_iov_md": false 00:29:25.783 }, 00:29:25.783 "memory_domains": [ 00:29:25.783 { 00:29:25.783 "dma_device_id": "system", 00:29:25.783 "dma_device_type": 1 00:29:25.783 } 00:29:25.783 ], 00:29:25.783 "driver_specific": { 00:29:25.783 "nvme": [ 00:29:25.783 { 00:29:25.783 "trid": { 00:29:25.783 "trtype": "TCP", 00:29:25.783 "adrfam": "IPv4", 00:29:25.783 "traddr": "10.0.0.2", 00:29:25.783 "trsvcid": "4420", 00:29:25.783 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:29:25.783 }, 00:29:25.783 "ctrlr_data": { 00:29:25.783 "cntlid": 1, 00:29:25.783 "vendor_id": "0x8086", 00:29:25.783 "model_number": "SPDK bdev Controller", 00:29:25.783 "serial_number": "SPDK0", 00:29:25.783 "firmware_revision": "25.01", 00:29:25.783 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:25.783 "oacs": { 00:29:25.783 "security": 0, 00:29:25.783 "format": 0, 00:29:25.783 "firmware": 0, 00:29:25.783 "ns_manage": 0 00:29:25.783 }, 00:29:25.783 "multi_ctrlr": true, 
00:29:25.783 "ana_reporting": false 00:29:25.783 }, 00:29:25.783 "vs": { 00:29:25.783 "nvme_version": "1.3" 00:29:25.783 }, 00:29:25.783 "ns_data": { 00:29:25.783 "id": 1, 00:29:25.783 "can_share": true 00:29:25.783 } 00:29:25.783 } 00:29:25.783 ], 00:29:25.783 "mp_policy": "active_passive" 00:29:25.783 } 00:29:25.783 } 00:29:25.783 ] 00:29:25.783 07:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1385929 00:29:25.783 07:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:29:25.783 07:25:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:29:25.783 Running I/O for 10 seconds... 00:29:27.159 Latency(us) 00:29:27.159 [2024-11-20T06:25:31.715Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:27.159 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:27.159 Nvme0n1 : 1.00 21751.00 84.96 0.00 0.00 0.00 0.00 0.00 00:29:27.159 [2024-11-20T06:25:31.715Z] =================================================================================================================== 00:29:27.159 [2024-11-20T06:25:31.715Z] Total : 21751.00 84.96 0.00 0.00 0.00 0.00 0.00 00:29:27.159 00:29:27.727 07:25:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 4312fd03-14ea-448d-b5a7-e94a0c2a1a32 00:29:27.727 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:27.727 Nvme0n1 : 2.00 22242.00 86.88 0.00 0.00 0.00 0.00 0.00 00:29:27.727 [2024-11-20T06:25:32.283Z] 
=================================================================================================================== 00:29:27.727 [2024-11-20T06:25:32.283Z] Total : 22242.00 86.88 0.00 0.00 0.00 0.00 0.00 00:29:27.727 00:29:27.986 true 00:29:27.986 07:25:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4312fd03-14ea-448d-b5a7-e94a0c2a1a32 00:29:27.986 07:25:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:29:28.245 07:25:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:29:28.245 07:25:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:29:28.245 07:25:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 1385929 00:29:28.811 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:28.811 Nvme0n1 : 3.00 22363.33 87.36 0.00 0.00 0.00 0.00 0.00 00:29:28.811 [2024-11-20T06:25:33.367Z] =================================================================================================================== 00:29:28.811 [2024-11-20T06:25:33.367Z] Total : 22363.33 87.36 0.00 0.00 0.00 0.00 0.00 00:29:28.811 00:29:29.747 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:29.747 Nvme0n1 : 4.00 22487.50 87.84 0.00 0.00 0.00 0.00 0.00 00:29:29.747 [2024-11-20T06:25:34.303Z] =================================================================================================================== 00:29:29.747 [2024-11-20T06:25:34.303Z] Total : 22487.50 87.84 0.00 0.00 0.00 0.00 0.00 00:29:29.747 00:29:31.124 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO 
size: 4096) 00:29:31.124 Nvme0n1 : 5.00 22574.80 88.18 0.00 0.00 0.00 0.00 0.00 00:29:31.124 [2024-11-20T06:25:35.680Z] =================================================================================================================== 00:29:31.124 [2024-11-20T06:25:35.680Z] Total : 22574.80 88.18 0.00 0.00 0.00 0.00 0.00 00:29:31.124 00:29:32.059 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:32.059 Nvme0n1 : 6.00 22638.50 88.43 0.00 0.00 0.00 0.00 0.00 00:29:32.059 [2024-11-20T06:25:36.615Z] =================================================================================================================== 00:29:32.059 [2024-11-20T06:25:36.615Z] Total : 22638.50 88.43 0.00 0.00 0.00 0.00 0.00 00:29:32.059 00:29:32.996 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:32.996 Nvme0n1 : 7.00 22688.29 88.63 0.00 0.00 0.00 0.00 0.00 00:29:32.996 [2024-11-20T06:25:37.552Z] =================================================================================================================== 00:29:32.996 [2024-11-20T06:25:37.552Z] Total : 22688.29 88.63 0.00 0.00 0.00 0.00 0.00 00:29:32.996 00:29:33.932 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:33.932 Nvme0n1 : 8.00 22725.62 88.77 0.00 0.00 0.00 0.00 0.00 00:29:33.932 [2024-11-20T06:25:38.488Z] =================================================================================================================== 00:29:33.932 [2024-11-20T06:25:38.488Z] Total : 22725.62 88.77 0.00 0.00 0.00 0.00 0.00 00:29:33.932 00:29:34.868 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:34.868 Nvme0n1 : 9.00 22754.67 88.89 0.00 0.00 0.00 0.00 0.00 00:29:34.868 [2024-11-20T06:25:39.424Z] =================================================================================================================== 00:29:34.868 [2024-11-20T06:25:39.424Z] Total : 22754.67 88.89 0.00 0.00 0.00 0.00 0.00 00:29:34.868 
00:29:35.805 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:35.805 Nvme0n1 : 10.00 22752.50 88.88 0.00 0.00 0.00 0.00 0.00 00:29:35.805 [2024-11-20T06:25:40.361Z] =================================================================================================================== 00:29:35.805 [2024-11-20T06:25:40.361Z] Total : 22752.50 88.88 0.00 0.00 0.00 0.00 0.00 00:29:35.805 00:29:35.805 00:29:35.805 Latency(us) 00:29:35.805 [2024-11-20T06:25:40.361Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:35.806 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:35.806 Nvme0n1 : 10.01 22750.94 88.87 0.00 0.00 5623.12 3291.05 29177.77 00:29:35.806 [2024-11-20T06:25:40.362Z] =================================================================================================================== 00:29:35.806 [2024-11-20T06:25:40.362Z] Total : 22750.94 88.87 0.00 0.00 5623.12 3291.05 29177.77 00:29:35.806 { 00:29:35.806 "results": [ 00:29:35.806 { 00:29:35.806 "job": "Nvme0n1", 00:29:35.806 "core_mask": "0x2", 00:29:35.806 "workload": "randwrite", 00:29:35.806 "status": "finished", 00:29:35.806 "queue_depth": 128, 00:29:35.806 "io_size": 4096, 00:29:35.806 "runtime": 10.00631, 00:29:35.806 "iops": 22750.944154238674, 00:29:35.806 "mibps": 88.87087560249482, 00:29:35.806 "io_failed": 0, 00:29:35.806 "io_timeout": 0, 00:29:35.806 "avg_latency_us": 5623.12309824697, 00:29:35.806 "min_latency_us": 3291.046956521739, 00:29:35.806 "max_latency_us": 29177.76695652174 00:29:35.806 } 00:29:35.806 ], 00:29:35.806 "core_count": 1 00:29:35.806 } 00:29:35.806 07:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1385907 00:29:35.806 07:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # '[' -z 1385907 ']' 00:29:35.806 07:25:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # kill -0 1385907 00:29:35.806 07:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@957 -- # uname 00:29:35.806 07:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:29:35.806 07:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1385907 00:29:36.065 07:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:29:36.065 07:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:29:36.065 07:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1385907' 00:29:36.065 killing process with pid 1385907 00:29:36.065 07:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@971 -- # kill 1385907 00:29:36.065 Received shutdown signal, test time was about 10.000000 seconds 00:29:36.065 00:29:36.065 Latency(us) 00:29:36.065 [2024-11-20T06:25:40.621Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:36.065 [2024-11-20T06:25:40.621Z] =================================================================================================================== 00:29:36.065 [2024-11-20T06:25:40.621Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:36.065 07:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@976 -- # wait 1385907 00:29:36.065 07:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:36.324 07:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:36.583 07:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4312fd03-14ea-448d-b5a7-e94a0c2a1a32 00:29:36.583 07:25:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:29:36.841 07:25:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:29:36.841 07:25:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:29:36.841 07:25:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:29:36.841 [2024-11-20 07:25:41.310824] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:29:36.841 07:25:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4312fd03-14ea-448d-b5a7-e94a0c2a1a32 00:29:36.841 07:25:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:29:36.841 07:25:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4312fd03-14ea-448d-b5a7-e94a0c2a1a32 00:29:36.841 07:25:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:36.841 07:25:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:36.841 07:25:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:36.841 07:25:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:36.841 07:25:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:36.841 07:25:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:36.841 07:25:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:36.841 07:25:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:29:36.841 07:25:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4312fd03-14ea-448d-b5a7-e94a0c2a1a32 00:29:37.100 request: 00:29:37.100 { 00:29:37.100 "uuid": "4312fd03-14ea-448d-b5a7-e94a0c2a1a32", 00:29:37.100 "method": 
"bdev_lvol_get_lvstores", 00:29:37.100 "req_id": 1 00:29:37.100 } 00:29:37.100 Got JSON-RPC error response 00:29:37.100 response: 00:29:37.100 { 00:29:37.100 "code": -19, 00:29:37.100 "message": "No such device" 00:29:37.100 } 00:29:37.100 07:25:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:29:37.100 07:25:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:29:37.100 07:25:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:29:37.100 07:25:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:29:37.100 07:25:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:29:37.360 aio_bdev 00:29:37.360 07:25:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 5ad6691d-8605-4662-b5a2-9c7d96f2d354 00:29:37.360 07:25:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local bdev_name=5ad6691d-8605-4662-b5a2-9c7d96f2d354 00:29:37.360 07:25:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:29:37.360 07:25:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local i 00:29:37.360 07:25:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:29:37.360 07:25:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@904 -- # bdev_timeout=2000 00:29:37.360 07:25:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:29:37.619 07:25:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 5ad6691d-8605-4662-b5a2-9c7d96f2d354 -t 2000 00:29:37.619 [ 00:29:37.619 { 00:29:37.619 "name": "5ad6691d-8605-4662-b5a2-9c7d96f2d354", 00:29:37.619 "aliases": [ 00:29:37.619 "lvs/lvol" 00:29:37.619 ], 00:29:37.619 "product_name": "Logical Volume", 00:29:37.619 "block_size": 4096, 00:29:37.619 "num_blocks": 38912, 00:29:37.619 "uuid": "5ad6691d-8605-4662-b5a2-9c7d96f2d354", 00:29:37.619 "assigned_rate_limits": { 00:29:37.619 "rw_ios_per_sec": 0, 00:29:37.619 "rw_mbytes_per_sec": 0, 00:29:37.619 "r_mbytes_per_sec": 0, 00:29:37.619 "w_mbytes_per_sec": 0 00:29:37.619 }, 00:29:37.619 "claimed": false, 00:29:37.619 "zoned": false, 00:29:37.619 "supported_io_types": { 00:29:37.619 "read": true, 00:29:37.619 "write": true, 00:29:37.619 "unmap": true, 00:29:37.619 "flush": false, 00:29:37.619 "reset": true, 00:29:37.619 "nvme_admin": false, 00:29:37.619 "nvme_io": false, 00:29:37.619 "nvme_io_md": false, 00:29:37.619 "write_zeroes": true, 00:29:37.619 "zcopy": false, 00:29:37.619 "get_zone_info": false, 00:29:37.619 "zone_management": false, 00:29:37.619 "zone_append": false, 00:29:37.619 "compare": false, 00:29:37.619 "compare_and_write": false, 00:29:37.619 "abort": false, 00:29:37.619 "seek_hole": true, 00:29:37.619 "seek_data": true, 00:29:37.619 "copy": false, 00:29:37.619 "nvme_iov_md": false 00:29:37.619 }, 00:29:37.619 "driver_specific": { 00:29:37.619 "lvol": { 00:29:37.619 "lvol_store_uuid": "4312fd03-14ea-448d-b5a7-e94a0c2a1a32", 00:29:37.619 "base_bdev": "aio_bdev", 00:29:37.619 
"thin_provision": false, 00:29:37.619 "num_allocated_clusters": 38, 00:29:37.619 "snapshot": false, 00:29:37.619 "clone": false, 00:29:37.619 "esnap_clone": false 00:29:37.619 } 00:29:37.619 } 00:29:37.619 } 00:29:37.619 ] 00:29:37.619 07:25:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@909 -- # return 0 00:29:37.619 07:25:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4312fd03-14ea-448d-b5a7-e94a0c2a1a32 00:29:37.619 07:25:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:29:37.877 07:25:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:29:37.877 07:25:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4312fd03-14ea-448d-b5a7-e94a0c2a1a32 00:29:37.877 07:25:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:29:38.136 07:25:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:29:38.136 07:25:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 5ad6691d-8605-4662-b5a2-9c7d96f2d354 00:29:38.395 07:25:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 4312fd03-14ea-448d-b5a7-e94a0c2a1a32 
00:29:38.655 07:25:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:29:38.655 07:25:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:38.655 00:29:38.655 real 0m15.763s 00:29:38.655 user 0m15.222s 00:29:38.655 sys 0m1.536s 00:29:38.655 07:25:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1128 -- # xtrace_disable 00:29:38.655 07:25:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:29:38.655 ************************************ 00:29:38.655 END TEST lvs_grow_clean 00:29:38.655 ************************************ 00:29:38.914 07:25:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:29:38.914 07:25:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:29:38.914 07:25:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1109 -- # xtrace_disable 00:29:38.914 07:25:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:38.914 ************************************ 00:29:38.914 START TEST lvs_grow_dirty 00:29:38.914 ************************************ 00:29:38.914 07:25:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1127 -- # lvs_grow dirty 00:29:38.914 07:25:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:29:38.914 07:25:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:29:38.914 07:25:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:29:38.914 07:25:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:29:38.914 07:25:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:29:38.914 07:25:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:29:38.914 07:25:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:38.914 07:25:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:38.915 07:25:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:29:39.174 07:25:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:29:39.174 07:25:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:29:39.174 07:25:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=8e75245a-26e7-4b90-a1d9-fa23aba45ec6 00:29:39.174 07:25:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:29:39.174 07:25:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8e75245a-26e7-4b90-a1d9-fa23aba45ec6 00:29:39.433 07:25:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:29:39.433 07:25:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:29:39.433 07:25:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 8e75245a-26e7-4b90-a1d9-fa23aba45ec6 lvol 150 00:29:39.692 07:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=326b3688-68a1-4810-a88b-ff3bb89ea074 00:29:39.692 07:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:39.692 07:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:29:39.951 [2024-11-20 07:25:44.274756] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:29:39.951 [2024-11-20 
07:25:44.274884] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:29:39.951 true 00:29:39.951 07:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8e75245a-26e7-4b90-a1d9-fa23aba45ec6 00:29:39.951 07:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:29:39.951 07:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:29:39.951 07:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:29:40.210 07:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 326b3688-68a1-4810-a88b-ff3bb89ea074 00:29:40.468 07:25:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:40.727 [2024-11-20 07:25:45.031190] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:40.727 07:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:40.727 07:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1388475 00:29:40.727 07:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:40.727 07:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:29:40.727 07:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1388475 /var/tmp/bdevperf.sock 00:29:40.727 07:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # '[' -z 1388475 ']' 00:29:40.727 07:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:40.727 07:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # local max_retries=100 00:29:40.727 07:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:40.727 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:29:40.727 07:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # xtrace_disable 00:29:40.727 07:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:29:40.986 [2024-11-20 07:25:45.282669] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 
00:29:40.986 [2024-11-20 07:25:45.282718] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1388475 ] 00:29:40.986 [2024-11-20 07:25:45.356487] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:40.986 [2024-11-20 07:25:45.399700] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:40.986 07:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:29:40.986 07:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@866 -- # return 0 00:29:40.986 07:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:29:41.245 Nvme0n1 00:29:41.245 07:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:29:41.504 [ 00:29:41.504 { 00:29:41.504 "name": "Nvme0n1", 00:29:41.504 "aliases": [ 00:29:41.504 "326b3688-68a1-4810-a88b-ff3bb89ea074" 00:29:41.504 ], 00:29:41.504 "product_name": "NVMe disk", 00:29:41.504 "block_size": 4096, 00:29:41.504 "num_blocks": 38912, 00:29:41.504 "uuid": "326b3688-68a1-4810-a88b-ff3bb89ea074", 00:29:41.504 "numa_id": 1, 00:29:41.504 "assigned_rate_limits": { 00:29:41.504 "rw_ios_per_sec": 0, 00:29:41.504 "rw_mbytes_per_sec": 0, 00:29:41.504 "r_mbytes_per_sec": 0, 00:29:41.504 "w_mbytes_per_sec": 0 00:29:41.504 }, 00:29:41.504 "claimed": false, 00:29:41.504 "zoned": false, 
00:29:41.504 "supported_io_types": { 00:29:41.504 "read": true, 00:29:41.504 "write": true, 00:29:41.504 "unmap": true, 00:29:41.504 "flush": true, 00:29:41.504 "reset": true, 00:29:41.504 "nvme_admin": true, 00:29:41.504 "nvme_io": true, 00:29:41.504 "nvme_io_md": false, 00:29:41.504 "write_zeroes": true, 00:29:41.504 "zcopy": false, 00:29:41.504 "get_zone_info": false, 00:29:41.504 "zone_management": false, 00:29:41.504 "zone_append": false, 00:29:41.504 "compare": true, 00:29:41.504 "compare_and_write": true, 00:29:41.504 "abort": true, 00:29:41.504 "seek_hole": false, 00:29:41.504 "seek_data": false, 00:29:41.504 "copy": true, 00:29:41.504 "nvme_iov_md": false 00:29:41.504 }, 00:29:41.504 "memory_domains": [ 00:29:41.504 { 00:29:41.504 "dma_device_id": "system", 00:29:41.504 "dma_device_type": 1 00:29:41.504 } 00:29:41.504 ], 00:29:41.504 "driver_specific": { 00:29:41.504 "nvme": [ 00:29:41.504 { 00:29:41.504 "trid": { 00:29:41.504 "trtype": "TCP", 00:29:41.504 "adrfam": "IPv4", 00:29:41.504 "traddr": "10.0.0.2", 00:29:41.504 "trsvcid": "4420", 00:29:41.504 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:29:41.504 }, 00:29:41.504 "ctrlr_data": { 00:29:41.504 "cntlid": 1, 00:29:41.504 "vendor_id": "0x8086", 00:29:41.504 "model_number": "SPDK bdev Controller", 00:29:41.504 "serial_number": "SPDK0", 00:29:41.504 "firmware_revision": "25.01", 00:29:41.504 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:41.504 "oacs": { 00:29:41.504 "security": 0, 00:29:41.504 "format": 0, 00:29:41.504 "firmware": 0, 00:29:41.504 "ns_manage": 0 00:29:41.504 }, 00:29:41.504 "multi_ctrlr": true, 00:29:41.504 "ana_reporting": false 00:29:41.504 }, 00:29:41.504 "vs": { 00:29:41.504 "nvme_version": "1.3" 00:29:41.504 }, 00:29:41.504 "ns_data": { 00:29:41.504 "id": 1, 00:29:41.504 "can_share": true 00:29:41.504 } 00:29:41.504 } 00:29:41.504 ], 00:29:41.504 "mp_policy": "active_passive" 00:29:41.504 } 00:29:41.504 } 00:29:41.504 ] 00:29:41.504 07:25:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1388517 00:29:41.504 07:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:29:41.504 07:25:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:29:41.763 Running I/O for 10 seconds... 00:29:42.698 Latency(us) 00:29:42.698 [2024-11-20T06:25:47.254Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:42.698 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:42.698 Nvme0n1 : 1.00 22098.00 86.32 0.00 0.00 0.00 0.00 0.00 00:29:42.698 [2024-11-20T06:25:47.254Z] =================================================================================================================== 00:29:42.698 [2024-11-20T06:25:47.254Z] Total : 22098.00 86.32 0.00 0.00 0.00 0.00 0.00 00:29:42.698 00:29:43.635 07:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 8e75245a-26e7-4b90-a1d9-fa23aba45ec6 00:29:43.635 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:43.635 Nvme0n1 : 2.00 22479.00 87.81 0.00 0.00 0.00 0.00 0.00 00:29:43.635 [2024-11-20T06:25:48.191Z] =================================================================================================================== 00:29:43.635 [2024-11-20T06:25:48.191Z] Total : 22479.00 87.81 0.00 0.00 0.00 0.00 0.00 00:29:43.635 00:29:43.635 true 00:29:43.894 07:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_get_lvstores -u 8e75245a-26e7-4b90-a1d9-fa23aba45ec6 00:29:43.894 07:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:29:43.894 07:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:29:43.894 07:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:29:43.894 07:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 1388517 00:29:44.836 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:44.836 Nvme0n1 : 3.00 22606.00 88.30 0.00 0.00 0.00 0.00 0.00 00:29:44.836 [2024-11-20T06:25:49.392Z] =================================================================================================================== 00:29:44.836 [2024-11-20T06:25:49.392Z] Total : 22606.00 88.30 0.00 0.00 0.00 0.00 0.00 00:29:44.836 00:29:45.772 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:45.772 Nvme0n1 : 4.00 22669.50 88.55 0.00 0.00 0.00 0.00 0.00 00:29:45.772 [2024-11-20T06:25:50.328Z] =================================================================================================================== 00:29:45.772 [2024-11-20T06:25:50.328Z] Total : 22669.50 88.55 0.00 0.00 0.00 0.00 0.00 00:29:45.772 00:29:46.707 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:46.707 Nvme0n1 : 5.00 22656.80 88.50 0.00 0.00 0.00 0.00 0.00 00:29:46.707 [2024-11-20T06:25:51.263Z] =================================================================================================================== 00:29:46.707 [2024-11-20T06:25:51.263Z] Total : 22656.80 88.50 0.00 0.00 0.00 0.00 0.00 00:29:46.707 00:29:47.643 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:29:47.643 Nvme0n1 : 6.00 22711.83 88.72 0.00 0.00 0.00 0.00 0.00 00:29:47.643 [2024-11-20T06:25:52.199Z] =================================================================================================================== 00:29:47.643 [2024-11-20T06:25:52.199Z] Total : 22711.83 88.72 0.00 0.00 0.00 0.00 0.00 00:29:47.643 00:29:48.579 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:48.579 Nvme0n1 : 7.00 22751.14 88.87 0.00 0.00 0.00 0.00 0.00 00:29:48.579 [2024-11-20T06:25:53.135Z] =================================================================================================================== 00:29:48.579 [2024-11-20T06:25:53.135Z] Total : 22751.14 88.87 0.00 0.00 0.00 0.00 0.00 00:29:48.579 00:29:49.956 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:49.956 Nvme0n1 : 8.00 22780.62 88.99 0.00 0.00 0.00 0.00 0.00 00:29:49.956 [2024-11-20T06:25:54.512Z] =================================================================================================================== 00:29:49.956 [2024-11-20T06:25:54.512Z] Total : 22780.62 88.99 0.00 0.00 0.00 0.00 0.00 00:29:49.956 00:29:50.892 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:50.892 Nvme0n1 : 9.00 22817.67 89.13 0.00 0.00 0.00 0.00 0.00 00:29:50.892 [2024-11-20T06:25:55.448Z] =================================================================================================================== 00:29:50.892 [2024-11-20T06:25:55.448Z] Total : 22817.67 89.13 0.00 0.00 0.00 0.00 0.00 00:29:50.892 00:29:51.827 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:51.827 Nvme0n1 : 10.00 22847.30 89.25 0.00 0.00 0.00 0.00 0.00 00:29:51.827 [2024-11-20T06:25:56.383Z] =================================================================================================================== 00:29:51.827 [2024-11-20T06:25:56.383Z] Total : 22847.30 89.25 0.00 0.00 0.00 0.00 0.00 00:29:51.827 00:29:51.827 
00:29:51.827 Latency(us) 00:29:51.827 [2024-11-20T06:25:56.383Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:51.827 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:51.827 Nvme0n1 : 10.01 22847.65 89.25 0.00 0.00 5599.32 4900.95 26100.42 00:29:51.827 [2024-11-20T06:25:56.383Z] =================================================================================================================== 00:29:51.827 [2024-11-20T06:25:56.383Z] Total : 22847.65 89.25 0.00 0.00 5599.32 4900.95 26100.42 00:29:51.827 { 00:29:51.827 "results": [ 00:29:51.827 { 00:29:51.827 "job": "Nvme0n1", 00:29:51.827 "core_mask": "0x2", 00:29:51.827 "workload": "randwrite", 00:29:51.827 "status": "finished", 00:29:51.827 "queue_depth": 128, 00:29:51.827 "io_size": 4096, 00:29:51.827 "runtime": 10.005449, 00:29:51.827 "iops": 22847.65031534317, 00:29:51.827 "mibps": 89.24863404430926, 00:29:51.827 "io_failed": 0, 00:29:51.827 "io_timeout": 0, 00:29:51.827 "avg_latency_us": 5599.321525308098, 00:29:51.827 "min_latency_us": 4900.953043478261, 00:29:51.827 "max_latency_us": 26100.424347826087 00:29:51.827 } 00:29:51.827 ], 00:29:51.827 "core_count": 1 00:29:51.827 } 00:29:51.827 07:25:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1388475 00:29:51.827 07:25:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # '[' -z 1388475 ']' 00:29:51.827 07:25:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # kill -0 1388475 00:29:51.827 07:25:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@957 -- # uname 00:29:51.827 07:25:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:29:51.827 07:25:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1388475 00:29:51.827 07:25:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:29:51.827 07:25:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:29:51.827 07:25:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1388475' 00:29:51.827 killing process with pid 1388475 00:29:51.827 07:25:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@971 -- # kill 1388475 00:29:51.827 Received shutdown signal, test time was about 10.000000 seconds 00:29:51.827 00:29:51.827 Latency(us) 00:29:51.827 [2024-11-20T06:25:56.383Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:51.827 [2024-11-20T06:25:56.383Z] =================================================================================================================== 00:29:51.827 [2024-11-20T06:25:56.383Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:51.827 07:25:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@976 -- # wait 1388475 00:29:51.827 07:25:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:52.086 07:25:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:52.344 07:25:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8e75245a-26e7-4b90-a1d9-fa23aba45ec6 00:29:52.344 07:25:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:29:52.603 07:25:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:29:52.603 07:25:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:29:52.603 07:25:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 1385252 00:29:52.603 07:25:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 1385252 00:29:52.603 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 1385252 Killed "${NVMF_APP[@]}" "$@" 00:29:52.603 07:25:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:29:52.603 07:25:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:29:52.603 07:25:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:52.603 07:25:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:52.603 07:25:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:29:52.603 07:25:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=1390341 00:29:52.603 07:25:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 1390341 00:29:52.603 07:25:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:29:52.603 07:25:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # '[' -z 1390341 ']' 00:29:52.603 07:25:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:52.603 07:25:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # local max_retries=100 00:29:52.603 07:25:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:52.603 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:52.603 07:25:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # xtrace_disable 00:29:52.603 07:25:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:29:52.603 [2024-11-20 07:25:57.021543] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:52.603 [2024-11-20 07:25:57.022489] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 
00:29:52.603 [2024-11-20 07:25:57.022525] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:52.603 [2024-11-20 07:25:57.099024] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:52.603 [2024-11-20 07:25:57.140120] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:52.603 [2024-11-20 07:25:57.140160] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:52.603 [2024-11-20 07:25:57.140167] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:52.603 [2024-11-20 07:25:57.140174] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:52.603 [2024-11-20 07:25:57.140179] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:52.603 [2024-11-20 07:25:57.140733] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:52.863 [2024-11-20 07:25:57.209462] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:52.863 [2024-11-20 07:25:57.209685] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:29:52.863 07:25:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:29:52.863 07:25:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@866 -- # return 0 00:29:52.863 07:25:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:52.863 07:25:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:52.863 07:25:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:29:52.863 07:25:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:52.863 07:25:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:29:53.122 [2024-11-20 07:25:57.450206] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:29:53.122 [2024-11-20 07:25:57.450398] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:29:53.122 [2024-11-20 07:25:57.450481] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:29:53.122 07:25:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:29:53.122 07:25:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 326b3688-68a1-4810-a88b-ff3bb89ea074 00:29:53.122 07:25:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local 
bdev_name=326b3688-68a1-4810-a88b-ff3bb89ea074 00:29:53.122 07:25:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:29:53.122 07:25:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local i 00:29:53.122 07:25:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:29:53.122 07:25:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:29:53.122 07:25:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:29:53.382 07:25:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 326b3688-68a1-4810-a88b-ff3bb89ea074 -t 2000 00:29:53.382 [ 00:29:53.382 { 00:29:53.382 "name": "326b3688-68a1-4810-a88b-ff3bb89ea074", 00:29:53.382 "aliases": [ 00:29:53.382 "lvs/lvol" 00:29:53.382 ], 00:29:53.382 "product_name": "Logical Volume", 00:29:53.382 "block_size": 4096, 00:29:53.382 "num_blocks": 38912, 00:29:53.382 "uuid": "326b3688-68a1-4810-a88b-ff3bb89ea074", 00:29:53.382 "assigned_rate_limits": { 00:29:53.382 "rw_ios_per_sec": 0, 00:29:53.382 "rw_mbytes_per_sec": 0, 00:29:53.382 "r_mbytes_per_sec": 0, 00:29:53.382 "w_mbytes_per_sec": 0 00:29:53.382 }, 00:29:53.382 "claimed": false, 00:29:53.382 "zoned": false, 00:29:53.382 "supported_io_types": { 00:29:53.383 "read": true, 00:29:53.383 "write": true, 00:29:53.383 "unmap": true, 00:29:53.383 "flush": false, 00:29:53.383 "reset": true, 00:29:53.383 "nvme_admin": false, 00:29:53.383 "nvme_io": false, 00:29:53.383 "nvme_io_md": false, 00:29:53.383 "write_zeroes": true, 
00:29:53.383 "zcopy": false, 00:29:53.383 "get_zone_info": false, 00:29:53.383 "zone_management": false, 00:29:53.383 "zone_append": false, 00:29:53.383 "compare": false, 00:29:53.383 "compare_and_write": false, 00:29:53.383 "abort": false, 00:29:53.383 "seek_hole": true, 00:29:53.383 "seek_data": true, 00:29:53.383 "copy": false, 00:29:53.383 "nvme_iov_md": false 00:29:53.383 }, 00:29:53.383 "driver_specific": { 00:29:53.383 "lvol": { 00:29:53.383 "lvol_store_uuid": "8e75245a-26e7-4b90-a1d9-fa23aba45ec6", 00:29:53.383 "base_bdev": "aio_bdev", 00:29:53.383 "thin_provision": false, 00:29:53.383 "num_allocated_clusters": 38, 00:29:53.383 "snapshot": false, 00:29:53.383 "clone": false, 00:29:53.383 "esnap_clone": false 00:29:53.383 } 00:29:53.383 } 00:29:53.383 } 00:29:53.383 ] 00:29:53.383 07:25:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@909 -- # return 0 00:29:53.383 07:25:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8e75245a-26e7-4b90-a1d9-fa23aba45ec6 00:29:53.383 07:25:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:29:53.644 07:25:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:29:53.644 07:25:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8e75245a-26e7-4b90-a1d9-fa23aba45ec6 00:29:53.644 07:25:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:29:53.904 07:25:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:29:53.904 07:25:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:29:53.905 [2024-11-20 07:25:58.425261] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:29:54.164 07:25:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8e75245a-26e7-4b90-a1d9-fa23aba45ec6 00:29:54.164 07:25:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:29:54.164 07:25:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8e75245a-26e7-4b90-a1d9-fa23aba45ec6 00:29:54.164 07:25:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:54.164 07:25:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:54.164 07:25:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:54.164 07:25:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:54.164 07:25:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:54.164 07:25:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:54.164 07:25:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:54.164 07:25:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:29:54.164 07:25:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8e75245a-26e7-4b90-a1d9-fa23aba45ec6 00:29:54.164 request: 00:29:54.164 { 00:29:54.164 "uuid": "8e75245a-26e7-4b90-a1d9-fa23aba45ec6", 00:29:54.164 "method": "bdev_lvol_get_lvstores", 00:29:54.164 "req_id": 1 00:29:54.164 } 00:29:54.164 Got JSON-RPC error response 00:29:54.164 response: 00:29:54.164 { 00:29:54.164 "code": -19, 00:29:54.164 "message": "No such device" 00:29:54.164 } 00:29:54.164 07:25:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:29:54.164 07:25:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:29:54.164 07:25:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:29:54.164 07:25:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:29:54.164 07:25:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:29:54.423 aio_bdev 00:29:54.423 07:25:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 326b3688-68a1-4810-a88b-ff3bb89ea074 00:29:54.423 07:25:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local bdev_name=326b3688-68a1-4810-a88b-ff3bb89ea074 00:29:54.423 07:25:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:29:54.423 07:25:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local i 00:29:54.423 07:25:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:29:54.423 07:25:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:29:54.423 07:25:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:29:54.682 07:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 326b3688-68a1-4810-a88b-ff3bb89ea074 -t 2000 00:29:54.941 [ 00:29:54.941 { 00:29:54.941 "name": "326b3688-68a1-4810-a88b-ff3bb89ea074", 00:29:54.941 "aliases": [ 00:29:54.941 "lvs/lvol" 00:29:54.941 ], 00:29:54.941 "product_name": "Logical Volume", 00:29:54.941 "block_size": 4096, 00:29:54.941 "num_blocks": 38912, 00:29:54.941 "uuid": "326b3688-68a1-4810-a88b-ff3bb89ea074", 00:29:54.941 "assigned_rate_limits": { 00:29:54.941 "rw_ios_per_sec": 0, 00:29:54.941 "rw_mbytes_per_sec": 0, 00:29:54.941 
"r_mbytes_per_sec": 0, 00:29:54.941 "w_mbytes_per_sec": 0 00:29:54.941 }, 00:29:54.941 "claimed": false, 00:29:54.941 "zoned": false, 00:29:54.941 "supported_io_types": { 00:29:54.941 "read": true, 00:29:54.941 "write": true, 00:29:54.941 "unmap": true, 00:29:54.941 "flush": false, 00:29:54.941 "reset": true, 00:29:54.941 "nvme_admin": false, 00:29:54.941 "nvme_io": false, 00:29:54.941 "nvme_io_md": false, 00:29:54.941 "write_zeroes": true, 00:29:54.941 "zcopy": false, 00:29:54.941 "get_zone_info": false, 00:29:54.941 "zone_management": false, 00:29:54.941 "zone_append": false, 00:29:54.941 "compare": false, 00:29:54.941 "compare_and_write": false, 00:29:54.941 "abort": false, 00:29:54.941 "seek_hole": true, 00:29:54.941 "seek_data": true, 00:29:54.941 "copy": false, 00:29:54.941 "nvme_iov_md": false 00:29:54.941 }, 00:29:54.941 "driver_specific": { 00:29:54.941 "lvol": { 00:29:54.941 "lvol_store_uuid": "8e75245a-26e7-4b90-a1d9-fa23aba45ec6", 00:29:54.941 "base_bdev": "aio_bdev", 00:29:54.941 "thin_provision": false, 00:29:54.941 "num_allocated_clusters": 38, 00:29:54.941 "snapshot": false, 00:29:54.941 "clone": false, 00:29:54.941 "esnap_clone": false 00:29:54.941 } 00:29:54.941 } 00:29:54.941 } 00:29:54.941 ] 00:29:54.941 07:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@909 -- # return 0 00:29:54.941 07:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8e75245a-26e7-4b90-a1d9-fa23aba45ec6 00:29:54.941 07:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:29:54.941 07:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:29:54.941 07:25:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:29:54.941 07:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8e75245a-26e7-4b90-a1d9-fa23aba45ec6 00:29:55.200 07:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:29:55.200 07:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 326b3688-68a1-4810-a88b-ff3bb89ea074 00:29:55.459 07:25:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 8e75245a-26e7-4b90-a1d9-fa23aba45ec6 00:29:55.718 07:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:29:55.976 07:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:55.976 00:29:55.976 real 0m17.063s 00:29:55.976 user 0m34.388s 00:29:55.976 sys 0m3.975s 00:29:55.976 07:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1128 -- # xtrace_disable 00:29:55.976 07:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:29:55.976 ************************************ 00:29:55.976 END TEST lvs_grow_dirty 00:29:55.976 ************************************ 
00:29:55.976 07:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:29:55.976 07:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # type=--id 00:29:55.976 07:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@811 -- # id=0 00:29:55.976 07:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # '[' --id = --pid ']' 00:29:55.976 07:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:29:55.976 07:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # shm_files=nvmf_trace.0 00:29:55.976 07:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # [[ -z nvmf_trace.0 ]] 00:29:55.976 07:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@822 -- # for n in $shm_files 00:29:55.976 07:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:29:55.976 nvmf_trace.0 00:29:55.976 07:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # return 0 00:29:55.976 07:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:29:55.976 07:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:55.976 07:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:29:55.976 07:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:55.976 07:26:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:29:55.976 07:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:55.976 07:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:55.976 rmmod nvme_tcp 00:29:55.976 rmmod nvme_fabrics 00:29:55.976 rmmod nvme_keyring 00:29:55.976 07:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:55.976 07:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:29:55.976 07:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:29:55.976 07:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 1390341 ']' 00:29:55.976 07:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 1390341 00:29:55.976 07:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # '[' -z 1390341 ']' 00:29:55.976 07:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # kill -0 1390341 00:29:55.977 07:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@957 -- # uname 00:29:55.977 07:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:29:55.977 07:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1390341 00:29:55.977 07:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:29:55.977 07:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:29:55.977 
07:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1390341' 00:29:55.977 killing process with pid 1390341 00:29:55.977 07:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@971 -- # kill 1390341 00:29:55.977 07:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@976 -- # wait 1390341 00:29:56.236 07:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:56.236 07:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:56.236 07:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:56.236 07:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:29:56.236 07:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:29:56.236 07:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:56.236 07:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:29:56.236 07:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:56.236 07:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:56.236 07:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:56.236 07:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:56.236 07:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:58.774 
07:26:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:58.774 00:29:58.774 real 0m42.662s 00:29:58.774 user 0m52.299s 00:29:58.774 sys 0m10.446s 00:29:58.774 07:26:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1128 -- # xtrace_disable 00:29:58.774 07:26:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:58.774 ************************************ 00:29:58.774 END TEST nvmf_lvs_grow 00:29:58.774 ************************************ 00:29:58.774 07:26:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:29:58.774 07:26:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:29:58.774 07:26:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:29:58.774 07:26:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:58.774 ************************************ 00:29:58.774 START TEST nvmf_bdev_io_wait 00:29:58.774 ************************************ 00:29:58.774 07:26:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:29:58.774 * Looking for test storage... 
00:29:58.774 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:58.774 07:26:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:29:58.774 07:26:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lcov --version 00:29:58.774 07:26:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:29:58.774 07:26:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:29:58.774 07:26:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:58.774 07:26:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:58.774 07:26:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:58.774 07:26:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:29:58.774 07:26:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:29:58.774 07:26:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:29:58.774 07:26:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:29:58.774 07:26:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:29:58.774 07:26:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:29:58.774 07:26:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:29:58.775 07:26:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # 
local lt=0 gt=0 eq=0 v 00:29:58.775 07:26:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:29:58.775 07:26:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:29:58.775 07:26:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:58.775 07:26:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:58.775 07:26:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:29:58.775 07:26:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:29:58.775 07:26:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:58.775 07:26:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:29:58.775 07:26:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:29:58.775 07:26:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:29:58.775 07:26:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:29:58.775 07:26:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:58.775 07:26:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:29:58.775 07:26:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:29:58.775 07:26:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:58.775 07:26:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:58.775 07:26:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:29:58.775 07:26:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:58.775 07:26:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:29:58.775 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:58.775 --rc genhtml_branch_coverage=1 00:29:58.775 --rc genhtml_function_coverage=1 00:29:58.775 --rc genhtml_legend=1 00:29:58.775 --rc geninfo_all_blocks=1 00:29:58.775 --rc geninfo_unexecuted_blocks=1 00:29:58.775 00:29:58.775 ' 00:29:58.775 07:26:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:29:58.775 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:58.775 --rc genhtml_branch_coverage=1 00:29:58.775 --rc genhtml_function_coverage=1 00:29:58.775 --rc genhtml_legend=1 00:29:58.775 --rc geninfo_all_blocks=1 00:29:58.775 --rc geninfo_unexecuted_blocks=1 00:29:58.775 00:29:58.775 ' 00:29:58.775 07:26:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:29:58.775 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:58.775 --rc genhtml_branch_coverage=1 00:29:58.775 --rc genhtml_function_coverage=1 00:29:58.775 --rc genhtml_legend=1 00:29:58.775 --rc geninfo_all_blocks=1 00:29:58.775 --rc geninfo_unexecuted_blocks=1 00:29:58.775 00:29:58.775 ' 00:29:58.775 07:26:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:29:58.775 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:58.775 --rc genhtml_branch_coverage=1 00:29:58.775 --rc genhtml_function_coverage=1 
00:29:58.775 --rc genhtml_legend=1 00:29:58.775 --rc geninfo_all_blocks=1 00:29:58.775 --rc geninfo_unexecuted_blocks=1 00:29:58.775 00:29:58.775 ' 00:29:58.775 07:26:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:58.775 07:26:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:29:58.775 07:26:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:58.775 07:26:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:58.775 07:26:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:58.775 07:26:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:58.775 07:26:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:58.775 07:26:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:58.775 07:26:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:58.775 07:26:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:58.775 07:26:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:58.775 07:26:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:58.775 07:26:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:29:58.775 07:26:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:29:58.775 07:26:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:58.775 07:26:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:58.775 07:26:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:58.775 07:26:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:58.775 07:26:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:58.775 07:26:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:29:58.775 07:26:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:58.775 07:26:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:58.775 07:26:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:58.775 07:26:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:58.775 07:26:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:58.775 07:26:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:58.775 07:26:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:29:58.775 07:26:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:58.775 07:26:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:29:58.775 07:26:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:58.775 07:26:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:58.775 07:26:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:58.775 07:26:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:58.776 07:26:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:58.776 07:26:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:58.776 07:26:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:58.776 07:26:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:58.776 07:26:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:58.776 07:26:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:58.776 07:26:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:58.776 07:26:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:58.776 07:26:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:29:58.776 07:26:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:58.776 07:26:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:58.776 07:26:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:58.776 07:26:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:58.776 07:26:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:58.776 07:26:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:58.776 07:26:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:58.776 07:26:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:58.776 07:26:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:58.776 07:26:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:58.776 07:26:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:29:58.776 07:26:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:05.348 07:26:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:05.348 07:26:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:30:05.348 07:26:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:05.348 07:26:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:05.349 07:26:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:05.349 07:26:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:05.349 07:26:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:05.349 07:26:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:30:05.349 07:26:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:05.349 07:26:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:30:05.349 07:26:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:30:05.349 07:26:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:30:05.349 07:26:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:30:05.349 07:26:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:30:05.349 07:26:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:30:05.349 07:26:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:05.349 07:26:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:05.349 07:26:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:05.349 07:26:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:05.349 07:26:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:05.349 07:26:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:05.349 07:26:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:05.349 07:26:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:05.349 07:26:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:05.349 07:26:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:05.349 07:26:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:05.349 07:26:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:05.349 07:26:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait 
-- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:05.349 07:26:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:05.349 07:26:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:05.349 07:26:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:05.349 07:26:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:05.349 07:26:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:05.349 07:26:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:05.349 07:26:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:30:05.349 Found 0000:86:00.0 (0x8086 - 0x159b) 00:30:05.349 07:26:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:05.349 07:26:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:05.349 07:26:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:05.349 07:26:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:05.349 07:26:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:05.349 07:26:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:05.349 07:26:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:30:05.349 Found 
0000:86:00.1 (0x8086 - 0x159b) 00:30:05.349 07:26:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:05.349 07:26:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:05.349 07:26:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:05.349 07:26:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:05.349 07:26:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:05.349 07:26:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:05.349 07:26:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:05.349 07:26:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:05.349 07:26:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:05.349 07:26:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:05.349 07:26:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:05.349 07:26:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:05.349 07:26:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:05.349 07:26:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:05.349 07:26:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:05.349 07:26:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:30:05.349 Found net devices under 0000:86:00.0: cvl_0_0 00:30:05.349 07:26:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:05.349 07:26:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:05.349 07:26:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:05.349 07:26:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:05.349 07:26:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:05.349 07:26:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:05.349 07:26:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:05.349 07:26:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:05.349 07:26:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:30:05.349 Found net devices under 0000:86:00.1: cvl_0_1 00:30:05.349 07:26:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:05.349 07:26:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:05.349 07:26:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:30:05.349 07:26:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:05.349 07:26:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:05.349 07:26:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:05.349 07:26:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:05.349 07:26:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:05.349 07:26:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:05.349 07:26:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:05.349 07:26:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:05.349 07:26:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:05.349 07:26:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:05.349 07:26:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:05.349 07:26:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:05.349 07:26:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:05.349 07:26:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:05.349 07:26:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip 
-4 addr flush cvl_0_0 00:30:05.349 07:26:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:05.350 07:26:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:05.350 07:26:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:05.350 07:26:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:05.350 07:26:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:05.350 07:26:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:05.350 07:26:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:05.350 07:26:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:05.350 07:26:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:05.350 07:26:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:05.350 07:26:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:05.350 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:30:05.350 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.428 ms 00:30:05.350 00:30:05.350 --- 10.0.0.2 ping statistics --- 00:30:05.350 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:05.350 rtt min/avg/max/mdev = 0.428/0.428/0.428/0.000 ms 00:30:05.350 07:26:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:05.350 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:05.350 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.216 ms 00:30:05.350 00:30:05.350 --- 10.0.0.1 ping statistics --- 00:30:05.350 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:05.350 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:30:05.350 07:26:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:05.350 07:26:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:30:05.350 07:26:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:05.350 07:26:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:05.350 07:26:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:05.350 07:26:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:05.350 07:26:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:05.350 07:26:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:05.350 07:26:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:05.350 07:26:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:30:05.350 07:26:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:05.350 07:26:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:05.350 07:26:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:05.350 07:26:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=1394392 00:30:05.350 07:26:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 1394392 00:30:05.350 07:26:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:30:05.350 07:26:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # '[' -z 1394392 ']' 00:30:05.350 07:26:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:05.350 07:26:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # local max_retries=100 00:30:05.350 07:26:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:05.350 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:30:05.350 07:26:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # xtrace_disable 00:30:05.350 07:26:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:05.350 [2024-11-20 07:26:09.012921] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:05.350 [2024-11-20 07:26:09.013840] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 00:30:05.350 [2024-11-20 07:26:09.013873] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:05.350 [2024-11-20 07:26:09.092350] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:05.350 [2024-11-20 07:26:09.135968] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:05.350 [2024-11-20 07:26:09.136019] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:05.350 [2024-11-20 07:26:09.136026] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:05.350 [2024-11-20 07:26:09.136032] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:05.350 [2024-11-20 07:26:09.136037] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:30:05.350 [2024-11-20 07:26:09.137459] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:05.350 [2024-11-20 07:26:09.137566] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:05.350 [2024-11-20 07:26:09.137676] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:05.350 [2024-11-20 07:26:09.137677] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:05.350 [2024-11-20 07:26:09.137934] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:05.350 07:26:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:30:05.350 07:26:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@866 -- # return 0 00:30:05.350 07:26:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:05.350 07:26:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:05.350 07:26:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:05.350 07:26:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:05.350 07:26:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:30:05.350 07:26:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:05.350 07:26:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:05.350 07:26:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:05.350 07:26:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:30:05.350 07:26:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:05.350 07:26:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:05.350 [2024-11-20 07:26:09.266862] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:30:05.350 [2024-11-20 07:26:09.267176] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:30:05.350 [2024-11-20 07:26:09.267647] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:30:05.350 [2024-11-20 07:26:09.267718] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:30:05.350 07:26:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:05.350 07:26:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:05.350 07:26:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:05.350 07:26:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:05.350 [2024-11-20 07:26:09.278313] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:05.350 07:26:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:05.350 07:26:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:05.350 07:26:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:05.350 07:26:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:05.350 Malloc0 00:30:05.350 07:26:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:05.350 07:26:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:05.350 07:26:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:05.350 07:26:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:05.350 07:26:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:05.350 07:26:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:05.350 07:26:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:05.351 07:26:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:05.351 07:26:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:05.351 07:26:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:05.351 07:26:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:05.351 07:26:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:05.351 [2024-11-20 07:26:09.346369] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:05.351 07:26:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:05.351 07:26:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=1394417 00:30:05.351 07:26:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:30:05.351 07:26:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:30:05.351 07:26:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=1394419 00:30:05.351 07:26:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:30:05.351 07:26:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:30:05.351 07:26:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:30:05.351 07:26:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:05.351 07:26:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:30:05.351 07:26:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:05.351 { 00:30:05.351 "params": { 00:30:05.351 "name": "Nvme$subsystem", 00:30:05.351 "trtype": "$TEST_TRANSPORT", 00:30:05.351 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:05.351 "adrfam": "ipv4", 00:30:05.351 "trsvcid": "$NVMF_PORT", 00:30:05.351 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:05.351 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:05.351 "hdgst": ${hdgst:-false}, 00:30:05.351 "ddgst": ${ddgst:-false} 00:30:05.351 }, 00:30:05.351 "method": "bdev_nvme_attach_controller" 00:30:05.351 } 00:30:05.351 EOF 00:30:05.351 )") 00:30:05.351 07:26:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=1394421 00:30:05.351 07:26:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:30:05.351 07:26:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:30:05.351 07:26:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:05.351 07:26:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:05.351 { 00:30:05.351 "params": { 00:30:05.351 "name": "Nvme$subsystem", 00:30:05.351 "trtype": "$TEST_TRANSPORT", 00:30:05.351 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:05.351 "adrfam": "ipv4", 00:30:05.351 "trsvcid": "$NVMF_PORT", 00:30:05.351 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:05.351 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:05.351 "hdgst": ${hdgst:-false}, 00:30:05.351 "ddgst": ${ddgst:-false} 00:30:05.351 }, 00:30:05.351 "method": "bdev_nvme_attach_controller" 00:30:05.351 } 00:30:05.351 EOF 00:30:05.351 )") 00:30:05.351 07:26:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:30:05.351 07:26:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:30:05.351 07:26:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=1394424 00:30:05.351 07:26:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:30:05.351 07:26:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:30:05.351 07:26:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:30:05.351 07:26:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:30:05.351 07:26:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:05.351 07:26:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 
0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:30:05.351 07:26:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:05.351 { 00:30:05.351 "params": { 00:30:05.351 "name": "Nvme$subsystem", 00:30:05.351 "trtype": "$TEST_TRANSPORT", 00:30:05.351 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:05.351 "adrfam": "ipv4", 00:30:05.351 "trsvcid": "$NVMF_PORT", 00:30:05.351 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:05.351 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:05.351 "hdgst": ${hdgst:-false}, 00:30:05.351 "ddgst": ${ddgst:-false} 00:30:05.351 }, 00:30:05.351 "method": "bdev_nvme_attach_controller" 00:30:05.351 } 00:30:05.351 EOF 00:30:05.351 )") 00:30:05.351 07:26:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:30:05.351 07:26:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:30:05.351 07:26:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:30:05.351 07:26:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:30:05.351 07:26:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:05.351 07:26:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:05.351 { 00:30:05.351 "params": { 00:30:05.351 "name": "Nvme$subsystem", 00:30:05.351 "trtype": "$TEST_TRANSPORT", 00:30:05.351 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:05.351 "adrfam": "ipv4", 00:30:05.351 "trsvcid": "$NVMF_PORT", 00:30:05.351 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:05.351 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:05.351 "hdgst": ${hdgst:-false}, 00:30:05.351 "ddgst": ${ddgst:-false} 00:30:05.351 }, 00:30:05.351 "method": 
"bdev_nvme_attach_controller" 00:30:05.351 } 00:30:05.351 EOF 00:30:05.351 )") 00:30:05.351 07:26:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:30:05.351 07:26:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 1394417 00:30:05.351 07:26:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:30:05.351 07:26:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:30:05.351 07:26:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:30:05.351 07:26:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:30:05.351 07:26:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:30:05.351 07:26:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:05.351 "params": { 00:30:05.351 "name": "Nvme1", 00:30:05.351 "trtype": "tcp", 00:30:05.351 "traddr": "10.0.0.2", 00:30:05.351 "adrfam": "ipv4", 00:30:05.351 "trsvcid": "4420", 00:30:05.351 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:05.351 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:05.351 "hdgst": false, 00:30:05.351 "ddgst": false 00:30:05.351 }, 00:30:05.351 "method": "bdev_nvme_attach_controller" 00:30:05.351 }' 00:30:05.351 07:26:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:30:05.351 07:26:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:30:05.351 07:26:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:05.351 "params": { 00:30:05.351 "name": "Nvme1", 00:30:05.351 "trtype": "tcp", 00:30:05.351 "traddr": "10.0.0.2", 00:30:05.351 "adrfam": "ipv4", 00:30:05.351 "trsvcid": "4420", 00:30:05.351 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:05.351 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:05.351 "hdgst": false, 00:30:05.351 "ddgst": false 00:30:05.351 }, 00:30:05.352 "method": "bdev_nvme_attach_controller" 00:30:05.352 }' 00:30:05.352 07:26:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:30:05.352 07:26:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:05.352 "params": { 00:30:05.352 "name": "Nvme1", 00:30:05.352 "trtype": "tcp", 00:30:05.352 "traddr": "10.0.0.2", 00:30:05.352 "adrfam": "ipv4", 00:30:05.352 "trsvcid": "4420", 00:30:05.352 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:05.352 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:05.352 "hdgst": false, 00:30:05.352 "ddgst": false 00:30:05.352 }, 00:30:05.352 "method": "bdev_nvme_attach_controller" 00:30:05.352 }' 00:30:05.352 07:26:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:30:05.352 07:26:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:05.352 "params": { 00:30:05.352 "name": "Nvme1", 00:30:05.352 "trtype": "tcp", 00:30:05.352 "traddr": "10.0.0.2", 00:30:05.352 "adrfam": "ipv4", 00:30:05.352 "trsvcid": "4420", 00:30:05.352 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:05.352 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:05.352 "hdgst": false, 00:30:05.352 "ddgst": false 00:30:05.352 }, 00:30:05.352 "method": "bdev_nvme_attach_controller" 00:30:05.352 }' 00:30:05.352 [2024-11-20 07:26:09.394155] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 
initialization... 00:30:05.352 [2024-11-20 07:26:09.394198] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:30:05.352 [2024-11-20 07:26:09.397980] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 00:30:05.352 [2024-11-20 07:26:09.398031] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:30:05.352 [2024-11-20 07:26:09.400952] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 00:30:05.352 [2024-11-20 07:26:09.400994] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:30:05.352 [2024-11-20 07:26:09.402291] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 
00:30:05.352 [2024-11-20 07:26:09.402335] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:30:05.352 [2024-11-20 07:26:09.589969] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:05.352 [2024-11-20 07:26:09.634544] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:30:05.352 [2024-11-20 07:26:09.644633] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:05.352 [2024-11-20 07:26:09.682022] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:30:05.352 [2024-11-20 07:26:09.744071] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:05.352 [2024-11-20 07:26:09.785154] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:05.352 [2024-11-20 07:26:09.791834] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:30:05.352 [2024-11-20 07:26:09.828256] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:30:05.610 Running I/O for 1 seconds... 00:30:05.610 Running I/O for 1 seconds... 00:30:05.610 Running I/O for 1 seconds... 00:30:05.610 Running I/O for 1 seconds... 
00:30:06.612 245520.00 IOPS, 959.06 MiB/s [2024-11-20T06:26:11.168Z] 8249.00 IOPS, 32.22 MiB/s 00:30:06.612 Latency(us) 00:30:06.612 [2024-11-20T06:26:11.168Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:06.612 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:30:06.612 Nvme1n1 : 1.00 245139.94 957.58 0.00 0.00 519.40 227.95 1531.55 00:30:06.612 [2024-11-20T06:26:11.168Z] =================================================================================================================== 00:30:06.612 [2024-11-20T06:26:11.168Z] Total : 245139.94 957.58 0.00 0.00 519.40 227.95 1531.55 00:30:06.612 00:30:06.612 Latency(us) 00:30:06.612 [2024-11-20T06:26:11.168Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:06.612 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:30:06.612 Nvme1n1 : 1.02 8248.27 32.22 0.00 0.00 15400.40 3519.00 27810.06 00:30:06.612 [2024-11-20T06:26:11.168Z] =================================================================================================================== 00:30:06.612 [2024-11-20T06:26:11.168Z] Total : 8248.27 32.22 0.00 0.00 15400.40 3519.00 27810.06 00:30:06.612 11977.00 IOPS, 46.79 MiB/s 00:30:06.612 Latency(us) 00:30:06.612 [2024-11-20T06:26:11.168Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:06.612 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:30:06.612 Nvme1n1 : 1.01 12041.40 47.04 0.00 0.00 10596.86 1894.85 15728.64 00:30:06.612 [2024-11-20T06:26:11.168Z] =================================================================================================================== 00:30:06.612 [2024-11-20T06:26:11.168Z] Total : 12041.40 47.04 0.00 0.00 10596.86 1894.85 15728.64 00:30:06.612 7992.00 IOPS, 31.22 MiB/s 00:30:06.612 Latency(us) 00:30:06.612 [2024-11-20T06:26:11.168Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:06.612 
Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:30:06.612 Nvme1n1 : 1.01 8133.06 31.77 0.00 0.00 15706.81 2863.64 31457.28 00:30:06.612 [2024-11-20T06:26:11.168Z] =================================================================================================================== 00:30:06.612 [2024-11-20T06:26:11.168Z] Total : 8133.06 31.77 0.00 0.00 15706.81 2863.64 31457.28 00:30:06.898 07:26:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 1394419 00:30:06.898 07:26:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 1394421 00:30:06.898 07:26:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 1394424 00:30:06.898 07:26:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:06.898 07:26:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:06.898 07:26:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:06.898 07:26:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:06.898 07:26:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:30:06.898 07:26:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:30:06.898 07:26:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:06.898 07:26:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:30:06.898 07:26:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:06.898 
07:26:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:30:06.898 07:26:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:06.898 07:26:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:06.898 rmmod nvme_tcp 00:30:06.898 rmmod nvme_fabrics 00:30:06.898 rmmod nvme_keyring 00:30:06.898 07:26:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:06.898 07:26:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:30:06.898 07:26:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:30:06.898 07:26:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 1394392 ']' 00:30:06.898 07:26:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 1394392 00:30:06.898 07:26:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # '[' -z 1394392 ']' 00:30:06.898 07:26:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # kill -0 1394392 00:30:06.898 07:26:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@957 -- # uname 00:30:06.898 07:26:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:30:06.898 07:26:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1394392 00:30:06.898 07:26:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:30:06.898 07:26:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:30:06.898 07:26:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1394392' 00:30:06.898 killing process with pid 1394392 00:30:06.898 07:26:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@971 -- # kill 1394392 00:30:06.898 07:26:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@976 -- # wait 1394392 00:30:07.158 07:26:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:07.158 07:26:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:07.158 07:26:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:07.158 07:26:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:30:07.158 07:26:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:30:07.158 07:26:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:07.158 07:26:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:30:07.158 07:26:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:07.158 07:26:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:07.158 07:26:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:07.158 07:26:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:07.158 
07:26:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:09.064 07:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:09.064 00:30:09.064 real 0m10.710s 00:30:09.064 user 0m14.999s 00:30:09.064 sys 0m6.373s 00:30:09.064 07:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1128 -- # xtrace_disable 00:30:09.064 07:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:09.064 ************************************ 00:30:09.064 END TEST nvmf_bdev_io_wait 00:30:09.064 ************************************ 00:30:09.064 07:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:30:09.064 07:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:30:09.064 07:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:30:09.064 07:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:09.064 ************************************ 00:30:09.064 START TEST nvmf_queue_depth 00:30:09.064 ************************************ 00:30:09.324 07:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:30:09.324 * Looking for test storage... 
00:30:09.324 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:09.324 07:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:30:09.324 07:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lcov --version 00:30:09.324 07:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:30:09.324 07:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:30:09.324 07:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:09.324 07:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:09.324 07:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:09.324 07:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:30:09.324 07:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:30:09.324 07:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:30:09.324 07:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:30:09.324 07:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:30:09.324 07:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:30:09.324 07:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:30:09.324 07:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 
eq=0 v 00:30:09.324 07:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:30:09.324 07:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:30:09.324 07:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:09.324 07:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:09.324 07:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:30:09.324 07:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:30:09.324 07:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:09.324 07:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:30:09.324 07:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:30:09.324 07:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:30:09.324 07:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:30:09.324 07:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:09.324 07:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:30:09.324 07:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:30:09.324 07:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:09.324 07:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < 
ver2[v] )) 00:30:09.324 07:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:30:09.324 07:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:09.324 07:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:30:09.324 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:09.324 --rc genhtml_branch_coverage=1 00:30:09.325 --rc genhtml_function_coverage=1 00:30:09.325 --rc genhtml_legend=1 00:30:09.325 --rc geninfo_all_blocks=1 00:30:09.325 --rc geninfo_unexecuted_blocks=1 00:30:09.325 00:30:09.325 ' 00:30:09.325 07:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:30:09.325 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:09.325 --rc genhtml_branch_coverage=1 00:30:09.325 --rc genhtml_function_coverage=1 00:30:09.325 --rc genhtml_legend=1 00:30:09.325 --rc geninfo_all_blocks=1 00:30:09.325 --rc geninfo_unexecuted_blocks=1 00:30:09.325 00:30:09.325 ' 00:30:09.325 07:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:30:09.325 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:09.325 --rc genhtml_branch_coverage=1 00:30:09.325 --rc genhtml_function_coverage=1 00:30:09.325 --rc genhtml_legend=1 00:30:09.325 --rc geninfo_all_blocks=1 00:30:09.325 --rc geninfo_unexecuted_blocks=1 00:30:09.325 00:30:09.325 ' 00:30:09.325 07:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:30:09.325 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:09.325 --rc genhtml_branch_coverage=1 00:30:09.325 --rc genhtml_function_coverage=1 00:30:09.325 --rc genhtml_legend=1 00:30:09.325 --rc 
geninfo_all_blocks=1 00:30:09.325 --rc geninfo_unexecuted_blocks=1 00:30:09.325 00:30:09.325 ' 00:30:09.325 07:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:09.325 07:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:30:09.325 07:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:09.325 07:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:09.325 07:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:09.325 07:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:09.325 07:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:09.325 07:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:09.325 07:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:09.325 07:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:09.325 07:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:09.325 07:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:09.325 07:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:30:09.325 07:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # 
NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:30:09.325 07:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:09.325 07:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:09.325 07:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:09.325 07:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:09.325 07:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:09.325 07:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:30:09.325 07:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:09.325 07:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:09.325 07:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:09.325 07:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:09.325 07:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:09.325 07:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:09.325 07:26:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:30:09.325 07:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:09.325 07:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:30:09.325 07:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:09.325 07:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:09.325 07:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:09.325 07:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:09.325 07:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:09.325 07:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:09.325 07:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:09.325 07:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:09.325 07:26:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:09.325 07:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:09.325 07:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:30:09.325 07:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:30:09.325 07:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:30:09.325 07:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:30:09.325 07:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:09.325 07:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:09.325 07:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:09.325 07:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:09.325 07:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:09.325 07:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:09.325 07:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:09.325 07:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:09.325 07:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:09.325 07:26:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:09.325 07:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:30:09.325 07:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:15.898 07:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:15.898 07:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:30:15.898 07:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:15.898 07:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:15.898 07:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:15.898 07:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:15.898 07:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:15.898 07:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:30:15.898 07:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:15.898 07:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:30:15.898 07:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:30:15.898 07:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:30:15.898 07:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:30:15.898 
07:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:30:15.898 07:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:30:15.898 07:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:15.898 07:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:15.898 07:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:15.898 07:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:15.898 07:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:15.898 07:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:15.898 07:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:15.898 07:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:15.898 07:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:15.898 07:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:15.898 07:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:15.898 07:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:15.898 07:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:15.898 07:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:15.898 07:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:15.898 07:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:15.898 07:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:15.898 07:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:15.898 07:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:15.898 07:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:30:15.898 Found 0000:86:00.0 (0x8086 - 0x159b) 00:30:15.898 07:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:15.898 07:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:15.898 07:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:15.898 07:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:15.898 07:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:15.898 07:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:15.898 07:26:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:30:15.898 Found 0000:86:00.1 (0x8086 - 0x159b) 00:30:15.898 07:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:15.898 07:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:15.898 07:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:15.898 07:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:15.898 07:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:15.898 07:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:15.898 07:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:15.898 07:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:15.898 07:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:15.898 07:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:15.898 07:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:15.898 07:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:15.898 07:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:15.898 07:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 
)) 00:30:15.899 07:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:15.899 07:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:30:15.899 Found net devices under 0000:86:00.0: cvl_0_0 00:30:15.899 07:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:15.899 07:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:15.899 07:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:15.899 07:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:15.899 07:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:15.899 07:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:15.899 07:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:15.899 07:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:15.899 07:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:30:15.899 Found net devices under 0000:86:00.1: cvl_0_1 00:30:15.899 07:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:15.899 07:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:15.899 07:26:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:30:15.899 07:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:15.899 07:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:15.899 07:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:15.899 07:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:15.899 07:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:15.899 07:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:15.899 07:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:15.899 07:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:15.899 07:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:15.899 07:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:15.899 07:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:15.899 07:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:15.899 07:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:15.899 07:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:30:15.899 07:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:15.899 07:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:15.899 07:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:15.899 07:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:15.899 07:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:15.899 07:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:15.899 07:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:15.899 07:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:15.899 07:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:15.899 07:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:15.899 07:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:15.899 07:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:15.899 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:30:15.899 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.454 ms 00:30:15.899 00:30:15.899 --- 10.0.0.2 ping statistics --- 00:30:15.899 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:15.899 rtt min/avg/max/mdev = 0.454/0.454/0.454/0.000 ms 00:30:15.899 07:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:15.899 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:15.899 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.174 ms 00:30:15.899 00:30:15.899 --- 10.0.0.1 ping statistics --- 00:30:15.899 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:15.899 rtt min/avg/max/mdev = 0.174/0.174/0.174/0.000 ms 00:30:15.899 07:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:15.899 07:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:30:15.899 07:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:15.899 07:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:15.899 07:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:15.899 07:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:15.899 07:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:15.899 07:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:15.899 07:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:15.899 07:26:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:30:15.899 07:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:15.899 07:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:15.899 07:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:15.899 07:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=1398220 00:30:15.899 07:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 1398220 00:30:15.899 07:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:30:15.899 07:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@833 -- # '[' -z 1398220 ']' 00:30:15.899 07:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:15.899 07:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@838 -- # local max_retries=100 00:30:15.899 07:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:15.899 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:30:15.899 07:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # xtrace_disable 00:30:15.899 07:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:15.899 [2024-11-20 07:26:19.752684] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:15.899 [2024-11-20 07:26:19.753687] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 00:30:15.899 [2024-11-20 07:26:19.753726] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:15.899 [2024-11-20 07:26:19.834761] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:15.899 [2024-11-20 07:26:19.876065] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:15.899 [2024-11-20 07:26:19.876100] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:15.899 [2024-11-20 07:26:19.876110] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:15.899 [2024-11-20 07:26:19.876118] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:15.899 [2024-11-20 07:26:19.876124] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:15.899 [2024-11-20 07:26:19.876716] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:15.899 [2024-11-20 07:26:19.945671] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:15.899 [2024-11-20 07:26:19.945904] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:30:15.899 07:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:30:15.899 07:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@866 -- # return 0 00:30:15.899 07:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:15.899 07:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:15.899 07:26:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:15.899 07:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:15.899 07:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:15.899 07:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:15.899 07:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:15.899 [2024-11-20 07:26:20.009432] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:15.899 07:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:15.900 07:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:15.900 07:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:15.900 07:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:15.900 Malloc0 00:30:15.900 07:26:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:15.900 07:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:15.900 07:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:15.900 07:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:15.900 07:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:15.900 07:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:15.900 07:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:15.900 07:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:15.900 07:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:15.900 07:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:15.900 07:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:15.900 07:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:15.900 [2024-11-20 07:26:20.089513] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:15.900 07:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:15.900 
07:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=1398444 00:30:15.900 07:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:30:15.900 07:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:15.900 07:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 1398444 /var/tmp/bdevperf.sock 00:30:15.900 07:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@833 -- # '[' -z 1398444 ']' 00:30:15.900 07:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:15.900 07:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@838 -- # local max_retries=100 00:30:15.900 07:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:15.900 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:30:15.900 07:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # xtrace_disable 00:30:15.900 07:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:15.900 [2024-11-20 07:26:20.141807] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 
00:30:15.900 [2024-11-20 07:26:20.141853] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1398444 ] 00:30:15.900 [2024-11-20 07:26:20.217244] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:15.900 [2024-11-20 07:26:20.260539] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:15.900 07:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:30:15.900 07:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@866 -- # return 0 00:30:15.900 07:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:15.900 07:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:15.900 07:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:16.159 NVMe0n1 00:30:16.159 07:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:16.159 07:26:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:30:16.159 Running I/O for 10 seconds... 
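The bdevperf summary that follows reports both IOPS and MiB/s for the qd=1024 verify run; the two figures are consistent by construction, since MiB/s is just IOPS × IO size / 2^20. A quick cross-check using this run's reported numbers (12212.96 IOPS, 4096-byte IOs):

```shell
# Cross-check bdevperf's reported throughput: MiB/s = IOPS * io_size / 2^20.
# 12212.96 IOPS and 4096-byte IOs are the figures from this run's summary.
iops=12212.96
io_size=4096
mibps=$(awk -v i="$iops" -v s="$io_size" 'BEGIN { printf "%.2f", i * s / 1048576 }')
echo "$mibps"   # matches the 47.71 MiB/s in the summary below
```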
00:30:18.034 11269.00 IOPS, 44.02 MiB/s [2024-11-20T06:26:23.973Z] 11778.50 IOPS, 46.01 MiB/s [2024-11-20T06:26:24.911Z] 11951.00 IOPS, 46.68 MiB/s [2024-11-20T06:26:25.850Z] 12024.00 IOPS, 46.97 MiB/s [2024-11-20T06:26:26.788Z] 12076.80 IOPS, 47.17 MiB/s [2024-11-20T06:26:27.725Z] 12120.00 IOPS, 47.34 MiB/s [2024-11-20T06:26:28.663Z] 12140.14 IOPS, 47.42 MiB/s [2024-11-20T06:26:29.601Z] 12164.62 IOPS, 47.52 MiB/s [2024-11-20T06:26:30.980Z] 12173.11 IOPS, 47.55 MiB/s [2024-11-20T06:26:30.980Z] 12186.00 IOPS, 47.60 MiB/s 00:30:26.424 Latency(us) 00:30:26.424 [2024-11-20T06:26:30.980Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:26.424 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:30:26.424 Verification LBA range: start 0x0 length 0x4000 00:30:26.424 NVMe0n1 : 10.06 12212.96 47.71 0.00 0.00 83580.56 19147.91 56076.02 00:30:26.424 [2024-11-20T06:26:30.980Z] =================================================================================================================== 00:30:26.424 [2024-11-20T06:26:30.980Z] Total : 12212.96 47.71 0.00 0.00 83580.56 19147.91 56076.02 00:30:26.424 { 00:30:26.424 "results": [ 00:30:26.424 { 00:30:26.424 "job": "NVMe0n1", 00:30:26.424 "core_mask": "0x1", 00:30:26.424 "workload": "verify", 00:30:26.424 "status": "finished", 00:30:26.424 "verify_range": { 00:30:26.424 "start": 0, 00:30:26.424 "length": 16384 00:30:26.424 }, 00:30:26.424 "queue_depth": 1024, 00:30:26.424 "io_size": 4096, 00:30:26.424 "runtime": 10.062422, 00:30:26.424 "iops": 12212.96423465444, 00:30:26.424 "mibps": 47.70689154161891, 00:30:26.424 "io_failed": 0, 00:30:26.424 "io_timeout": 0, 00:30:26.424 "avg_latency_us": 83580.55621311889, 00:30:26.424 "min_latency_us": 19147.909565217393, 00:30:26.424 "max_latency_us": 56076.02086956522 00:30:26.424 } 00:30:26.424 ], 00:30:26.424 "core_count": 1 00:30:26.424 } 00:30:26.424 07:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
target/queue_depth.sh@39 -- # killprocess 1398444 00:30:26.424 07:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@952 -- # '[' -z 1398444 ']' 00:30:26.424 07:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # kill -0 1398444 00:30:26.424 07:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@957 -- # uname 00:30:26.424 07:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:30:26.424 07:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1398444 00:30:26.424 07:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:30:26.424 07:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:30:26.424 07:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1398444' 00:30:26.424 killing process with pid 1398444 00:30:26.424 07:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@971 -- # kill 1398444 00:30:26.424 Received shutdown signal, test time was about 10.000000 seconds 00:30:26.424 00:30:26.424 Latency(us) 00:30:26.424 [2024-11-20T06:26:30.980Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:26.424 [2024-11-20T06:26:30.980Z] =================================================================================================================== 00:30:26.424 [2024-11-20T06:26:30.980Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:26.424 07:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@976 -- # wait 1398444 00:30:26.424 07:26:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:30:26.424 07:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:30:26.424 07:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:26.425 07:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:30:26.425 07:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:26.425 07:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:30:26.425 07:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:26.425 07:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:26.425 rmmod nvme_tcp 00:30:26.425 rmmod nvme_fabrics 00:30:26.425 rmmod nvme_keyring 00:30:26.425 07:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:26.425 07:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:30:26.425 07:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:30:26.425 07:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 1398220 ']' 00:30:26.425 07:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 1398220 00:30:26.425 07:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@952 -- # '[' -z 1398220 ']' 00:30:26.425 07:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # kill -0 1398220 00:30:26.425 07:26:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@957 -- # uname 00:30:26.425 07:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:30:26.425 07:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1398220 00:30:26.685 07:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:30:26.685 07:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:30:26.685 07:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1398220' 00:30:26.685 killing process with pid 1398220 00:30:26.685 07:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@971 -- # kill 1398220 00:30:26.685 07:26:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@976 -- # wait 1398220 00:30:26.685 07:26:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:26.685 07:26:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:26.685 07:26:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:26.685 07:26:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:30:26.685 07:26:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:30:26.685 07:26:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:26.685 07:26:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 
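The `ipts`/`iptr` helpers seen above tag every firewall rule they add with an `SPDK_NVMF` comment, so teardown can drop exactly those rules via `iptables-save | grep -v SPDK_NVMF | iptables-restore` without touching anything else on the host. A sketch of the filtering step on canned `iptables-save`-style text (the tagged rule mirrors this run's; nothing is applied, so no root is needed):

```shell
# Simulated iptables-save output: one SPDK_NVMF-tagged rule among others.
saved='-A INPUT -i lo -j ACCEPT
-A INPUT -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment "SPDK_NVMF: example tag"
-A INPUT -j DROP'
# iptr's cleanup keeps every rule except the SPDK_NVMF-tagged ones; the
# filtered text would normally be piped back into iptables-restore.
kept=$(printf '%s\n' "$saved" | grep -v SPDK_NVMF)
printf '%s\n' "$kept"
```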
00:30:26.685 07:26:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:26.685 07:26:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:26.685 07:26:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:26.685 07:26:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:26.685 07:26:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:29.225 07:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:29.225 00:30:29.225 real 0m19.631s 00:30:29.225 user 0m22.758s 00:30:29.225 sys 0m6.172s 00:30:29.225 07:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1128 -- # xtrace_disable 00:30:29.225 07:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:29.225 ************************************ 00:30:29.225 END TEST nvmf_queue_depth 00:30:29.225 ************************************ 00:30:29.225 07:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:30:29.225 07:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:30:29.225 07:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:30:29.225 07:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:29.225 ************************************ 00:30:29.225 START 
TEST nvmf_target_multipath 00:30:29.225 ************************************ 00:30:29.225 07:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:30:29.225 * Looking for test storage... 00:30:29.225 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:29.225 07:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:30:29.225 07:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lcov --version 00:30:29.225 07:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:30:29.225 07:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:30:29.225 07:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:29.225 07:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:29.225 07:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:29.225 07:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:30:29.225 07:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:30:29.225 07:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:30:29.225 07:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:30:29.225 07:26:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:30:29.225 07:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:30:29.225 07:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:30:29.225 07:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:29.225 07:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:30:29.225 07:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:30:29.225 07:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:29.225 07:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:29.225 07:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:30:29.225 07:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:30:29.225 07:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:29.225 07:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:30:29.225 07:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:30:29.225 07:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:30:29.225 07:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:30:29.225 07:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:29.225 07:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:30:29.225 07:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:30:29.225 07:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:29.225 07:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:29.225 07:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:30:29.225 07:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:29.225 07:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:30:29.225 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:29.225 --rc genhtml_branch_coverage=1 00:30:29.225 --rc genhtml_function_coverage=1 00:30:29.225 --rc genhtml_legend=1 00:30:29.225 --rc geninfo_all_blocks=1 00:30:29.225 --rc geninfo_unexecuted_blocks=1 00:30:29.225 00:30:29.225 ' 00:30:29.225 07:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:30:29.225 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:29.225 --rc genhtml_branch_coverage=1 00:30:29.225 --rc genhtml_function_coverage=1 00:30:29.225 --rc genhtml_legend=1 00:30:29.225 --rc geninfo_all_blocks=1 00:30:29.225 --rc geninfo_unexecuted_blocks=1 00:30:29.225 00:30:29.225 ' 00:30:29.225 07:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:30:29.225 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:29.225 --rc genhtml_branch_coverage=1 00:30:29.225 --rc genhtml_function_coverage=1 00:30:29.225 --rc genhtml_legend=1 00:30:29.225 --rc geninfo_all_blocks=1 00:30:29.225 --rc geninfo_unexecuted_blocks=1 00:30:29.225 00:30:29.225 ' 00:30:29.225 07:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:30:29.225 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:29.225 --rc genhtml_branch_coverage=1 00:30:29.225 --rc genhtml_function_coverage=1 00:30:29.225 --rc genhtml_legend=1 00:30:29.225 --rc geninfo_all_blocks=1 00:30:29.225 --rc geninfo_unexecuted_blocks=1 00:30:29.225 00:30:29.225 ' 00:30:29.225 07:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:29.225 07:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@7 -- # uname -s 00:30:29.225 07:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:29.225 07:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:29.225 07:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:29.225 07:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:29.226 07:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:29.226 07:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:29.226 07:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:29.226 07:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:29.226 07:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:29.226 07:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:29.226 07:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:30:29.226 07:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:30:29.226 07:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:29.226 07:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:29.226 07:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:29.226 07:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:29.226 07:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:29.226 07:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:30:29.226 07:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:29.226 07:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:29.226 07:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:29.226 07:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:29.226 07:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:29.226 07:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:29.226 07:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:30:29.226 07:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:29.226 07:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:30:29.226 07:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:29.226 07:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:29.226 07:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:29.226 07:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:29.226 07:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:29.226 07:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:29.226 07:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:29.226 07:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:29.226 07:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:29.226 07:26:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:29.226 07:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:29.226 07:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:29.226 07:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:30:29.226 07:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:29.226 07:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:30:29.226 07:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:29.226 07:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:29.226 07:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:29.226 07:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:29.226 07:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:29.226 07:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:29.226 07:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:29.226 07:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:29.226 07:26:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:29.226 07:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:29.226 07:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:30:29.226 07:26:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:30:35.799 07:26:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:35.799 07:26:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:30:35.799 07:26:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:35.799 07:26:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:35.799 07:26:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:35.799 07:26:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:35.799 07:26:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:35.799 07:26:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:30:35.799 07:26:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:35.799 07:26:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:30:35.799 07:26:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:30:35.799 07:26:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:30:35.799 07:26:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:30:35.799 07:26:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:30:35.799 07:26:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:30:35.799 07:26:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:35.799 07:26:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:35.799 07:26:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:35.799 07:26:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:35.799 07:26:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:35.799 07:26:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:35.799 07:26:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:35.799 07:26:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:35.799 07:26:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:35.799 07:26:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:35.799 07:26:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:35.799 07:26:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:35.799 07:26:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:35.799 07:26:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:35.799 07:26:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:35.799 07:26:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:35.799 07:26:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:35.799 07:26:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:35.799 07:26:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:35.799 07:26:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:30:35.799 Found 0000:86:00.0 (0x8086 - 0x159b) 00:30:35.799 07:26:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:35.799 07:26:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:35.799 07:26:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:35.799 07:26:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:35.799 07:26:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:35.799 07:26:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:35.799 07:26:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:30:35.799 Found 0000:86:00.1 (0x8086 - 0x159b) 00:30:35.799 07:26:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:35.799 07:26:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:35.799 07:26:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:35.799 07:26:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:35.799 07:26:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:35.799 07:26:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:35.799 07:26:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:35.799 07:26:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:35.799 07:26:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:35.799 07:26:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:35.799 07:26:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 
-- # [[ tcp == tcp ]] 00:30:35.799 07:26:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:35.799 07:26:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:35.799 07:26:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:35.799 07:26:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:35.799 07:26:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:30:35.799 Found net devices under 0000:86:00.0: cvl_0_0 00:30:35.799 07:26:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:35.799 07:26:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:35.799 07:26:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:35.799 07:26:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:35.799 07:26:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:35.799 07:26:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:35.799 07:26:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:35.799 07:26:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:35.799 07:26:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:30:35.799 Found net devices under 0000:86:00.1: cvl_0_1 00:30:35.799 07:26:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:35.799 07:26:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:35.799 07:26:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:30:35.799 07:26:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:35.799 07:26:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:35.799 07:26:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:35.799 07:26:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:35.799 07:26:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:35.799 07:26:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:35.799 07:26:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:35.799 07:26:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:35.799 07:26:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:35.799 07:26:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:35.799 07:26:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:35.799 07:26:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:35.800 07:26:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:35.800 07:26:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:35.800 07:26:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:35.800 07:26:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:35.800 07:26:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:35.800 07:26:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:35.800 07:26:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:35.800 07:26:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:35.800 07:26:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:35.800 07:26:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:35.800 07:26:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:35.800 07:26:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:35.800 07:26:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:35.800 07:26:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:35.800 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:35.800 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.440 ms 00:30:35.800 00:30:35.800 --- 10.0.0.2 ping statistics --- 00:30:35.800 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:35.800 rtt min/avg/max/mdev = 0.440/0.440/0.440/0.000 ms 00:30:35.800 07:26:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:35.800 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:35.800 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.216 ms 00:30:35.800 00:30:35.800 --- 10.0.0.1 ping statistics --- 00:30:35.800 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:35.800 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:30:35.800 07:26:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:35.800 07:26:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:30:35.800 07:26:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:35.800 07:26:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:35.800 07:26:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:35.800 07:26:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:35.800 07:26:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:35.800 07:26:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:35.800 07:26:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:35.800 07:26:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:30:35.800 07:26:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:30:35.800 only one NIC for nvmf test 00:30:35.800 07:26:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:30:35.800 07:26:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:35.800 07:26:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:30:35.800 07:26:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:35.800 07:26:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:30:35.800 07:26:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:35.800 07:26:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:35.800 rmmod nvme_tcp 00:30:35.800 rmmod nvme_fabrics 00:30:35.800 rmmod nvme_keyring 00:30:35.800 07:26:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:35.800 07:26:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:30:35.800 07:26:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:30:35.800 07:26:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:30:35.800 07:26:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:35.800 07:26:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:35.800 07:26:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:35.800 07:26:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:30:35.800 07:26:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:30:35.800 07:26:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:35.800 07:26:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:30:35.800 07:26:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:35.800 07:26:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:35.800 07:26:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:35.800 07:26:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:35.800 07:26:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:37.178 07:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:37.178 07:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:30:37.178 07:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:30:37.178 07:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:37.178 07:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:30:37.178 07:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:37.178 07:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:30:37.178 07:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 
00:30:37.178 07:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:37.178 07:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:37.178 07:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:30:37.178 07:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:30:37.178 07:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:30:37.178 07:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:37.178 07:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:37.178 07:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:37.178 07:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:30:37.178 07:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:30:37.178 07:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:37.178 07:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:30:37.178 07:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:37.178 07:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:37.178 07:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:37.178 
07:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:37.178 07:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:37.178 07:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:37.178 00:30:37.178 real 0m8.290s 00:30:37.178 user 0m1.872s 00:30:37.178 sys 0m4.450s 00:30:37.178 07:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1128 -- # xtrace_disable 00:30:37.178 07:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:30:37.178 ************************************ 00:30:37.178 END TEST nvmf_target_multipath 00:30:37.178 ************************************ 00:30:37.178 07:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:30:37.178 07:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:30:37.178 07:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:30:37.178 07:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:37.178 ************************************ 00:30:37.178 START TEST nvmf_zcopy 00:30:37.178 ************************************ 00:30:37.178 07:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:30:37.438 * Looking for test storage... 
00:30:37.438 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:37.438 07:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:30:37.438 07:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lcov --version 00:30:37.438 07:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:30:37.438 07:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:30:37.438 07:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:37.438 07:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:37.438 07:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:37.438 07:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:30:37.438 07:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:30:37.438 07:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:30:37.438 07:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:30:37.438 07:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:30:37.438 07:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:30:37.438 07:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:30:37.438 07:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:37.438 07:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
scripts/common.sh@344 -- # case "$op" in 00:30:37.438 07:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:30:37.438 07:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:37.438 07:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:37.438 07:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:30:37.438 07:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:30:37.438 07:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:37.438 07:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:30:37.439 07:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:30:37.439 07:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:30:37.439 07:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:30:37.439 07:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:37.439 07:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:30:37.439 07:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:30:37.439 07:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:37.439 07:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:37.439 07:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:30:37.439 07:26:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:37.439 07:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:30:37.439 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:37.439 --rc genhtml_branch_coverage=1 00:30:37.439 --rc genhtml_function_coverage=1 00:30:37.439 --rc genhtml_legend=1 00:30:37.439 --rc geninfo_all_blocks=1 00:30:37.439 --rc geninfo_unexecuted_blocks=1 00:30:37.439 00:30:37.439 ' 00:30:37.439 07:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:30:37.439 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:37.439 --rc genhtml_branch_coverage=1 00:30:37.439 --rc genhtml_function_coverage=1 00:30:37.439 --rc genhtml_legend=1 00:30:37.439 --rc geninfo_all_blocks=1 00:30:37.439 --rc geninfo_unexecuted_blocks=1 00:30:37.439 00:30:37.439 ' 00:30:37.439 07:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:30:37.439 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:37.439 --rc genhtml_branch_coverage=1 00:30:37.439 --rc genhtml_function_coverage=1 00:30:37.439 --rc genhtml_legend=1 00:30:37.439 --rc geninfo_all_blocks=1 00:30:37.439 --rc geninfo_unexecuted_blocks=1 00:30:37.439 00:30:37.439 ' 00:30:37.439 07:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:30:37.439 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:37.439 --rc genhtml_branch_coverage=1 00:30:37.439 --rc genhtml_function_coverage=1 00:30:37.439 --rc genhtml_legend=1 00:30:37.439 --rc geninfo_all_blocks=1 00:30:37.439 --rc geninfo_unexecuted_blocks=1 00:30:37.439 00:30:37.439 ' 00:30:37.439 07:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:37.439 07:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:30:37.439 07:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:37.439 07:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:37.439 07:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:37.439 07:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:37.439 07:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:37.439 07:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:37.439 07:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:37.439 07:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:37.439 07:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:37.439 07:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:37.439 07:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:30:37.439 07:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:30:37.439 07:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:37.439 07:26:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:37.439 07:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:37.439 07:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:37.439 07:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:37.439 07:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:30:37.439 07:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:37.439 07:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:37.439 07:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:37.439 07:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:37.439 07:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:37.439 07:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:37.439 07:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:30:37.439 07:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:37.439 07:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:30:37.439 07:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:37.439 07:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:37.439 07:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:37.439 07:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:37.439 07:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:37.439 07:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:37.439 07:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:37.439 07:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:37.439 07:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:37.440 07:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:37.440 07:26:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:30:37.440 07:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:37.440 07:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:37.440 07:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:37.440 07:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:37.440 07:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:37.440 07:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:37.440 07:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:37.440 07:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:37.440 07:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:37.440 07:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:37.440 07:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:30:37.440 07:26:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:44.011 07:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:44.011 07:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:30:44.011 07:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:44.011 
07:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:44.011 07:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:44.011 07:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:44.011 07:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:44.011 07:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:30:44.011 07:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:44.011 07:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:30:44.011 07:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:30:44.011 07:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:30:44.011 07:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:30:44.011 07:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:30:44.011 07:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:30:44.011 07:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:44.011 07:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:44.011 07:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:44.011 07:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:44.011 07:26:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:44.011 07:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:44.011 07:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:44.011 07:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:44.011 07:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:44.011 07:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:44.011 07:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:44.011 07:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:44.011 07:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:44.011 07:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:44.011 07:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:44.011 07:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:44.011 07:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:44.011 07:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:44.011 07:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
00:30:44.011 07:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:30:44.011 Found 0000:86:00.0 (0x8086 - 0x159b) 00:30:44.011 07:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:44.011 07:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:44.011 07:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:44.011 07:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:44.011 07:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:44.011 07:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:44.011 07:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:30:44.011 Found 0000:86:00.1 (0x8086 - 0x159b) 00:30:44.011 07:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:44.011 07:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:44.011 07:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:44.011 07:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:44.012 07:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:44.012 07:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:44.012 07:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 
00:30:44.012 07:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:44.012 07:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:44.012 07:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:44.012 07:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:44.012 07:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:44.012 07:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:44.012 07:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:44.012 07:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:44.012 07:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:30:44.012 Found net devices under 0000:86:00.0: cvl_0_0 00:30:44.012 07:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:44.012 07:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:44.012 07:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:44.012 07:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:44.012 07:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:44.012 07:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
nvmf/common.sh@418 -- # [[ up == up ]] 00:30:44.012 07:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:44.012 07:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:44.012 07:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:30:44.012 Found net devices under 0000:86:00.1: cvl_0_1 00:30:44.012 07:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:44.012 07:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:44.012 07:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:30:44.012 07:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:44.012 07:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:44.012 07:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:44.012 07:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:44.012 07:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:44.012 07:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:44.012 07:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:44.012 07:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:44.012 07:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 
00:30:44.012 07:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:44.012 07:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:44.012 07:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:44.012 07:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:44.012 07:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:44.012 07:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:44.012 07:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:44.012 07:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:44.012 07:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:44.012 07:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:44.012 07:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:44.012 07:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:44.012 07:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:44.012 07:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:44.012 07:26:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:44.012 07:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:44.012 07:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:44.012 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:44.012 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.468 ms 00:30:44.012 00:30:44.012 --- 10.0.0.2 ping statistics --- 00:30:44.012 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:44.012 rtt min/avg/max/mdev = 0.468/0.468/0.468/0.000 ms 00:30:44.012 07:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:44.012 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:44.012 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.227 ms 00:30:44.012 00:30:44.012 --- 10.0.0.1 ping statistics --- 00:30:44.012 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:44.012 rtt min/avg/max/mdev = 0.227/0.227/0.227/0.000 ms 00:30:44.012 07:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:44.012 07:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:30:44.012 07:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:44.012 07:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:44.012 07:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:44.012 07:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:44.012 07:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:44.012 07:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:44.012 07:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:44.012 07:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:30:44.012 07:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:44.012 07:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:44.012 07:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:44.012 07:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # 
nvmfpid=1407084 00:30:44.012 07:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:30:44.012 07:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 1407084 00:30:44.012 07:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@833 -- # '[' -z 1407084 ']' 00:30:44.012 07:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:44.012 07:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@838 -- # local max_retries=100 00:30:44.012 07:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:44.012 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:44.012 07:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@842 -- # xtrace_disable 00:30:44.012 07:26:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:44.012 [2024-11-20 07:26:47.831417] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:44.012 [2024-11-20 07:26:47.832355] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 
00:30:44.012 [2024-11-20 07:26:47.832389] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:44.012 [2024-11-20 07:26:47.909847] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:44.012 [2024-11-20 07:26:47.950363] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:44.012 [2024-11-20 07:26:47.950399] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:44.012 [2024-11-20 07:26:47.950409] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:44.012 [2024-11-20 07:26:47.950417] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:44.012 [2024-11-20 07:26:47.950423] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:44.012 [2024-11-20 07:26:47.951021] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:44.012 [2024-11-20 07:26:48.018285] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:44.012 [2024-11-20 07:26:48.018516] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:30:44.012 07:26:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:30:44.012 07:26:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@866 -- # return 0 00:30:44.012 07:26:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:44.012 07:26:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:44.012 07:26:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:44.013 07:26:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:44.013 07:26:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:30:44.013 07:26:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:30:44.013 07:26:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:44.013 07:26:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:44.013 [2024-11-20 07:26:48.083696] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:44.013 07:26:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:44.013 07:26:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:30:44.013 07:26:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:44.013 07:26:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:44.013 
07:26:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:44.013 07:26:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:44.013 07:26:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:44.013 07:26:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:44.013 [2024-11-20 07:26:48.111910] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:44.013 07:26:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:44.013 07:26:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:44.013 07:26:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:44.013 07:26:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:44.013 07:26:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:44.013 07:26:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:30:44.013 07:26:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:44.013 07:26:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:44.013 malloc0 00:30:44.013 07:26:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:44.013 07:26:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:30:44.013 07:26:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:44.013 07:26:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:44.013 07:26:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:44.013 07:26:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:30:44.013 07:26:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:30:44.013 07:26:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:30:44.013 07:26:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:30:44.013 07:26:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:44.013 07:26:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:44.013 { 00:30:44.013 "params": { 00:30:44.013 "name": "Nvme$subsystem", 00:30:44.013 "trtype": "$TEST_TRANSPORT", 00:30:44.013 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:44.013 "adrfam": "ipv4", 00:30:44.013 "trsvcid": "$NVMF_PORT", 00:30:44.013 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:44.013 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:44.013 "hdgst": ${hdgst:-false}, 00:30:44.013 "ddgst": ${ddgst:-false} 00:30:44.013 }, 00:30:44.013 "method": "bdev_nvme_attach_controller" 00:30:44.013 } 00:30:44.013 EOF 00:30:44.013 )") 00:30:44.013 07:26:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:30:44.013 07:26:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:30:44.013 07:26:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:30:44.013 07:26:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:44.013 "params": { 00:30:44.013 "name": "Nvme1", 00:30:44.013 "trtype": "tcp", 00:30:44.013 "traddr": "10.0.0.2", 00:30:44.013 "adrfam": "ipv4", 00:30:44.013 "trsvcid": "4420", 00:30:44.013 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:44.013 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:44.013 "hdgst": false, 00:30:44.013 "ddgst": false 00:30:44.013 }, 00:30:44.013 "method": "bdev_nvme_attach_controller" 00:30:44.013 }' 00:30:44.013 [2024-11-20 07:26:48.205056] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 00:30:44.013 [2024-11-20 07:26:48.205102] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1407104 ] 00:30:44.013 [2024-11-20 07:26:48.281936] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:44.013 [2024-11-20 07:26:48.323645] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:44.013 Running I/O for 10 seconds... 
00:30:46.324 8296.00 IOPS, 64.81 MiB/s [2024-11-20T06:26:51.815Z] 8318.00 IOPS, 64.98 MiB/s [2024-11-20T06:26:52.751Z] 8359.33 IOPS, 65.31 MiB/s [2024-11-20T06:26:53.687Z] 8370.50 IOPS, 65.39 MiB/s [2024-11-20T06:26:54.623Z] 8383.00 IOPS, 65.49 MiB/s [2024-11-20T06:26:55.560Z] 8388.83 IOPS, 65.54 MiB/s [2024-11-20T06:26:56.938Z] 8392.86 IOPS, 65.57 MiB/s [2024-11-20T06:26:57.872Z] 8386.62 IOPS, 65.52 MiB/s [2024-11-20T06:26:58.809Z] 8384.89 IOPS, 65.51 MiB/s [2024-11-20T06:26:58.809Z] 8387.90 IOPS, 65.53 MiB/s 00:30:54.253 Latency(us) 00:30:54.253 [2024-11-20T06:26:58.809Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:54.253 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:30:54.253 Verification LBA range: start 0x0 length 0x1000 00:30:54.253 Nvme1n1 : 10.05 8356.41 65.28 0.00 0.00 15221.29 2279.51 44450.50 00:30:54.253 [2024-11-20T06:26:58.809Z] =================================================================================================================== 00:30:54.253 [2024-11-20T06:26:58.809Z] Total : 8356.41 65.28 0.00 0.00 15221.29 2279.51 44450.50 00:30:54.253 07:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=1408718 00:30:54.253 07:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:30:54.253 07:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:54.253 07:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:30:54.253 07:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:30:54.253 07:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:30:54.253 07:26:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:30:54.253 07:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:54.253 07:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:54.253 { 00:30:54.253 "params": { 00:30:54.253 "name": "Nvme$subsystem", 00:30:54.253 "trtype": "$TEST_TRANSPORT", 00:30:54.253 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:54.253 "adrfam": "ipv4", 00:30:54.253 "trsvcid": "$NVMF_PORT", 00:30:54.253 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:54.253 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:54.253 "hdgst": ${hdgst:-false}, 00:30:54.253 "ddgst": ${ddgst:-false} 00:30:54.253 }, 00:30:54.253 "method": "bdev_nvme_attach_controller" 00:30:54.253 } 00:30:54.253 EOF 00:30:54.253 )") 00:30:54.253 07:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:30:54.253 [2024-11-20 07:26:58.763374] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:54.253 [2024-11-20 07:26:58.763406] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:54.253 07:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:30:54.253 07:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:30:54.253 07:26:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:54.253 "params": { 00:30:54.253 "name": "Nvme1", 00:30:54.253 "trtype": "tcp", 00:30:54.253 "traddr": "10.0.0.2", 00:30:54.253 "adrfam": "ipv4", 00:30:54.253 "trsvcid": "4420", 00:30:54.253 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:54.253 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:54.253 "hdgst": false, 00:30:54.253 "ddgst": false 00:30:54.253 }, 00:30:54.253 "method": "bdev_nvme_attach_controller" 00:30:54.253 }' 00:30:54.253 [2024-11-20 07:26:58.771349] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:54.253 [2024-11-20 07:26:58.771369] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:54.253 [2024-11-20 07:26:58.779341] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:54.253 [2024-11-20 07:26:58.779353] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:54.253 [2024-11-20 07:26:58.787338] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:54.253 [2024-11-20 07:26:58.787349] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:54.253 [2024-11-20 07:26:58.795337] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:54.253 [2024-11-20 07:26:58.795348] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:54.253 [2024-11-20 07:26:58.802507] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 
00:30:54.254 [2024-11-20 07:26:58.802548] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1408718 ] 00:30:54.514 [2024-11-20 07:26:58.807353] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:54.514 [2024-11-20 07:26:58.807372] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:54.514 [2024-11-20 07:26:58.819342] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:54.514 [2024-11-20 07:26:58.819354] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:54.514 [2024-11-20 07:26:58.831340] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:54.514 [2024-11-20 07:26:58.831351] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:54.514 [2024-11-20 07:26:58.843340] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:54.514 [2024-11-20 07:26:58.843351] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:54.514 [2024-11-20 07:26:58.855341] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:54.514 [2024-11-20 07:26:58.855352] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:54.514 [2024-11-20 07:26:58.867336] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:54.514 [2024-11-20 07:26:58.867348] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:54.514 [2024-11-20 07:26:58.876760] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:54.514 [2024-11-20 07:26:58.879341] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:30:54.514 [2024-11-20 07:26:58.879358] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:54.514 [2024-11-20 07:26:58.891345] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:54.514 [2024-11-20 07:26:58.891361] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:54.514 [2024-11-20 07:26:58.903357] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:54.514 [2024-11-20 07:26:58.903371] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:54.514 [2024-11-20 07:26:58.915340] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:54.514 [2024-11-20 07:26:58.915351] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:54.514 [2024-11-20 07:26:58.920046] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:54.514 [2024-11-20 07:26:58.927341] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:54.514 [2024-11-20 07:26:58.927356] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:54.514 [2024-11-20 07:26:58.939354] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:54.514 [2024-11-20 07:26:58.939377] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:54.514 [2024-11-20 07:26:58.951350] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:54.514 [2024-11-20 07:26:58.951370] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:54.514 [2024-11-20 07:26:58.963343] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:54.514 [2024-11-20 07:26:58.963359] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:54.514 [2024-11-20 07:26:58.975351] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:54.514 [2024-11-20 07:26:58.975370] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:54.514 [2024-11-20 07:26:58.987348] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:54.514 [2024-11-20 07:26:58.987363] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:54.514 [2024-11-20 07:26:58.999342] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:54.514 [2024-11-20 07:26:58.999355] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:54.514 [2024-11-20 07:26:59.011347] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:54.514 [2024-11-20 07:26:59.011367] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:54.514 [2024-11-20 07:26:59.023345] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:54.514 [2024-11-20 07:26:59.023361] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:54.514 [2024-11-20 07:26:59.035361] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:54.514 [2024-11-20 07:26:59.035378] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:54.514 [2024-11-20 07:26:59.047342] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:54.514 [2024-11-20 07:26:59.047359] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:54.514 [2024-11-20 07:26:59.059348] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:54.514 [2024-11-20 07:26:59.059363] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:54.773 [2024-11-20 07:26:59.071346] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:30:54.773 [2024-11-20 07:26:59.071362] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:54.773 [2024-11-20 07:26:59.083342] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:54.773 [2024-11-20 07:26:59.083354] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:54.773 [2024-11-20 07:26:59.095343] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:54.773 [2024-11-20 07:26:59.095363] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:54.773 [2024-11-20 07:26:59.107338] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:54.773 [2024-11-20 07:26:59.107350] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:54.773 [2024-11-20 07:26:59.119339] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:54.773 [2024-11-20 07:26:59.119351] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:54.773 [2024-11-20 07:26:59.131344] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:54.773 [2024-11-20 07:26:59.131360] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:54.773 [2024-11-20 07:26:59.143339] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:54.773 [2024-11-20 07:26:59.143351] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:54.773 [2024-11-20 07:26:59.155339] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:54.773 [2024-11-20 07:26:59.155350] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:54.773 [2024-11-20 07:26:59.167337] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:54.773 
[2024-11-20 07:26:59.167348] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:54.773 [2024-11-20 07:26:59.179340] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:54.773 [2024-11-20 07:26:59.179355] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:54.773 [2024-11-20 07:26:59.191346] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:54.773 [2024-11-20 07:26:59.191364] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:54.773 Running I/O for 5 seconds... 00:30:54.773 [2024-11-20 07:26:59.208415] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:54.773 [2024-11-20 07:26:59.208435] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:54.773 [2024-11-20 07:26:59.223283] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:54.773 [2024-11-20 07:26:59.223303] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:54.773 [2024-11-20 07:26:59.236876] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:54.773 [2024-11-20 07:26:59.236895] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:54.773 [2024-11-20 07:26:59.252987] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:54.773 [2024-11-20 07:26:59.253010] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:54.773 [2024-11-20 07:26:59.268094] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:54.773 [2024-11-20 07:26:59.268114] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:54.773 [2024-11-20 07:26:59.279509] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:54.773 [2024-11-20 
07:26:59.279529] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:54.773 [2024-11-20 07:26:59.293310] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:54.773 [2024-11-20 07:26:59.293331] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:54.773 [2024-11-20 07:26:59.308486] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:54.773 [2024-11-20 07:26:59.308506] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:54.773 [2024-11-20 07:26:59.319373] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:54.773 [2024-11-20 07:26:59.319393] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.032 [2024-11-20 07:26:59.333066] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.032 [2024-11-20 07:26:59.333087] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.032 [2024-11-20 07:26:59.348478] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.032 [2024-11-20 07:26:59.348502] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.032 [2024-11-20 07:26:59.359042] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.032 [2024-11-20 07:26:59.359061] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.032 [2024-11-20 07:26:59.373343] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.032 [2024-11-20 07:26:59.373363] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.032 [2024-11-20 07:26:59.388191] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.032 [2024-11-20 07:26:59.388210] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: 
*ERROR*: Unable to add namespace 00:30:55.032 [2024-11-20 07:26:59.402728] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.032 [2024-11-20 07:26:59.402747] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.032 [2024-11-20 07:26:59.417281] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.032 [2024-11-20 07:26:59.417300] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.032 [2024-11-20 07:26:59.432521] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.032 [2024-11-20 07:26:59.432540] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.032 [2024-11-20 07:26:59.447672] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.032 [2024-11-20 07:26:59.447691] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.032 [2024-11-20 07:26:59.462997] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.032 [2024-11-20 07:26:59.463016] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.032 [2024-11-20 07:26:59.475120] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.032 [2024-11-20 07:26:59.475139] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.032 [2024-11-20 07:26:59.489803] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.032 [2024-11-20 07:26:59.489822] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.032 [2024-11-20 07:26:59.505182] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.032 [2024-11-20 07:26:59.505211] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.032 
[2024-11-20 07:26:59.520484] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.032 [2024-11-20 07:26:59.520503] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.032 [2024-11-20 07:26:59.535852] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.032 [2024-11-20 07:26:59.535870] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.032 [2024-11-20 07:26:59.548234] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.032 [2024-11-20 07:26:59.548252] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.032 [2024-11-20 07:26:59.560807] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.032 [2024-11-20 07:26:59.560825] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.032 [2024-11-20 07:26:59.575409] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.032 [2024-11-20 07:26:59.575428] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.292 [2024-11-20 07:26:59.587804] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.292 [2024-11-20 07:26:59.587829] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.292 [2024-11-20 07:26:59.601244] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.292 [2024-11-20 07:26:59.601263] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.292 [2024-11-20 07:26:59.616591] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.292 [2024-11-20 07:26:59.616618] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:55.292 [2024-11-20 07:26:59.631354] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.292 [2024-11-20 07:26:59.631374] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
...
16303.00 IOPS, 127.37 MiB/s [2024-11-20T06:27:00.367Z] [2024-11-20 07:27:00.211372] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:55.811 [2024-11-20 07:27:00.211392] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:30:55.811
...
[2024-11-20 07:27:01.140278] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.848 
[2024-11-20 07:27:01.140298] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.848 [2024-11-20 07:27:01.155119] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.848 [2024-11-20 07:27:01.155140] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.848 [2024-11-20 07:27:01.166831] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.848 [2024-11-20 07:27:01.166851] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.848 [2024-11-20 07:27:01.181070] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.848 [2024-11-20 07:27:01.181088] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.848 [2024-11-20 07:27:01.195905] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.848 [2024-11-20 07:27:01.195924] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.848 16330.00 IOPS, 127.58 MiB/s [2024-11-20T06:27:01.404Z] [2024-11-20 07:27:01.210700] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.848 [2024-11-20 07:27:01.210720] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.848 [2024-11-20 07:27:01.225513] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.848 [2024-11-20 07:27:01.225536] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.848 [2024-11-20 07:27:01.240426] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.848 [2024-11-20 07:27:01.240446] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.848 [2024-11-20 07:27:01.255696] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:56.848 
[2024-11-20 07:27:01.255715] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:56.848
...
[2024-11-20 07:27:02.071797] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:57.628 [2024-11-20 07:27:02.071817] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:30:57.628 [2024-11-20 07:27:02.087241] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:57.628 [2024-11-20 07:27:02.087261] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:57.628 [2024-11-20 07:27:02.100375] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:57.628 [2024-11-20 07:27:02.100394] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:57.628 [2024-11-20 07:27:02.113059] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:57.628 [2024-11-20 07:27:02.113077] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:57.628 [2024-11-20 07:27:02.127681] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:57.628 [2024-11-20 07:27:02.127700] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:57.628 [2024-11-20 07:27:02.143486] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:57.628 [2024-11-20 07:27:02.143505] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:57.628 [2024-11-20 07:27:02.156503] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:57.628 [2024-11-20 07:27:02.156523] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:57.628 [2024-11-20 07:27:02.171742] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:57.629 [2024-11-20 07:27:02.171762] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:57.888 [2024-11-20 07:27:02.182759] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:57.888 [2024-11-20 07:27:02.182779] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:57.888 [2024-11-20 07:27:02.197267] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:57.888 [2024-11-20 07:27:02.197286] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:57.888 16362.33 IOPS, 127.83 MiB/s [2024-11-20T06:27:02.444Z] [2024-11-20 07:27:02.212118] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:57.888 [2024-11-20 07:27:02.212137] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:57.888 [2024-11-20 07:27:02.226623] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:57.888 [2024-11-20 07:27:02.226643] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:57.888 [2024-11-20 07:27:02.240650] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:57.888 [2024-11-20 07:27:02.240670] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:57.888 [2024-11-20 07:27:02.256146] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:57.888 [2024-11-20 07:27:02.256165] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:57.888 [2024-11-20 07:27:02.271408] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:57.888 [2024-11-20 07:27:02.271429] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:57.888 [2024-11-20 07:27:02.285128] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:57.888 [2024-11-20 07:27:02.285149] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:57.888 [2024-11-20 07:27:02.300563] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:57.888 [2024-11-20 07:27:02.300587] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:57.888 [2024-11-20 07:27:02.315711] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:57.888 [2024-11-20 07:27:02.315730] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:57.888 [2024-11-20 07:27:02.331295] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:57.889 [2024-11-20 07:27:02.331314] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:57.889 [2024-11-20 07:27:02.344554] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:57.889 [2024-11-20 07:27:02.344573] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:57.889 [2024-11-20 07:27:02.355190] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:57.889 [2024-11-20 07:27:02.355210] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:57.889 [2024-11-20 07:27:02.369470] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:57.889 [2024-11-20 07:27:02.369489] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:57.889 [2024-11-20 07:27:02.384865] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:57.889 [2024-11-20 07:27:02.384885] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:57.889 [2024-11-20 07:27:02.399536] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:57.889 [2024-11-20 07:27:02.399555] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:57.889 [2024-11-20 07:27:02.410306] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:57.889 [2024-11-20 07:27:02.410325] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:57.889 [2024-11-20 07:27:02.425256] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:30:57.889 [2024-11-20 07:27:02.425275] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:58.148 [2024-11-20 07:27:02.440729] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:58.148 [2024-11-20 07:27:02.440749] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:58.148 [2024-11-20 07:27:02.455983] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:58.148 [2024-11-20 07:27:02.456002] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:58.148 [2024-11-20 07:27:02.468624] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:58.148 [2024-11-20 07:27:02.468643] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:58.148 [2024-11-20 07:27:02.479936] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:58.148 [2024-11-20 07:27:02.479960] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:58.148 [2024-11-20 07:27:02.493293] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:58.148 [2024-11-20 07:27:02.493312] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:58.148 [2024-11-20 07:27:02.508257] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:58.148 [2024-11-20 07:27:02.508276] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:58.148 [2024-11-20 07:27:02.518566] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:58.148 [2024-11-20 07:27:02.518585] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:58.148 [2024-11-20 07:27:02.533002] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:58.148 
[2024-11-20 07:27:02.533021] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:58.148 [2024-11-20 07:27:02.547884] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:58.148 [2024-11-20 07:27:02.547903] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:58.148 [2024-11-20 07:27:02.563607] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:58.148 [2024-11-20 07:27:02.563631] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:58.148 [2024-11-20 07:27:02.575428] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:58.148 [2024-11-20 07:27:02.575447] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:58.148 [2024-11-20 07:27:02.588941] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:58.148 [2024-11-20 07:27:02.588967] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:58.148 [2024-11-20 07:27:02.603977] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:58.148 [2024-11-20 07:27:02.603996] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:58.148 [2024-11-20 07:27:02.620392] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:58.148 [2024-11-20 07:27:02.620411] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:58.148 [2024-11-20 07:27:02.635774] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:58.148 [2024-11-20 07:27:02.635793] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:58.148 [2024-11-20 07:27:02.650650] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:58.148 [2024-11-20 07:27:02.650670] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:58.148 [2024-11-20 07:27:02.664616] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:58.148 [2024-11-20 07:27:02.664636] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:58.148 [2024-11-20 07:27:02.674718] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:58.148 [2024-11-20 07:27:02.674737] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:58.148 [2024-11-20 07:27:02.689207] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:58.148 [2024-11-20 07:27:02.689226] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:58.407 [2024-11-20 07:27:02.704725] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:58.407 [2024-11-20 07:27:02.704745] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:58.407 [2024-11-20 07:27:02.719534] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:58.407 [2024-11-20 07:27:02.719553] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:58.407 [2024-11-20 07:27:02.731582] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:58.407 [2024-11-20 07:27:02.731600] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:58.407 [2024-11-20 07:27:02.745207] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:58.407 [2024-11-20 07:27:02.745226] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:58.407 [2024-11-20 07:27:02.760457] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:58.407 [2024-11-20 07:27:02.760476] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:30:58.407 [2024-11-20 07:27:02.775639] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:58.407 [2024-11-20 07:27:02.775658] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:58.407 [2024-11-20 07:27:02.791430] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:58.407 [2024-11-20 07:27:02.791449] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:58.407 [2024-11-20 07:27:02.804749] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:58.407 [2024-11-20 07:27:02.804769] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:58.407 [2024-11-20 07:27:02.819697] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:58.407 [2024-11-20 07:27:02.819716] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:58.407 [2024-11-20 07:27:02.835251] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:58.407 [2024-11-20 07:27:02.835275] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:58.407 [2024-11-20 07:27:02.848788] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:58.407 [2024-11-20 07:27:02.848806] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:58.407 [2024-11-20 07:27:02.863765] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:58.407 [2024-11-20 07:27:02.863783] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:58.407 [2024-11-20 07:27:02.875908] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:58.407 [2024-11-20 07:27:02.875926] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:58.407 [2024-11-20 07:27:02.891284] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:58.407 [2024-11-20 07:27:02.891303] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:58.407 [2024-11-20 07:27:02.905393] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:58.407 [2024-11-20 07:27:02.905412] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:58.407 [2024-11-20 07:27:02.920385] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:58.407 [2024-11-20 07:27:02.920404] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:58.407 [2024-11-20 07:27:02.935664] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:58.407 [2024-11-20 07:27:02.935682] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:58.407 [2024-11-20 07:27:02.951440] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:58.407 [2024-11-20 07:27:02.951460] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:58.665 [2024-11-20 07:27:02.962428] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:58.665 [2024-11-20 07:27:02.962447] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:58.665 [2024-11-20 07:27:02.977057] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:58.665 [2024-11-20 07:27:02.977078] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:58.665 [2024-11-20 07:27:02.992739] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:58.666 [2024-11-20 07:27:02.992760] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:58.666 [2024-11-20 07:27:03.007807] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:30:58.666 [2024-11-20 07:27:03.007826] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:58.666 [2024-11-20 07:27:03.020114] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:58.666 [2024-11-20 07:27:03.020133] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:58.666 [2024-11-20 07:27:03.031756] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:58.666 [2024-11-20 07:27:03.031774] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:58.666 [2024-11-20 07:27:03.045653] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:58.666 [2024-11-20 07:27:03.045672] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:58.666 [2024-11-20 07:27:03.060999] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:58.666 [2024-11-20 07:27:03.061019] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:58.666 [2024-11-20 07:27:03.076321] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:58.666 [2024-11-20 07:27:03.076341] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:58.666 [2024-11-20 07:27:03.091593] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:58.666 [2024-11-20 07:27:03.091613] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:58.666 [2024-11-20 07:27:03.105473] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:58.666 [2024-11-20 07:27:03.105498] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:58.666 [2024-11-20 07:27:03.120862] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:58.666 
[2024-11-20 07:27:03.120881] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:58.666 [2024-11-20 07:27:03.135970] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:58.666 [2024-11-20 07:27:03.135989] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:58.666 [2024-11-20 07:27:03.150808] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:58.666 [2024-11-20 07:27:03.150827] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:58.666 [2024-11-20 07:27:03.164658] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:58.666 [2024-11-20 07:27:03.164677] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:58.666 [2024-11-20 07:27:03.175353] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:58.666 [2024-11-20 07:27:03.175372] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:58.666 [2024-11-20 07:27:03.189139] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:58.666 [2024-11-20 07:27:03.189159] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:58.666 [2024-11-20 07:27:03.204060] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:58.666 [2024-11-20 07:27:03.204078] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:58.925 16374.00 IOPS, 127.92 MiB/s [2024-11-20T06:27:03.481Z] [2024-11-20 07:27:03.219250] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:58.925 [2024-11-20 07:27:03.219270] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:58.925 [2024-11-20 07:27:03.230085] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:58.925 
[2024-11-20 07:27:03.230104] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:58.925 [2024-11-20 07:27:03.245479] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:58.925 [2024-11-20 07:27:03.245499] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:58.925 [2024-11-20 07:27:03.260318] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:58.925 [2024-11-20 07:27:03.260337] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:58.925 [2024-11-20 07:27:03.275806] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:58.925 [2024-11-20 07:27:03.275825] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:58.925 [2024-11-20 07:27:03.291294] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:58.925 [2024-11-20 07:27:03.291315] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:58.925 [2024-11-20 07:27:03.305252] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:58.925 [2024-11-20 07:27:03.305272] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:58.925 [2024-11-20 07:27:03.320642] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:58.925 [2024-11-20 07:27:03.320662] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:58.925 [2024-11-20 07:27:03.335923] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:58.925 [2024-11-20 07:27:03.335942] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:58.925 [2024-11-20 07:27:03.351613] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:58.925 [2024-11-20 07:27:03.351632] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:58.925 [2024-11-20 07:27:03.363573] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:58.925 [2024-11-20 07:27:03.363591] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:58.925 [2024-11-20 07:27:03.377173] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:58.925 [2024-11-20 07:27:03.377194] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:58.925 [2024-11-20 07:27:03.392283] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:58.925 [2024-11-20 07:27:03.392302] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:58.925 [2024-11-20 07:27:03.407486] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:58.925 [2024-11-20 07:27:03.407507] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:58.925 [2024-11-20 07:27:03.418838] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:58.925 [2024-11-20 07:27:03.418858] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:58.925 [2024-11-20 07:27:03.433272] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:58.925 [2024-11-20 07:27:03.433292] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:58.925 [2024-11-20 07:27:03.448347] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:58.925 [2024-11-20 07:27:03.448367] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:58.925 [2024-11-20 07:27:03.463611] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:58.925 [2024-11-20 07:27:03.463631] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:30:58.925 [2024-11-20 07:27:03.474008] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:58.925 [2024-11-20 07:27:03.474030] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:59.184 [2024-11-20 07:27:03.489229] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:59.184 [2024-11-20 07:27:03.489248] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:59.184 [2024-11-20 07:27:03.504422] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:59.184 [2024-11-20 07:27:03.504443] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:59.184 [2024-11-20 07:27:03.515048] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:59.184 [2024-11-20 07:27:03.515067] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:59.184 [2024-11-20 07:27:03.529370] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:59.184 [2024-11-20 07:27:03.529389] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:59.184 [2024-11-20 07:27:03.544371] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:59.184 [2024-11-20 07:27:03.544391] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:59.184 [2024-11-20 07:27:03.559257] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:59.184 [2024-11-20 07:27:03.559276] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:59.184 [2024-11-20 07:27:03.574033] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:59.184 [2024-11-20 07:27:03.574052] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:59.184 [2024-11-20 07:27:03.589023] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:59.184 [2024-11-20 07:27:03.589042] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:59.184 [... the same pair of errors repeats at roughly 15 ms intervals from 07:27:03.604 through 07:27:04.152 ...] 00:30:59.703 [2024-11-20 07:27:04.167272] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:59.703 [2024-11-20 07:27:04.167296] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:59.703 [2024-11-20 07:27:04.181630] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:59.703 [2024-11-20 07:27:04.181649] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:59.703 [2024-11-20 07:27:04.196313] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:59.703 [2024-11-20 07:27:04.196331] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:59.703 [2024-11-20 07:27:04.211395] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:59.703 [2024-11-20 07:27:04.211414] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:59.703 16363.20 IOPS, 127.84 MiB/s [2024-11-20T06:27:04.259Z] [2024-11-20 07:27:04.219344] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:59.703 [2024-11-20 07:27:04.219361] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:59.703 00:30:59.703 Latency(us) 00:30:59.703 [2024-11-20T06:27:04.259Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:59.703 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:30:59.703 Nvme1n1 : 5.01 16366.22 127.86 0.00 0.00 7814.24 2023.07 13335.15 00:30:59.703 [2024-11-20T06:27:04.259Z] =================================================================================================================== 00:30:59.703 [2024-11-20T06:27:04.259Z] Total : 16366.22 127.86 0.00 0.00 7814.24 2023.07 13335.15 00:30:59.703 [2024-11-20 07:27:04.231345] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:59.703 [2024-11-20 07:27:04.231361] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:59.703 [2024-11-20 07:27:04.243349] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:59.703 [2024-11-20 07:27:04.243362] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:59.962 [2024-11-20 07:27:04.255358] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:59.963 [2024-11-20 07:27:04.255384] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:59.963 [2024-11-20 07:27:04.267348] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:59.963 [2024-11-20 07:27:04.267364] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:59.963 [2024-11-20 07:27:04.279345] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:59.963 [2024-11-20 07:27:04.279359] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:59.963 [2024-11-20 07:27:04.291342] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:59.963 [2024-11-20 07:27:04.291357] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:59.963 [2024-11-20 07:27:04.303340] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:59.963 [2024-11-20 07:27:04.303354] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:59.963 [2024-11-20 07:27:04.315340] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:59.963 [2024-11-20 07:27:04.315353] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:59.963 [2024-11-20 07:27:04.327337] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:59.963 [2024-11-20 07:27:04.327348] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:30:59.963 [2024-11-20 07:27:04.339335] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:59.963 [2024-11-20 07:27:04.339344] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:59.963 [2024-11-20 07:27:04.351343] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:59.963 [2024-11-20 07:27:04.351354] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:59.963 [2024-11-20 07:27:04.363336] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:59.963 [2024-11-20 07:27:04.363351] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:59.963 [2024-11-20 07:27:04.375340] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:30:59.963 [2024-11-20 07:27:04.375349] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:59.963 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (1408718) - No such process 00:30:59.963 07:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 1408718 00:30:59.963 07:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:59.963 07:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:59.963 07:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:59.963 07:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:59.963 07:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:30:59.963 07:27:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:59.963 07:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:59.963 delay0 00:30:59.963 07:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:59.963 07:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:30:59.963 07:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:59.963 07:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:59.963 07:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:59.963 07:27:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:31:00.277 [2024-11-20 07:27:04.525931] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:31:06.906 Initializing NVMe Controllers 00:31:06.906 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:06.906 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:06.906 Initialization complete. Launching workers. 
00:31:06.906 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 297, failed: 12257 00:31:06.906 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 12488, failed to submit 66 00:31:06.906 success 12360, unsuccessful 128, failed 0 00:31:06.906 07:27:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:31:06.906 07:27:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:31:06.906 07:27:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:06.906 07:27:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:31:07.165 07:27:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:07.165 07:27:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:31:07.165 07:27:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:07.165 07:27:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:07.165 rmmod nvme_tcp 00:31:07.165 rmmod nvme_fabrics 00:31:07.165 rmmod nvme_keyring 00:31:07.165 07:27:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:07.165 07:27:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:31:07.165 07:27:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:31:07.165 07:27:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 1407084 ']' 00:31:07.165 07:27:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 1407084 00:31:07.165 07:27:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
common/autotest_common.sh@952 -- # '[' -z 1407084 ']' 00:31:07.165 07:27:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@956 -- # kill -0 1407084 00:31:07.165 07:27:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@957 -- # uname 00:31:07.165 07:27:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:31:07.165 07:27:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1407084 00:31:07.165 07:27:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:31:07.165 07:27:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:31:07.165 07:27:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1407084' 00:31:07.165 killing process with pid 1407084 00:31:07.165 07:27:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@971 -- # kill 1407084 00:31:07.165 07:27:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@976 -- # wait 1407084 00:31:07.424 07:27:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:07.424 07:27:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:07.425 07:27:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:07.425 07:27:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:31:07.425 07:27:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:31:07.425 07:27:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 
00:31:07.425 07:27:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:31:07.425 07:27:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:07.425 07:27:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:07.425 07:27:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:07.425 07:27:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:07.425 07:27:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:09.332 07:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:09.332 00:31:09.332 real 0m32.125s 00:31:09.332 user 0m41.616s 00:31:09.332 sys 0m12.768s 00:31:09.332 07:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1128 -- # xtrace_disable 00:31:09.332 07:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:09.332 ************************************ 00:31:09.332 END TEST nvmf_zcopy 00:31:09.332 ************************************ 00:31:09.332 07:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:31:09.332 07:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:31:09.332 07:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:31:09.332 07:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:09.332 
************************************ 00:31:09.332 START TEST nvmf_nmic 00:31:09.332 ************************************ 00:31:09.332 07:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:31:09.592 * Looking for test storage... 00:31:09.592 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:09.592 07:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:31:09.592 07:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1691 -- # lcov --version 00:31:09.592 07:27:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:31:09.592 07:27:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:31:09.592 07:27:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:09.592 07:27:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:09.592 07:27:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:09.592 07:27:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:31:09.592 07:27:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:31:09.592 07:27:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:31:09.592 07:27:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:31:09.592 07:27:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:31:09.592 07:27:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:31:09.592 07:27:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:31:09.592 07:27:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:09.592 07:27:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:31:09.592 07:27:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:31:09.592 07:27:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:09.592 07:27:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:09.592 07:27:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:31:09.592 07:27:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:31:09.592 07:27:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:09.592 07:27:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:31:09.592 07:27:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:31:09.592 07:27:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:31:09.592 07:27:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:31:09.592 07:27:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:09.592 07:27:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:31:09.592 07:27:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:31:09.592 07:27:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:09.592 07:27:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:09.592 07:27:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:31:09.592 07:27:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:09.592 07:27:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:31:09.592 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:09.592 --rc genhtml_branch_coverage=1 00:31:09.592 --rc genhtml_function_coverage=1 00:31:09.592 --rc genhtml_legend=1 00:31:09.592 --rc geninfo_all_blocks=1 00:31:09.592 --rc geninfo_unexecuted_blocks=1 00:31:09.592 00:31:09.592 ' 00:31:09.592 07:27:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:31:09.592 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:09.592 --rc genhtml_branch_coverage=1 00:31:09.592 --rc genhtml_function_coverage=1 00:31:09.592 --rc genhtml_legend=1 00:31:09.592 --rc geninfo_all_blocks=1 00:31:09.592 --rc geninfo_unexecuted_blocks=1 00:31:09.592 00:31:09.592 ' 00:31:09.592 07:27:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:31:09.592 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:09.592 --rc genhtml_branch_coverage=1 00:31:09.592 --rc genhtml_function_coverage=1 00:31:09.592 --rc genhtml_legend=1 00:31:09.592 --rc geninfo_all_blocks=1 00:31:09.592 --rc geninfo_unexecuted_blocks=1 00:31:09.592 00:31:09.592 ' 00:31:09.592 07:27:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:31:09.592 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:09.592 --rc genhtml_branch_coverage=1 00:31:09.592 --rc genhtml_function_coverage=1 00:31:09.592 --rc genhtml_legend=1 00:31:09.592 --rc geninfo_all_blocks=1 00:31:09.592 --rc geninfo_unexecuted_blocks=1 00:31:09.592 00:31:09.592 ' 00:31:09.592 07:27:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:09.592 07:27:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:31:09.592 07:27:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:09.592 07:27:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:09.592 07:27:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:09.592 07:27:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:09.592 07:27:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:09.592 07:27:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:09.592 07:27:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:09.592 07:27:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:09.592 07:27:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:09.592 07:27:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:09.592 07:27:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:31:09.592 07:27:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:31:09.592 07:27:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:09.593 07:27:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:09.593 07:27:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:09.593 07:27:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:09.593 07:27:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:09.593 07:27:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:31:09.593 07:27:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:09.593 07:27:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:09.593 07:27:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:09.593 07:27:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:09.593 07:27:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:09.593 07:27:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:09.593 07:27:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:31:09.593 07:27:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:09.593 07:27:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:31:09.593 07:27:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:09.593 07:27:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:09.593 07:27:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:09.593 07:27:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:09.593 07:27:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:09.593 07:27:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:09.593 07:27:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:09.593 07:27:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:09.593 07:27:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 
00:31:09.593 07:27:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:09.593 07:27:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:09.593 07:27:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:09.593 07:27:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:31:09.593 07:27:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:09.593 07:27:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:09.593 07:27:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:09.593 07:27:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:09.593 07:27:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:09.593 07:27:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:09.593 07:27:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:09.593 07:27:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:09.593 07:27:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:09.593 07:27:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:09.593 07:27:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:31:09.593 07:27:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:16.167 07:27:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:16.167 07:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:31:16.167 07:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:16.167 07:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:16.167 07:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:16.167 07:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:16.167 07:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:16.167 07:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:31:16.167 07:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:16.167 07:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:31:16.167 07:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:31:16.167 07:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:31:16.167 07:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:31:16.167 07:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:31:16.167 07:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:31:16.167 07:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:16.167 07:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:16.167 07:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:16.167 07:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:16.167 07:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:16.167 07:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:16.167 07:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:16.167 07:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:16.167 07:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:16.167 07:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:16.167 07:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:16.167 07:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:16.167 07:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:16.167 07:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:16.167 07:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:16.167 07:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:16.167 07:27:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:16.167 07:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:16.167 07:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:16.167 07:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:31:16.167 Found 0000:86:00.0 (0x8086 - 0x159b) 00:31:16.167 07:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:16.167 07:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:16.167 07:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:16.167 07:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:16.167 07:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:16.167 07:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:16.167 07:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:31:16.167 Found 0000:86:00.1 (0x8086 - 0x159b) 00:31:16.167 07:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:16.167 07:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:16.167 07:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:16.167 07:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:16.167 07:27:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:16.167 07:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:16.167 07:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:16.167 07:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:16.167 07:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:16.167 07:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:16.167 07:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:16.167 07:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:16.167 07:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:16.168 07:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:16.168 07:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:16.168 07:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:31:16.168 Found net devices under 0000:86:00.0: cvl_0_0 00:31:16.168 07:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:16.168 07:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:16.168 07:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:16.168 07:27:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:16.168 07:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:16.168 07:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:16.168 07:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:16.168 07:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:16.168 07:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:31:16.168 Found net devices under 0000:86:00.1: cvl_0_1 00:31:16.168 07:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:16.168 07:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:16.168 07:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:31:16.168 07:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:16.168 07:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:16.168 07:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:16.168 07:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:16.168 07:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:16.168 07:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:16.168 07:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:16.168 07:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:16.168 07:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:16.168 07:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:16.168 07:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:16.168 07:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:16.168 07:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:16.168 07:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:16.168 07:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:16.168 07:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:16.168 07:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:16.168 07:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:16.168 07:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:16.168 07:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:16.168 07:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:16.168 07:27:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:16.168 07:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:16.168 07:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:16.168 07:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:16.168 07:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:16.168 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:16.168 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.338 ms 00:31:16.168 00:31:16.168 --- 10.0.0.2 ping statistics --- 00:31:16.168 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:16.168 rtt min/avg/max/mdev = 0.338/0.338/0.338/0.000 ms 00:31:16.168 07:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:16.168 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:16.168 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.181 ms 00:31:16.168 00:31:16.168 --- 10.0.0.1 ping statistics --- 00:31:16.168 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:16.168 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:31:16.168 07:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:16.168 07:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:31:16.168 07:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:16.168 07:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:16.168 07:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:16.168 07:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:16.168 07:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:16.168 07:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:16.168 07:27:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:16.168 07:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:31:16.168 07:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:16.168 07:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:16.168 07:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:16.168 07:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=1414822 
00:31:16.168 07:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:31:16.168 07:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 1414822 00:31:16.168 07:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@833 -- # '[' -z 1414822 ']' 00:31:16.168 07:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:16.168 07:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@838 -- # local max_retries=100 00:31:16.168 07:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:16.168 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:16.168 07:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@842 -- # xtrace_disable 00:31:16.168 07:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:16.168 [2024-11-20 07:27:20.066107] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:16.168 [2024-11-20 07:27:20.067041] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 
00:31:16.168 [2024-11-20 07:27:20.067077] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:16.168 [2024-11-20 07:27:20.167110] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:16.168 [2024-11-20 07:27:20.209767] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:16.168 [2024-11-20 07:27:20.209811] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:16.168 [2024-11-20 07:27:20.209818] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:16.168 [2024-11-20 07:27:20.209825] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:16.168 [2024-11-20 07:27:20.209829] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:16.168 [2024-11-20 07:27:20.211444] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:16.168 [2024-11-20 07:27:20.211549] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:16.168 [2024-11-20 07:27:20.211632] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:16.168 [2024-11-20 07:27:20.211633] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:16.168 [2024-11-20 07:27:20.281223] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:16.168 [2024-11-20 07:27:20.281958] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:31:16.168 [2024-11-20 07:27:20.282124] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:31:16.168 [2024-11-20 07:27:20.282393] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:31:16.168 [2024-11-20 07:27:20.282467] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:31:16.428 07:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:31:16.428 07:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@866 -- # return 0 00:31:16.428 07:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:16.428 07:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:16.428 07:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:16.428 07:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:16.428 07:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:16.428 07:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:16.428 07:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:16.428 [2024-11-20 07:27:20.956448] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:16.687 07:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:16.687 07:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:31:16.687 07:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:31:16.687 07:27:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:16.687 Malloc0 00:31:16.687 07:27:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:16.687 07:27:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:31:16.687 07:27:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:16.687 07:27:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:16.687 07:27:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:16.687 07:27:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:16.687 07:27:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:16.687 07:27:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:16.687 07:27:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:16.687 07:27:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:16.687 07:27:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:16.687 07:27:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:16.687 [2024-11-20 07:27:21.040663] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:16.687 07:27:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:16.687 07:27:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:31:16.687 test case1: single bdev can't be used in multiple subsystems 00:31:16.687 07:27:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:31:16.687 07:27:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:16.687 07:27:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:16.687 07:27:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:16.687 07:27:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:31:16.687 07:27:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:16.687 07:27:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:16.687 07:27:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:16.687 07:27:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:31:16.687 07:27:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:31:16.687 07:27:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:16.687 07:27:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:16.687 [2024-11-20 07:27:21.072102] 
bdev.c:8321:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:31:16.687 [2024-11-20 07:27:21.072122] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:31:16.687 [2024-11-20 07:27:21.072130] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:16.687 request: 00:31:16.687 { 00:31:16.687 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:31:16.687 "namespace": { 00:31:16.687 "bdev_name": "Malloc0", 00:31:16.687 "no_auto_visible": false 00:31:16.687 }, 00:31:16.687 "method": "nvmf_subsystem_add_ns", 00:31:16.687 "req_id": 1 00:31:16.687 } 00:31:16.687 Got JSON-RPC error response 00:31:16.687 response: 00:31:16.687 { 00:31:16.687 "code": -32602, 00:31:16.687 "message": "Invalid parameters" 00:31:16.687 } 00:31:16.687 07:27:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:31:16.687 07:27:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:31:16.687 07:27:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:31:16.687 07:27:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:31:16.687 Adding namespace failed - expected result. 
00:31:16.687 07:27:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths'
00:31:16.687 test case2: host connect to nvmf target in multiple paths
00:31:16.687 07:27:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:31:16.687 07:27:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:16.687 07:27:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:31:16.687 [2024-11-20 07:27:21.084195] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:31:16.687 07:27:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:16.687 07:27:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:31:16.946 07:27:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421
00:31:17.204 07:27:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME
00:31:17.204 07:27:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1200 -- # local i=0
00:31:17.204 07:27:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0
00:31:17.204 07:27:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1202 -- # [[ -n '' ]]
00:31:17.204 07:27:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1207 -- # sleep 2
00:31:19.105 07:27:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( i++ <= 15 ))
00:31:19.105 07:27:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL
00:31:19.105 07:27:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME
00:31:19.105 07:27:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # nvme_devices=1
00:31:19.105 07:27:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter ))
00:31:19.105 07:27:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # return 0
00:31:19.105 07:27:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v
00:31:19.105 [global]
00:31:19.105 thread=1
00:31:19.105 invalidate=1
00:31:19.105 rw=write
00:31:19.105 time_based=1
00:31:19.105 runtime=1
00:31:19.105 ioengine=libaio
00:31:19.105 direct=1
00:31:19.105 bs=4096
00:31:19.105 iodepth=1
00:31:19.105 norandommap=0
00:31:19.105 numjobs=1
00:31:19.105
00:31:19.105 verify_dump=1
00:31:19.105 verify_backlog=512
00:31:19.105 verify_state_save=0
00:31:19.105 do_verify=1
00:31:19.105 verify=crc32c-intel
00:31:19.105 [job0]
00:31:19.105 filename=/dev/nvme0n1
00:31:19.105 Could not set queue depth (nvme0n1)
00:31:19.363 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:31:19.363 fio-3.35
00:31:19.363 Starting 1 thread
00:31:20.742
00:31:20.742 job0: (groupid=0, jobs=1): err= 0: pid=1415496: Wed Nov 20 07:27:24 2024
00:31:20.742 read: IOPS=21, BW=87.4KiB/s (89.5kB/s)(88.0KiB/1007msec)
00:31:20.742 slat (nsec): min=9056, max=23810, avg=22783.14, stdev=3074.03
00:31:20.742 clat (usec): min=40832, max=41500, avg=40988.27, stdev=130.91
00:31:20.742 lat (usec): min=40855, max=41510, avg=41011.05, stdev=128.22
00:31:20.742 clat percentiles (usec):
00:31:20.742 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157],
00:31:20.742 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157],
00:31:20.742 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157],
00:31:20.742 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681],
00:31:20.742 | 99.99th=[41681]
00:31:20.742 write: IOPS=508, BW=2034KiB/s (2083kB/s)(2048KiB/1007msec); 0 zone resets
00:31:20.742 slat (usec): min=9, max=28187, avg=65.73, stdev=1245.23
00:31:20.742 clat (usec): min=125, max=368, avg=136.63, stdev=15.16
00:31:20.742 lat (usec): min=135, max=28462, avg=202.36, stdev=1251.48
00:31:20.742 clat percentiles (usec):
00:31:20.742 | 1.00th=[ 128], 5.00th=[ 130], 10.00th=[ 131], 20.00th=[ 133],
00:31:20.742 | 30.00th=[ 133], 40.00th=[ 135], 50.00th=[ 135], 60.00th=[ 135],
00:31:20.742 | 70.00th=[ 137], 80.00th=[ 137], 90.00th=[ 141], 95.00th=[ 145],
00:31:20.742 | 99.00th=[ 184], 99.50th=[ 188], 99.90th=[ 367], 99.95th=[ 367],
00:31:20.742 | 99.99th=[ 367]
00:31:20.742 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1
00:31:20.742 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1
00:31:20.742 lat (usec) : 250=95.51%, 500=0.37%
00:31:20.742 lat (msec) : 50=4.12%
00:31:20.742 cpu : usr=0.60%, sys=0.20%, ctx=536, majf=0, minf=1
00:31:20.742 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:31:20.742 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:31:20.742 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:31:20.742 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0
00:31:20.742 latency : target=0, window=0, percentile=100.00%, depth=1
00:31:20.742
00:31:20.742 Run status group 0 (all jobs):
00:31:20.742 READ: bw=87.4KiB/s (89.5kB/s), 87.4KiB/s-87.4KiB/s (89.5kB/s-89.5kB/s), io=88.0KiB (90.1kB), run=1007-1007msec
00:31:20.742 WRITE: bw=2034KiB/s (2083kB/s), 2034KiB/s-2034KiB/s (2083kB/s-2083kB/s), io=2048KiB (2097kB), run=1007-1007msec
00:31:20.742
00:31:20.742 Disk stats (read/write):
00:31:20.742 nvme0n1: ios=45/512, merge=0/0, ticks=1765/68, in_queue=1833, util=98.60%
00:31:20.742 07:27:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:31:20.742 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s)
00:31:20.742 07:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:31:20.742 07:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1221 -- # local i=0
00:31:20.742 07:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL
00:31:20.742 07:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME
00:31:20.742 07:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL
00:31:20.742 07:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME
00:31:20.742 07:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1233 -- # return 0
00:31:20.742 07:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT
00:31:20.742 07:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini
00:31:20.742 07:27:25
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup
00:31:20.742 07:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync
00:31:20.742 07:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:31:20.742 07:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e
00:31:20.743 07:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20}
00:31:20.743 07:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:31:20.743 rmmod nvme_tcp
00:31:20.743 rmmod nvme_fabrics
00:31:20.743 rmmod nvme_keyring
00:31:20.743 07:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:31:20.743 07:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e
00:31:20.743 07:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0
00:31:20.743 07:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 1414822 ']'
00:31:20.743 07:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 1414822
00:31:20.743 07:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@952 -- # '[' -z 1414822 ']'
00:31:20.743 07:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@956 -- # kill -0 1414822
00:31:20.743 07:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@957 -- # uname
00:31:20.743 07:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:31:20.743 07:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1414822
00:31:21.002 07:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:31:21.002 07:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:31:21.002 07:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1414822'
00:31:21.002 killing process with pid 1414822
00:31:21.002 07:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@971 -- # kill 1414822
00:31:21.002 07:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@976 -- # wait 1414822
00:31:21.002 07:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:31:21.002 07:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:31:21.002 07:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:31:21.002 07:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr
00:31:21.002 07:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save
00:31:21.002 07:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:31:21.002 07:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore
00:31:21.002 07:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:31:21.002 07:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns
00:31:21.002 07:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:31:21.002 07:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:31:21.002 07:27:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:31:23.540 07:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:31:23.540
00:31:23.540 real 0m13.674s
00:31:23.540 user 0m23.973s
00:31:23.540 sys 0m6.075s
00:31:23.540 07:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1128 -- # xtrace_disable
00:31:23.540 07:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:31:23.540 ************************************
00:31:23.540 END TEST nvmf_nmic
00:31:23.540 ************************************
00:31:23.540 07:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode
00:31:23.540 07:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']'
00:31:23.540 07:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable
00:31:23.540 07:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x
00:31:23.540 ************************************
00:31:23.540 START TEST nvmf_fio_target
00:31:23.540 ************************************
00:31:23.540 07:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode
00:31:23.540 * Looking for test storage...
00:31:23.540 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:31:23.540 07:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:31:23.540 07:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lcov --version
00:31:23.540 07:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:31:23.540 07:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:31:23.540 07:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:31:23.541 07:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l
00:31:23.541 07:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l
00:31:23.541 07:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-:
00:31:23.541 07:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1
00:31:23.541 07:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-:
00:31:23.541 07:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2
00:31:23.541 07:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<'
00:31:23.541 07:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2
00:31:23.541 07:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1
00:31:23.541 07:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:31:23.541 07:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in
00:31:23.541 07:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1
00:31:23.541 07:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 ))
00:31:23.541 07:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:31:23.541 07:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1
00:31:23.541 07:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1
00:31:23.541 07:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:31:23.541 07:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1
00:31:23.541 07:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1
00:31:23.541 07:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2
00:31:23.541 07:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2
00:31:23.541 07:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:31:23.541 07:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2
00:31:23.541 07:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2
00:31:23.541 07:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:31:23.541 07:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:31:23.541 07:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0
00:31:23.541 07:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:31:23.541 07:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS=
00:31:23.541 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:31:23.541 --rc genhtml_branch_coverage=1
00:31:23.541 --rc genhtml_function_coverage=1
00:31:23.541 --rc genhtml_legend=1
00:31:23.541 --rc geninfo_all_blocks=1
00:31:23.541 --rc geninfo_unexecuted_blocks=1
00:31:23.541
00:31:23.541 '
00:31:23.541 07:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS='
00:31:23.541 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:31:23.541 --rc genhtml_branch_coverage=1
00:31:23.541 --rc genhtml_function_coverage=1
00:31:23.541 --rc genhtml_legend=1
00:31:23.541 --rc geninfo_all_blocks=1
00:31:23.541 --rc geninfo_unexecuted_blocks=1
00:31:23.541
00:31:23.541 '
00:31:23.541 07:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov
00:31:23.541 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:31:23.541 --rc genhtml_branch_coverage=1
00:31:23.541 --rc genhtml_function_coverage=1
00:31:23.541 --rc genhtml_legend=1
00:31:23.541 --rc geninfo_all_blocks=1
00:31:23.541 --rc geninfo_unexecuted_blocks=1
00:31:23.541
00:31:23.541 '
00:31:23.541 07:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1705 -- # LCOV='lcov
00:31:23.541 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:31:23.541 --rc genhtml_branch_coverage=1
00:31:23.541 --rc genhtml_function_coverage=1
00:31:23.541 --rc genhtml_legend=1
00:31:23.541 --rc geninfo_all_blocks=1
00:31:23.541 --rc geninfo_unexecuted_blocks=1
00:31:23.541
00:31:23.541 '
00:31:23.541 07:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:31:23.541 07:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s
00:31:23.541 07:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:31:23.541 07:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:31:23.541 07:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:31:23.541 07:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:31:23.541 07:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:31:23.541 07:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:31:23.541 07:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:31:23.541 07:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:31:23.541 07:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:31:23.541 07:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:31:23.541 07:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:31:23.541 07:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562
00:31:23.541 07:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:31:23.541 07:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:31:23.541 07:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:31:23.541 07:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:31:23.541 07:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:31:23.541 07:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob
00:31:23.541 07:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:31:23.541 07:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:31:23.541 07:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:31:23.541 07:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:31:23.541 07:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:31:23.541 07:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:31:23.541 07:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH
00:31:23.541 07:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:31:23.541 07:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0
00:31:23.541 07:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:31:23.541 07:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:31:23.541 07:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:31:23.541 07:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:31:23.541 07:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:31:23.541 07:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']'
00:31:23.541 07:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode)
00:31:23.541 07:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:31:23.541 07:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:31:23.541 07:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0
00:31:23.541
07:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64
00:31:23.542 07:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:31:23.542 07:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:31:23.542 07:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit
00:31:23.542 07:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:31:23.542 07:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:31:23.542 07:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs
00:31:23.542 07:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no
00:31:23.542 07:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns
00:31:23.542 07:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:31:23.542 07:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:31:23.542 07:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:31:23.542 07:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:31:23.542 07:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:31:23.542 07:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable
00:31:23.542 07:27:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x
00:31:30.117 07:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:31:30.117 07:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=()
00:31:30.117 07:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs
00:31:30.117 07:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=()
00:31:30.117 07:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:31:30.117 07:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=()
00:31:30.117 07:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers
00:31:30.117 07:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=()
00:31:30.117 07:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs
00:31:30.117 07:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=()
00:31:30.117 07:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810
00:31:30.117 07:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=()
00:31:30.117 07:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722
00:31:30.117 07:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=()
00:31:30.117 07:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx
00:31:30.117 07:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:31:30.117 07:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:31:30.117 07:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:31:30.117 07:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:31:30.117 07:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:31:30.117 07:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:31:30.117 07:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:31:30.117 07:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:31:30.117 07:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:31:30.117 07:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:31:30.117 07:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:31:30.117 07:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:31:30.117 07:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:31:30.117 07:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:31:30.117 07:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:31:30.117 07:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:31:30.117 07:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:31:30.117 07:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:31:30.117 07:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:31:30.117 07:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)'
00:31:30.117 Found 0000:86:00.0 (0x8086 - 0x159b)
00:31:30.117 07:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:31:30.117 07:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:31:30.117 07:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:31:30.117 07:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:31:30.117 07:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:31:30.117 07:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:31:30.117 07:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)'
00:31:30.117 Found 0000:86:00.1 (0x8086 - 0x159b)
00:31:30.117 07:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:31:30.117 07:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:31:30.117 07:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:31:30.117 07:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:31:30.117 07:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:31:30.117 07:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:31:30.117 07:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:31:30.117 07:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:31:30.117 07:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:31:30.117 07:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:31:30.117 07:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:31:30.117 07:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:31:30.117 07:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]]
00:31:30.117 07:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:31:30.117 07:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:31:30.117 07:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0'
00:31:30.117 Found net devices under 0000:86:00.0: cvl_0_0
00:31:30.117 07:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:31:30.117 07:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:31:30.117 07:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:31:30.117 07:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:31:30.117 07:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:31:30.117 07:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]]
00:31:30.117 07:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:31:30.117 07:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:31:30.117 07:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1'
00:31:30.117 Found net devices under 0000:86:00.1: cvl_0_1
00:31:30.117 07:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:31:30.118 07:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:31:30.118 07:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes
00:31:30.118 07:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:31:30.118 07:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:31:30.118 07:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:31:30.118 07:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:31:30.118 07:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:31:30.118 07:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:31:30.118 07:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:31:30.118 07:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:31:30.118 07:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:31:30.118 07:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:31:30.118 07:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:31:30.118 07:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:31:30.118 07:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:31:30.118 07:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:31:30.118 07:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:31:30.118 07:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:31:30.118 07:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add
cvl_0_0_ns_spdk 00:31:30.118 07:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:30.118 07:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:30.118 07:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:30.118 07:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:30.118 07:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:30.118 07:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:30.118 07:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:30.118 07:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:30.118 07:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:30.118 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:30.118 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.389 ms 00:31:30.118 00:31:30.118 --- 10.0.0.2 ping statistics --- 00:31:30.118 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:30.118 rtt min/avg/max/mdev = 0.389/0.389/0.389/0.000 ms 00:31:30.118 07:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:30.118 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:30.118 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.202 ms 00:31:30.118 00:31:30.118 --- 10.0.0.1 ping statistics --- 00:31:30.118 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:30.118 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:31:30.118 07:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:30.118 07:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:31:30.118 07:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:30.118 07:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:30.118 07:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:30.118 07:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:30.118 07:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:30.118 07:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:30.118 07:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:30.118 07:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:31:30.118 07:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:30.118 07:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:30.118 07:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:31:30.118 07:27:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=1419194 00:31:30.118 07:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 1419194 00:31:30.118 07:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:31:30.118 07:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@833 -- # '[' -z 1419194 ']' 00:31:30.118 07:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:30.118 07:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:31:30.118 07:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:30.118 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:30.118 07:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:31:30.118 07:27:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:31:30.118 [2024-11-20 07:27:33.806313] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:30.118 [2024-11-20 07:27:33.807280] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 
00:31:30.118 [2024-11-20 07:27:33.807312] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:30.118 [2024-11-20 07:27:33.887295] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:30.118 [2024-11-20 07:27:33.929615] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:30.118 [2024-11-20 07:27:33.929653] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:30.118 [2024-11-20 07:27:33.929660] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:30.118 [2024-11-20 07:27:33.929666] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:30.118 [2024-11-20 07:27:33.929671] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:30.118 [2024-11-20 07:27:33.931123] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:30.118 [2024-11-20 07:27:33.931230] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:30.118 [2024-11-20 07:27:33.931337] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:30.118 [2024-11-20 07:27:33.931338] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:30.118 [2024-11-20 07:27:34.000684] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:30.118 [2024-11-20 07:27:34.001189] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:31:30.118 [2024-11-20 07:27:34.001494] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:31:30.118 [2024-11-20 07:27:34.001918] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:31:30.118 [2024-11-20 07:27:34.001971] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:31:30.118 07:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:31:30.118 07:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@866 -- # return 0 00:31:30.118 07:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:30.118 07:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:30.118 07:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:31:30.119 07:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:30.119 07:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:31:30.119 [2024-11-20 07:27:34.236031] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:30.119 07:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:30.119 07:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:31:30.119 07:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 
512 00:31:30.377 07:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:31:30.378 07:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:30.637 07:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:31:30.637 07:27:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:30.637 07:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:31:30.637 07:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:31:30.897 07:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:31.155 07:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:31:31.155 07:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:31.414 07:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:31:31.414 07:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:31.673 07:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 
00:31:31.673 07:27:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:31:31.673 07:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:31:31.933 07:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:31:31.933 07:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:32.191 07:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:31:32.191 07:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:31:32.449 07:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:32.449 [2024-11-20 07:27:36.923935] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:32.449 07:27:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:31:32.707 07:27:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:31:32.965 07:27:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:31:33.223 07:27:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:31:33.223 07:27:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1200 -- # local i=0 00:31:33.223 07:27:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:31:33.223 07:27:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1202 -- # [[ -n 4 ]] 00:31:33.223 07:27:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1203 -- # nvme_device_counter=4 00:31:33.223 07:27:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1207 -- # sleep 2 00:31:35.123 07:27:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:31:35.123 07:27:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:31:35.123 07:27:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:31:35.381 07:27:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # nvme_devices=4 00:31:35.381 07:27:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:31:35.381 07:27:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
common/autotest_common.sh@1210 -- # return 0 00:31:35.381 07:27:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:31:35.381 [global] 00:31:35.381 thread=1 00:31:35.381 invalidate=1 00:31:35.381 rw=write 00:31:35.381 time_based=1 00:31:35.381 runtime=1 00:31:35.381 ioengine=libaio 00:31:35.381 direct=1 00:31:35.381 bs=4096 00:31:35.381 iodepth=1 00:31:35.382 norandommap=0 00:31:35.382 numjobs=1 00:31:35.382 00:31:35.382 verify_dump=1 00:31:35.382 verify_backlog=512 00:31:35.382 verify_state_save=0 00:31:35.382 do_verify=1 00:31:35.382 verify=crc32c-intel 00:31:35.382 [job0] 00:31:35.382 filename=/dev/nvme0n1 00:31:35.382 [job1] 00:31:35.382 filename=/dev/nvme0n2 00:31:35.382 [job2] 00:31:35.382 filename=/dev/nvme0n3 00:31:35.382 [job3] 00:31:35.382 filename=/dev/nvme0n4 00:31:35.382 Could not set queue depth (nvme0n1) 00:31:35.382 Could not set queue depth (nvme0n2) 00:31:35.382 Could not set queue depth (nvme0n3) 00:31:35.382 Could not set queue depth (nvme0n4) 00:31:35.640 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:35.640 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:35.640 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:35.640 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:35.640 fio-3.35 00:31:35.640 Starting 4 threads 00:31:37.027 00:31:37.027 job0: (groupid=0, jobs=1): err= 0: pid=1420437: Wed Nov 20 07:27:41 2024 00:31:37.027 read: IOPS=21, BW=85.8KiB/s (87.8kB/s)(88.0KiB/1026msec) 00:31:37.027 slat (nsec): min=9877, max=23687, avg=22644.91, stdev=2858.84 00:31:37.027 clat (usec): min=40635, max=41024, avg=40952.99, stdev=74.75 00:31:37.027 lat (usec): min=40645, 
max=41047, avg=40975.64, stdev=77.46 00:31:37.027 clat percentiles (usec): 00:31:37.027 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:31:37.027 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:31:37.027 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:31:37.027 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:31:37.027 | 99.99th=[41157] 00:31:37.027 write: IOPS=499, BW=1996KiB/s (2044kB/s)(2048KiB/1026msec); 0 zone resets 00:31:37.027 slat (usec): min=8, max=18939, avg=47.68, stdev=836.54 00:31:37.027 clat (usec): min=135, max=357, avg=192.02, stdev=24.91 00:31:37.027 lat (usec): min=143, max=19228, avg=239.70, stdev=841.20 00:31:37.027 clat percentiles (usec): 00:31:37.027 | 1.00th=[ 145], 5.00th=[ 153], 10.00th=[ 161], 20.00th=[ 172], 00:31:37.027 | 30.00th=[ 180], 40.00th=[ 188], 50.00th=[ 192], 60.00th=[ 198], 00:31:37.027 | 70.00th=[ 202], 80.00th=[ 208], 90.00th=[ 217], 95.00th=[ 241], 00:31:37.027 | 99.00th=[ 251], 99.50th=[ 289], 99.90th=[ 359], 99.95th=[ 359], 00:31:37.027 | 99.99th=[ 359] 00:31:37.027 bw ( KiB/s): min= 4096, max= 4096, per=20.52%, avg=4096.00, stdev= 0.00, samples=1 00:31:37.027 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:31:37.027 lat (usec) : 250=94.76%, 500=1.12% 00:31:37.027 lat (msec) : 50=4.12% 00:31:37.027 cpu : usr=0.49%, sys=0.29%, ctx=537, majf=0, minf=1 00:31:37.027 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:37.027 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:37.027 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:37.027 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:37.027 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:37.027 job1: (groupid=0, jobs=1): err= 0: pid=1420451: Wed Nov 20 07:27:41 2024 00:31:37.027 read: IOPS=21, BW=86.4KiB/s (88.5kB/s)(88.0KiB/1018msec) 
00:31:37.027 slat (nsec): min=10436, max=24665, avg=21372.86, stdev=2562.41 00:31:37.027 clat (usec): min=40812, max=41134, avg=40966.17, stdev=55.21 00:31:37.027 lat (usec): min=40823, max=41156, avg=40987.54, stdev=56.71 00:31:37.027 clat percentiles (usec): 00:31:37.027 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:31:37.027 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:31:37.027 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:31:37.027 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:31:37.027 | 99.99th=[41157] 00:31:37.027 write: IOPS=502, BW=2012KiB/s (2060kB/s)(2048KiB/1018msec); 0 zone resets 00:31:37.027 slat (nsec): min=4064, max=34671, avg=11067.46, stdev=2688.40 00:31:37.027 clat (usec): min=149, max=3434, avg=211.91, stdev=145.51 00:31:37.027 lat (usec): min=160, max=3446, avg=222.97, stdev=145.39 00:31:37.027 clat percentiles (usec): 00:31:37.027 | 1.00th=[ 155], 5.00th=[ 167], 10.00th=[ 174], 20.00th=[ 182], 00:31:37.027 | 30.00th=[ 190], 40.00th=[ 194], 50.00th=[ 200], 60.00th=[ 206], 00:31:37.027 | 70.00th=[ 215], 80.00th=[ 243], 90.00th=[ 247], 95.00th=[ 249], 00:31:37.027 | 99.00th=[ 281], 99.50th=[ 318], 99.90th=[ 3425], 99.95th=[ 3425], 00:31:37.027 | 99.99th=[ 3425] 00:31:37.027 bw ( KiB/s): min= 4096, max= 4096, per=20.52%, avg=4096.00, stdev= 0.00, samples=1 00:31:37.027 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:31:37.027 lat (usec) : 250=92.70%, 500=3.00% 00:31:37.027 lat (msec) : 4=0.19%, 50=4.12% 00:31:37.027 cpu : usr=0.59%, sys=0.59%, ctx=534, majf=0, minf=1 00:31:37.027 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:37.027 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:37.027 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:37.027 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:37.027 latency : 
target=0, window=0, percentile=100.00%, depth=1 00:31:37.027 job2: (groupid=0, jobs=1): err= 0: pid=1420467: Wed Nov 20 07:27:41 2024 00:31:37.027 read: IOPS=2176, BW=8707KiB/s (8916kB/s)(8716KiB/1001msec) 00:31:37.027 slat (nsec): min=7103, max=43432, avg=8387.18, stdev=1809.64 00:31:37.027 clat (usec): min=183, max=394, avg=234.30, stdev=26.73 00:31:37.027 lat (usec): min=197, max=406, avg=242.68, stdev=27.08 00:31:37.027 clat percentiles (usec): 00:31:37.027 | 1.00th=[ 202], 5.00th=[ 206], 10.00th=[ 208], 20.00th=[ 210], 00:31:37.027 | 30.00th=[ 215], 40.00th=[ 225], 50.00th=[ 235], 60.00th=[ 239], 00:31:37.027 | 70.00th=[ 243], 80.00th=[ 247], 90.00th=[ 255], 95.00th=[ 302], 00:31:37.027 | 99.00th=[ 330], 99.50th=[ 359], 99.90th=[ 379], 99.95th=[ 396], 00:31:37.027 | 99.99th=[ 396] 00:31:37.027 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:31:37.027 slat (nsec): min=10203, max=50542, avg=11652.32, stdev=1992.26 00:31:37.027 clat (usec): min=132, max=690, avg=166.99, stdev=38.98 00:31:37.027 lat (usec): min=142, max=702, avg=178.64, stdev=39.60 00:31:37.027 clat percentiles (usec): 00:31:37.027 | 1.00th=[ 135], 5.00th=[ 137], 10.00th=[ 139], 20.00th=[ 141], 00:31:37.027 | 30.00th=[ 143], 40.00th=[ 153], 50.00th=[ 161], 60.00th=[ 163], 00:31:37.027 | 70.00th=[ 167], 80.00th=[ 176], 90.00th=[ 217], 95.00th=[ 243], 00:31:37.027 | 99.00th=[ 289], 99.50th=[ 355], 99.90th=[ 586], 99.95th=[ 685], 00:31:37.027 | 99.99th=[ 693] 00:31:37.027 bw ( KiB/s): min=10984, max=10984, per=55.03%, avg=10984.00, stdev= 0.00, samples=1 00:31:37.027 iops : min= 2746, max= 2746, avg=2746.00, stdev= 0.00, samples=1 00:31:37.027 lat (usec) : 250=90.78%, 500=9.16%, 750=0.06% 00:31:37.027 cpu : usr=4.30%, sys=7.20%, ctx=4739, majf=0, minf=1 00:31:37.027 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:37.027 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:37.027 complete : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:37.027 issued rwts: total=2179,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:37.027 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:37.027 job3: (groupid=0, jobs=1): err= 0: pid=1420475: Wed Nov 20 07:27:41 2024 00:31:37.027 read: IOPS=1114, BW=4460KiB/s (4567kB/s)(4464KiB/1001msec) 00:31:37.027 slat (nsec): min=8849, max=42958, avg=10427.76, stdev=2131.78 00:31:37.027 clat (usec): min=199, max=41017, avg=568.60, stdev=3644.55 00:31:37.027 lat (usec): min=211, max=41027, avg=579.02, stdev=3645.54 00:31:37.027 clat percentiles (usec): 00:31:37.027 | 1.00th=[ 206], 5.00th=[ 215], 10.00th=[ 221], 20.00th=[ 229], 00:31:37.027 | 30.00th=[ 235], 40.00th=[ 239], 50.00th=[ 241], 60.00th=[ 243], 00:31:37.027 | 70.00th=[ 245], 80.00th=[ 247], 90.00th=[ 251], 95.00th=[ 253], 00:31:37.027 | 99.00th=[ 482], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:31:37.027 | 99.99th=[41157] 00:31:37.027 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:31:37.027 slat (usec): min=10, max=40569, avg=52.13, stdev=1140.05 00:31:37.027 clat (usec): min=139, max=628, avg=172.68, stdev=27.92 00:31:37.027 lat (usec): min=150, max=40912, avg=224.81, stdev=1146.80 00:31:37.027 clat percentiles (usec): 00:31:37.027 | 1.00th=[ 143], 5.00th=[ 147], 10.00th=[ 153], 20.00th=[ 157], 00:31:37.027 | 30.00th=[ 161], 40.00th=[ 163], 50.00th=[ 165], 60.00th=[ 169], 00:31:37.027 | 70.00th=[ 174], 80.00th=[ 184], 90.00th=[ 206], 95.00th=[ 219], 00:31:37.027 | 99.00th=[ 253], 99.50th=[ 269], 99.90th=[ 545], 99.95th=[ 627], 00:31:37.027 | 99.99th=[ 627] 00:31:37.027 bw ( KiB/s): min= 4096, max= 4096, per=20.52%, avg=4096.00, stdev= 0.00, samples=1 00:31:37.028 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:31:37.028 lat (usec) : 250=94.87%, 500=4.64%, 750=0.11% 00:31:37.028 lat (msec) : 2=0.04%, 50=0.34% 00:31:37.028 cpu : usr=1.80%, sys=5.40%, ctx=2655, majf=0, minf=1 00:31:37.028 IO depths : 
1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:37.028 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:37.028 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:37.028 issued rwts: total=1116,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:37.028 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:37.028 00:31:37.028 Run status group 0 (all jobs): 00:31:37.028 READ: bw=12.7MiB/s (13.3MB/s), 85.8KiB/s-8707KiB/s (87.8kB/s-8916kB/s), io=13.0MiB (13.7MB), run=1001-1026msec 00:31:37.028 WRITE: bw=19.5MiB/s (20.4MB/s), 1996KiB/s-9.99MiB/s (2044kB/s-10.5MB/s), io=20.0MiB (21.0MB), run=1001-1026msec 00:31:37.028 00:31:37.028 Disk stats (read/write): 00:31:37.028 nvme0n1: ios=68/512, merge=0/0, ticks=1013/94, in_queue=1107, util=91.58% 00:31:37.028 nvme0n2: ios=66/512, merge=0/0, ticks=763/104, in_queue=867, util=92.07% 00:31:37.028 nvme0n3: ios=1996/2048, merge=0/0, ticks=486/326, in_queue=812, util=90.19% 00:31:37.028 nvme0n4: ios=904/1024, merge=0/0, ticks=1519/168, in_queue=1687, util=100.00% 00:31:37.028 07:27:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:31:37.028 [global] 00:31:37.028 thread=1 00:31:37.028 invalidate=1 00:31:37.028 rw=randwrite 00:31:37.028 time_based=1 00:31:37.028 runtime=1 00:31:37.028 ioengine=libaio 00:31:37.028 direct=1 00:31:37.028 bs=4096 00:31:37.028 iodepth=1 00:31:37.028 norandommap=0 00:31:37.028 numjobs=1 00:31:37.028 00:31:37.028 verify_dump=1 00:31:37.028 verify_backlog=512 00:31:37.028 verify_state_save=0 00:31:37.028 do_verify=1 00:31:37.028 verify=crc32c-intel 00:31:37.028 [job0] 00:31:37.028 filename=/dev/nvme0n1 00:31:37.028 [job1] 00:31:37.028 filename=/dev/nvme0n2 00:31:37.028 [job2] 00:31:37.028 filename=/dev/nvme0n3 00:31:37.028 [job3] 00:31:37.028 filename=/dev/nvme0n4 00:31:37.028 Could 
not set queue depth (nvme0n1) 00:31:37.028 Could not set queue depth (nvme0n2) 00:31:37.028 Could not set queue depth (nvme0n3) 00:31:37.028 Could not set queue depth (nvme0n4) 00:31:37.285 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:37.285 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:37.285 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:37.285 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:37.285 fio-3.35 00:31:37.285 Starting 4 threads 00:31:38.655 00:31:38.655 job0: (groupid=0, jobs=1): err= 0: pid=1420864: Wed Nov 20 07:27:42 2024 00:31:38.655 read: IOPS=21, BW=87.0KiB/s (89.1kB/s)(88.0KiB/1011msec) 00:31:38.655 slat (nsec): min=9749, max=72756, avg=23446.27, stdev=11521.29 00:31:38.655 clat (usec): min=40813, max=41035, avg=40959.70, stdev=49.07 00:31:38.655 lat (usec): min=40886, max=41058, avg=40983.15, stdev=43.44 00:31:38.655 clat percentiles (usec): 00:31:38.655 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:31:38.655 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:31:38.655 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:31:38.655 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:31:38.655 | 99.99th=[41157] 00:31:38.655 write: IOPS=506, BW=2026KiB/s (2074kB/s)(2048KiB/1011msec); 0 zone resets 00:31:38.655 slat (nsec): min=8760, max=39567, avg=9872.19, stdev=2060.27 00:31:38.655 clat (usec): min=151, max=361, avg=200.93, stdev=17.95 00:31:38.655 lat (usec): min=160, max=371, avg=210.80, stdev=18.37 00:31:38.655 clat percentiles (usec): 00:31:38.655 | 1.00th=[ 161], 5.00th=[ 174], 10.00th=[ 180], 20.00th=[ 188], 00:31:38.655 | 30.00th=[ 194], 40.00th=[ 198], 50.00th=[ 202], 60.00th=[ 206], 
00:31:38.655 | 70.00th=[ 208], 80.00th=[ 212], 90.00th=[ 219], 95.00th=[ 225], 00:31:38.655 | 99.00th=[ 243], 99.50th=[ 262], 99.90th=[ 363], 99.95th=[ 363], 00:31:38.655 | 99.99th=[ 363] 00:31:38.655 bw ( KiB/s): min= 4096, max= 4096, per=50.55%, avg=4096.00, stdev= 0.00, samples=1 00:31:38.655 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:31:38.655 lat (usec) : 250=95.13%, 500=0.75% 00:31:38.655 lat (msec) : 50=4.12% 00:31:38.655 cpu : usr=0.20%, sys=0.50%, ctx=534, majf=0, minf=2 00:31:38.655 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:38.655 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:38.655 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:38.655 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:38.655 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:38.655 job1: (groupid=0, jobs=1): err= 0: pid=1420878: Wed Nov 20 07:27:42 2024 00:31:38.655 read: IOPS=22, BW=91.0KiB/s (93.2kB/s)(92.0KiB/1011msec) 00:31:38.655 slat (nsec): min=9226, max=25490, avg=21027.39, stdev=4145.07 00:31:38.655 clat (usec): min=305, max=41013, avg=39167.11, stdev=8472.06 00:31:38.655 lat (usec): min=320, max=41031, avg=39188.14, stdev=8473.52 00:31:38.655 clat percentiles (usec): 00:31:38.655 | 1.00th=[ 306], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:31:38.655 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:31:38.655 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:31:38.655 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:31:38.655 | 99.99th=[41157] 00:31:38.655 write: IOPS=506, BW=2026KiB/s (2074kB/s)(2048KiB/1011msec); 0 zone resets 00:31:38.655 slat (nsec): min=9620, max=38673, avg=11234.80, stdev=2207.30 00:31:38.655 clat (usec): min=155, max=295, avg=199.30, stdev=17.38 00:31:38.655 lat (usec): min=166, max=334, avg=210.53, stdev=17.81 00:31:38.655 
clat percentiles (usec): 00:31:38.655 | 1.00th=[ 161], 5.00th=[ 174], 10.00th=[ 178], 20.00th=[ 186], 00:31:38.655 | 30.00th=[ 192], 40.00th=[ 196], 50.00th=[ 200], 60.00th=[ 204], 00:31:38.655 | 70.00th=[ 208], 80.00th=[ 212], 90.00th=[ 219], 95.00th=[ 225], 00:31:38.655 | 99.00th=[ 258], 99.50th=[ 269], 99.90th=[ 297], 99.95th=[ 297], 00:31:38.655 | 99.99th=[ 297] 00:31:38.655 bw ( KiB/s): min= 4096, max= 4096, per=50.55%, avg=4096.00, stdev= 0.00, samples=1 00:31:38.655 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:31:38.655 lat (usec) : 250=94.39%, 500=1.50% 00:31:38.655 lat (msec) : 50=4.11% 00:31:38.655 cpu : usr=0.20%, sys=0.79%, ctx=536, majf=0, minf=1 00:31:38.655 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:38.655 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:38.655 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:38.655 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:38.655 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:38.655 job2: (groupid=0, jobs=1): err= 0: pid=1420898: Wed Nov 20 07:27:42 2024 00:31:38.655 read: IOPS=436, BW=1746KiB/s (1788kB/s)(1748KiB/1001msec) 00:31:38.655 slat (nsec): min=6863, max=39737, avg=8997.06, stdev=3997.15 00:31:38.655 clat (usec): min=212, max=41264, avg=2047.25, stdev=8311.54 00:31:38.655 lat (usec): min=220, max=41273, avg=2056.25, stdev=8314.48 00:31:38.655 clat percentiles (usec): 00:31:38.655 | 1.00th=[ 260], 5.00th=[ 265], 10.00th=[ 265], 20.00th=[ 269], 00:31:38.655 | 30.00th=[ 269], 40.00th=[ 269], 50.00th=[ 273], 60.00th=[ 273], 00:31:38.655 | 70.00th=[ 273], 80.00th=[ 277], 90.00th=[ 289], 95.00th=[ 506], 00:31:38.655 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:31:38.655 | 99.99th=[41157] 00:31:38.655 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:31:38.655 slat (nsec): min=10022, max=48452, 
avg=13781.67, stdev=4541.92 00:31:38.655 clat (usec): min=153, max=381, avg=178.69, stdev=20.28 00:31:38.655 lat (usec): min=169, max=407, avg=192.47, stdev=21.84 00:31:38.655 clat percentiles (usec): 00:31:38.655 | 1.00th=[ 157], 5.00th=[ 163], 10.00th=[ 165], 20.00th=[ 167], 00:31:38.655 | 30.00th=[ 169], 40.00th=[ 172], 50.00th=[ 176], 60.00th=[ 178], 00:31:38.655 | 70.00th=[ 182], 80.00th=[ 186], 90.00th=[ 196], 95.00th=[ 210], 00:31:38.655 | 99.00th=[ 243], 99.50th=[ 318], 99.90th=[ 383], 99.95th=[ 383], 00:31:38.655 | 99.99th=[ 383] 00:31:38.655 bw ( KiB/s): min= 4096, max= 4096, per=50.55%, avg=4096.00, stdev= 0.00, samples=1 00:31:38.655 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:31:38.655 lat (usec) : 250=53.64%, 500=44.05%, 750=0.32% 00:31:38.655 lat (msec) : 50=2.00% 00:31:38.655 cpu : usr=0.70%, sys=0.80%, ctx=950, majf=0, minf=1 00:31:38.655 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:38.655 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:38.655 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:38.655 issued rwts: total=437,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:38.655 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:38.655 job3: (groupid=0, jobs=1): err= 0: pid=1420904: Wed Nov 20 07:27:42 2024 00:31:38.655 read: IOPS=28, BW=116KiB/s (118kB/s)(116KiB/1004msec) 00:31:38.655 slat (nsec): min=7590, max=26388, avg=20037.31, stdev=6173.31 00:31:38.655 clat (usec): min=222, max=41307, avg=31161.67, stdev=17714.33 00:31:38.655 lat (usec): min=230, max=41317, avg=31181.71, stdev=17718.17 00:31:38.655 clat percentiles (usec): 00:31:38.655 | 1.00th=[ 223], 5.00th=[ 243], 10.00th=[ 247], 20.00th=[ 262], 00:31:38.655 | 30.00th=[40633], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:31:38.655 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:31:38.655 | 99.00th=[41157], 99.50th=[41157], 
99.90th=[41157], 99.95th=[41157], 00:31:38.655 | 99.99th=[41157] 00:31:38.655 write: IOPS=509, BW=2040KiB/s (2089kB/s)(2048KiB/1004msec); 0 zone resets 00:31:38.655 slat (nsec): min=10738, max=44111, avg=11901.71, stdev=2251.22 00:31:38.655 clat (usec): min=157, max=262, avg=178.54, stdev=11.93 00:31:38.655 lat (usec): min=173, max=275, avg=190.44, stdev=12.31 00:31:38.655 clat percentiles (usec): 00:31:38.655 | 1.00th=[ 163], 5.00th=[ 165], 10.00th=[ 167], 20.00th=[ 172], 00:31:38.655 | 30.00th=[ 174], 40.00th=[ 176], 50.00th=[ 178], 60.00th=[ 178], 00:31:38.655 | 70.00th=[ 182], 80.00th=[ 186], 90.00th=[ 192], 95.00th=[ 198], 00:31:38.655 | 99.00th=[ 221], 99.50th=[ 249], 99.90th=[ 265], 99.95th=[ 265], 00:31:38.655 | 99.99th=[ 265] 00:31:38.655 bw ( KiB/s): min= 4096, max= 4096, per=50.55%, avg=4096.00, stdev= 0.00, samples=1 00:31:38.655 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:31:38.656 lat (usec) : 250=95.01%, 500=0.74%, 750=0.18% 00:31:38.656 lat (msec) : 50=4.07% 00:31:38.656 cpu : usr=0.20%, sys=1.20%, ctx=542, majf=0, minf=1 00:31:38.656 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:38.656 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:38.656 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:38.656 issued rwts: total=29,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:38.656 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:38.656 00:31:38.656 Run status group 0 (all jobs): 00:31:38.656 READ: bw=2022KiB/s (2070kB/s), 87.0KiB/s-1746KiB/s (89.1kB/s-1788kB/s), io=2044KiB (2093kB), run=1001-1011msec 00:31:38.656 WRITE: bw=8103KiB/s (8297kB/s), 2026KiB/s-2046KiB/s (2074kB/s-2095kB/s), io=8192KiB (8389kB), run=1001-1011msec 00:31:38.656 00:31:38.656 Disk stats (read/write): 00:31:38.656 nvme0n1: ios=67/512, merge=0/0, ticks=754/99, in_queue=853, util=86.27% 00:31:38.656 nvme0n2: ios=67/512, merge=0/0, ticks=1611/101, in_queue=1712, 
util=89.11% 00:31:38.656 nvme0n3: ios=76/512, merge=0/0, ticks=936/86, in_queue=1022, util=93.20% 00:31:38.656 nvme0n4: ios=48/512, merge=0/0, ticks=1643/85, in_queue=1728, util=94.19% 00:31:38.656 07:27:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:31:38.656 [global] 00:31:38.656 thread=1 00:31:38.656 invalidate=1 00:31:38.656 rw=write 00:31:38.656 time_based=1 00:31:38.656 runtime=1 00:31:38.656 ioengine=libaio 00:31:38.656 direct=1 00:31:38.656 bs=4096 00:31:38.656 iodepth=128 00:31:38.656 norandommap=0 00:31:38.656 numjobs=1 00:31:38.656 00:31:38.656 verify_dump=1 00:31:38.656 verify_backlog=512 00:31:38.656 verify_state_save=0 00:31:38.656 do_verify=1 00:31:38.656 verify=crc32c-intel 00:31:38.656 [job0] 00:31:38.656 filename=/dev/nvme0n1 00:31:38.656 [job1] 00:31:38.656 filename=/dev/nvme0n2 00:31:38.656 [job2] 00:31:38.656 filename=/dev/nvme0n3 00:31:38.656 [job3] 00:31:38.656 filename=/dev/nvme0n4 00:31:38.656 Could not set queue depth (nvme0n1) 00:31:38.656 Could not set queue depth (nvme0n2) 00:31:38.656 Could not set queue depth (nvme0n3) 00:31:38.656 Could not set queue depth (nvme0n4) 00:31:38.656 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:38.656 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:38.656 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:38.656 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:38.656 fio-3.35 00:31:38.656 Starting 4 threads 00:31:40.028 00:31:40.028 job0: (groupid=0, jobs=1): err= 0: pid=1421276: Wed Nov 20 07:27:44 2024 00:31:40.028 read: IOPS=4536, BW=17.7MiB/s (18.6MB/s)(17.8MiB/1003msec) 00:31:40.028 slat (nsec): 
min=1172, max=13639k, avg=85990.75, stdev=632945.82 00:31:40.028 clat (usec): min=1952, max=36193, avg=11076.59, stdev=3986.66 00:31:40.028 lat (usec): min=3508, max=36217, avg=11162.59, stdev=4034.63 00:31:40.028 clat percentiles (usec): 00:31:40.028 | 1.00th=[ 5080], 5.00th=[ 7177], 10.00th=[ 7898], 20.00th=[ 8586], 00:31:40.028 | 30.00th=[ 9241], 40.00th=[ 9634], 50.00th=[ 9896], 60.00th=[10552], 00:31:40.028 | 70.00th=[11207], 80.00th=[12649], 90.00th=[15926], 95.00th=[19792], 00:31:40.028 | 99.00th=[25822], 99.50th=[27132], 99.90th=[31065], 99.95th=[31065], 00:31:40.028 | 99.99th=[36439] 00:31:40.028 write: IOPS=4594, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1003msec); 0 zone resets 00:31:40.028 slat (nsec): min=1912, max=62407k, avg=121240.10, stdev=1480424.39 00:31:40.028 clat (usec): min=219, max=143703, avg=15184.36, stdev=17036.49 00:31:40.028 lat (usec): min=232, max=143712, avg=15305.60, stdev=17159.76 00:31:40.028 clat percentiles (msec): 00:31:40.028 | 1.00th=[ 3], 5.00th=[ 6], 10.00th=[ 8], 20.00th=[ 9], 00:31:40.028 | 30.00th=[ 10], 40.00th=[ 10], 50.00th=[ 10], 60.00th=[ 10], 00:31:40.028 | 70.00th=[ 12], 80.00th=[ 17], 90.00th=[ 33], 95.00th=[ 50], 00:31:40.028 | 99.00th=[ 113], 99.50th=[ 144], 99.90th=[ 144], 99.95th=[ 144], 00:31:40.028 | 99.99th=[ 144] 00:31:40.028 bw ( KiB/s): min=12320, max=24544, per=28.68%, avg=18432.00, stdev=8643.67, samples=2 00:31:40.028 iops : min= 3080, max= 6136, avg=4608.00, stdev=2160.92, samples=2 00:31:40.028 lat (usec) : 250=0.01%, 750=0.11%, 1000=0.09% 00:31:40.028 lat (msec) : 2=0.09%, 4=0.85%, 10=56.37%, 20=32.10%, 50=8.99% 00:31:40.028 lat (msec) : 100=0.71%, 250=0.69% 00:31:40.028 cpu : usr=3.69%, sys=4.29%, ctx=346, majf=0, minf=1 00:31:40.028 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:31:40.028 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:40.028 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:40.028 issued rwts: 
total=4550,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:40.028 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:40.028 job1: (groupid=0, jobs=1): err= 0: pid=1421277: Wed Nov 20 07:27:44 2024 00:31:40.028 read: IOPS=3056, BW=11.9MiB/s (12.5MB/s)(12.5MiB/1046msec) 00:31:40.028 slat (nsec): min=1496, max=14311k, avg=145805.26, stdev=1009674.02 00:31:40.028 clat (usec): min=3695, max=71841, avg=18278.46, stdev=11902.33 00:31:40.028 lat (usec): min=3701, max=71846, avg=18424.27, stdev=11974.87 00:31:40.028 clat percentiles (usec): 00:31:40.028 | 1.00th=[ 8586], 5.00th=[ 9110], 10.00th=[10159], 20.00th=[11731], 00:31:40.028 | 30.00th=[13173], 40.00th=[14746], 50.00th=[15533], 60.00th=[15926], 00:31:40.028 | 70.00th=[17433], 80.00th=[19268], 90.00th=[26608], 95.00th=[50594], 00:31:40.028 | 99.00th=[68682], 99.50th=[70779], 99.90th=[71828], 99.95th=[71828], 00:31:40.028 | 99.99th=[71828] 00:31:40.028 write: IOPS=3426, BW=13.4MiB/s (14.0MB/s)(14.0MiB/1046msec); 0 zone resets 00:31:40.028 slat (usec): min=2, max=9964, avg=143.63, stdev=776.98 00:31:40.028 clat (usec): min=1500, max=71853, avg=20687.78, stdev=14613.39 00:31:40.028 lat (usec): min=1513, max=71857, avg=20831.41, stdev=14707.30 00:31:40.029 clat percentiles (usec): 00:31:40.029 | 1.00th=[ 3687], 5.00th=[ 7177], 10.00th=[ 8848], 20.00th=[10159], 00:31:40.029 | 30.00th=[10552], 40.00th=[12125], 50.00th=[13435], 60.00th=[16319], 00:31:40.029 | 70.00th=[21890], 80.00th=[38536], 90.00th=[45876], 95.00th=[50594], 00:31:40.029 | 99.00th=[55313], 99.50th=[55837], 99.90th=[67634], 99.95th=[71828], 00:31:40.029 | 99.99th=[71828] 00:31:40.029 bw ( KiB/s): min=12304, max=16344, per=22.29%, avg=14324.00, stdev=2856.71, samples=2 00:31:40.029 iops : min= 3076, max= 4086, avg=3581.00, stdev=714.18, samples=2 00:31:40.029 lat (msec) : 2=0.03%, 4=0.65%, 10=12.48%, 20=60.67%, 50=20.70% 00:31:40.029 lat (msec) : 100=5.47% 00:31:40.029 cpu : usr=2.39%, sys=4.69%, ctx=343, majf=0, minf=2 00:31:40.029 IO depths : 
1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:31:40.029 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:40.029 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:40.029 issued rwts: total=3197,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:40.029 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:40.029 job2: (groupid=0, jobs=1): err= 0: pid=1421278: Wed Nov 20 07:27:44 2024 00:31:40.029 read: IOPS=3573, BW=14.0MiB/s (14.6MB/s)(14.0MiB/1003msec) 00:31:40.029 slat (nsec): min=1158, max=25422k, avg=129726.63, stdev=1102385.73 00:31:40.029 clat (usec): min=2616, max=75824, avg=16618.75, stdev=11225.65 00:31:40.029 lat (usec): min=2624, max=75850, avg=16748.47, stdev=11326.75 00:31:40.029 clat percentiles (usec): 00:31:40.029 | 1.00th=[ 4686], 5.00th=[ 6718], 10.00th=[ 8225], 20.00th=[ 9765], 00:31:40.029 | 30.00th=[11338], 40.00th=[12518], 50.00th=[13435], 60.00th=[13829], 00:31:40.029 | 70.00th=[14353], 80.00th=[18744], 90.00th=[31851], 95.00th=[44303], 00:31:40.029 | 99.00th=[57934], 99.50th=[63177], 99.90th=[63177], 99.95th=[69731], 00:31:40.029 | 99.99th=[76022] 00:31:40.029 write: IOPS=3992, BW=15.6MiB/s (16.4MB/s)(15.6MiB/1003msec); 0 zone resets 00:31:40.029 slat (usec): min=2, max=14605, avg=126.67, stdev=874.41 00:31:40.029 clat (usec): min=475, max=54507, avg=16890.77, stdev=10129.74 00:31:40.029 lat (usec): min=2374, max=54511, avg=17017.44, stdev=10208.94 00:31:40.029 clat percentiles (usec): 00:31:40.029 | 1.00th=[ 3097], 5.00th=[ 5932], 10.00th=[ 8848], 20.00th=[10028], 00:31:40.029 | 30.00th=[10945], 40.00th=[12256], 50.00th=[13566], 60.00th=[15533], 00:31:40.029 | 70.00th=[18744], 80.00th=[21890], 90.00th=[29754], 95.00th=[43254], 00:31:40.029 | 99.00th=[51119], 99.50th=[53740], 99.90th=[54264], 99.95th=[54264], 00:31:40.029 | 99.99th=[54264] 00:31:40.029 bw ( KiB/s): min=14624, max=16384, per=24.13%, avg=15504.00, stdev=1244.51, samples=2 00:31:40.029 iops : min= 
3656, max= 4096, avg=3876.00, stdev=311.13, samples=2 00:31:40.029 lat (usec) : 500=0.01% 00:31:40.029 lat (msec) : 4=0.87%, 10=19.00%, 20=56.75%, 50=20.44%, 100=2.93% 00:31:40.029 cpu : usr=1.60%, sys=4.99%, ctx=272, majf=0, minf=1 00:31:40.029 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:31:40.029 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:40.029 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:40.029 issued rwts: total=3584,4004,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:40.029 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:40.029 job3: (groupid=0, jobs=1): err= 0: pid=1421279: Wed Nov 20 07:27:44 2024 00:31:40.029 read: IOPS=4394, BW=17.2MiB/s (18.0MB/s)(17.9MiB/1044msec) 00:31:40.029 slat (nsec): min=1179, max=12972k, avg=109834.19, stdev=681813.19 00:31:40.029 clat (usec): min=4600, max=65902, avg=15936.35, stdev=8930.80 00:31:40.029 lat (usec): min=4618, max=65909, avg=16046.18, stdev=8958.51 00:31:40.029 clat percentiles (usec): 00:31:40.029 | 1.00th=[ 5604], 5.00th=[ 7832], 10.00th=[10552], 20.00th=[11338], 00:31:40.029 | 30.00th=[12256], 40.00th=[12649], 50.00th=[13304], 60.00th=[14091], 00:31:40.029 | 70.00th=[14877], 80.00th=[16909], 90.00th=[26608], 95.00th=[30016], 00:31:40.029 | 99.00th=[58459], 99.50th=[65799], 99.90th=[65799], 99.95th=[65799], 00:31:40.029 | 99.99th=[65799] 00:31:40.029 write: IOPS=4413, BW=17.2MiB/s (18.1MB/s)(18.0MiB/1044msec); 0 zone resets 00:31:40.029 slat (usec): min=2, max=19915, avg=100.13, stdev=578.20 00:31:40.029 clat (usec): min=3299, max=30484, avg=12789.03, stdev=3735.01 00:31:40.029 lat (usec): min=3644, max=30496, avg=12889.16, stdev=3744.85 00:31:40.029 clat percentiles (usec): 00:31:40.029 | 1.00th=[ 5800], 5.00th=[ 7439], 10.00th=[ 9634], 20.00th=[10945], 00:31:40.029 | 30.00th=[11338], 40.00th=[11731], 50.00th=[12518], 60.00th=[12911], 00:31:40.029 | 70.00th=[13698], 80.00th=[14091], 90.00th=[15008], 
95.00th=[17695], 00:31:40.029 | 99.00th=[30016], 99.50th=[30278], 99.90th=[30540], 99.95th=[30540], 00:31:40.029 | 99.99th=[30540] 00:31:40.029 bw ( KiB/s): min=16384, max=20480, per=28.68%, avg=18432.00, stdev=2896.31, samples=2 00:31:40.029 iops : min= 4096, max= 5120, avg=4608.00, stdev=724.08, samples=2 00:31:40.029 lat (msec) : 4=0.04%, 10=9.91%, 20=79.69%, 50=9.34%, 100=1.02% 00:31:40.029 cpu : usr=3.36%, sys=4.31%, ctx=459, majf=0, minf=1 00:31:40.029 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:31:40.029 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:40.029 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:40.029 issued rwts: total=4588,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:40.029 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:40.029 00:31:40.029 Run status group 0 (all jobs): 00:31:40.029 READ: bw=59.4MiB/s (62.3MB/s), 11.9MiB/s-17.7MiB/s (12.5MB/s-18.6MB/s), io=62.2MiB (65.2MB), run=1003-1046msec 00:31:40.029 WRITE: bw=62.8MiB/s (65.8MB/s), 13.4MiB/s-17.9MiB/s (14.0MB/s-18.8MB/s), io=65.6MiB (68.8MB), run=1003-1046msec 00:31:40.029 00:31:40.029 Disk stats (read/write): 00:31:40.029 nvme0n1: ios=3562/3584, merge=0/0, ticks=24932/32741, in_queue=57673, util=97.70% 00:31:40.029 nvme0n2: ios=3025/3072, merge=0/0, ticks=48343/55014, in_queue=103357, util=98.77% 00:31:40.029 nvme0n3: ios=2826/3072, merge=0/0, ticks=26824/30115, in_queue=56939, util=98.41% 00:31:40.029 nvme0n4: ios=3872/4096, merge=0/0, ticks=22287/19571, in_queue=41858, util=99.57% 00:31:40.029 07:27:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:31:40.029 [global] 00:31:40.029 thread=1 00:31:40.029 invalidate=1 00:31:40.029 rw=randwrite 00:31:40.029 time_based=1 00:31:40.029 runtime=1 00:31:40.029 ioengine=libaio 00:31:40.029 
direct=1 00:31:40.029 bs=4096 00:31:40.029 iodepth=128 00:31:40.029 norandommap=0 00:31:40.029 numjobs=1 00:31:40.029 00:31:40.029 verify_dump=1 00:31:40.029 verify_backlog=512 00:31:40.029 verify_state_save=0 00:31:40.029 do_verify=1 00:31:40.029 verify=crc32c-intel 00:31:40.029 [job0] 00:31:40.029 filename=/dev/nvme0n1 00:31:40.029 [job1] 00:31:40.029 filename=/dev/nvme0n2 00:31:40.029 [job2] 00:31:40.029 filename=/dev/nvme0n3 00:31:40.029 [job3] 00:31:40.029 filename=/dev/nvme0n4 00:31:40.029 Could not set queue depth (nvme0n1) 00:31:40.029 Could not set queue depth (nvme0n2) 00:31:40.029 Could not set queue depth (nvme0n3) 00:31:40.029 Could not set queue depth (nvme0n4) 00:31:40.287 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:40.287 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:40.287 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:40.287 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:40.287 fio-3.35 00:31:40.287 Starting 4 threads 00:31:41.662 00:31:41.662 job0: (groupid=0, jobs=1): err= 0: pid=1421643: Wed Nov 20 07:27:45 2024 00:31:41.662 read: IOPS=3869, BW=15.1MiB/s (15.8MB/s)(15.2MiB/1006msec) 00:31:41.662 slat (nsec): min=1276, max=14920k, avg=97041.98, stdev=681845.02 00:31:41.662 clat (usec): min=4843, max=78924, avg=12887.01, stdev=8728.11 00:31:41.662 lat (usec): min=4851, max=83916, avg=12984.05, stdev=8792.00 00:31:41.662 clat percentiles (usec): 00:31:41.662 | 1.00th=[ 6521], 5.00th=[ 7963], 10.00th=[ 8356], 20.00th=[ 8979], 00:31:41.662 | 30.00th=[ 9372], 40.00th=[ 9896], 50.00th=[10159], 60.00th=[10552], 00:31:41.662 | 70.00th=[11469], 80.00th=[13566], 90.00th=[19530], 95.00th=[30540], 00:31:41.662 | 99.00th=[47973], 99.50th=[72877], 99.90th=[79168], 99.95th=[79168], 
00:31:41.662 | 99.99th=[79168] 00:31:41.662 write: IOPS=4071, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1006msec); 0 zone resets 00:31:41.662 slat (usec): min=2, max=13613, avg=133.81, stdev=773.99 00:31:41.662 clat (usec): min=880, max=115758, avg=18869.92, stdev=20089.00 00:31:41.662 lat (usec): min=890, max=115767, avg=19003.73, stdev=20207.38 00:31:41.662 clat percentiles (msec): 00:31:41.662 | 1.00th=[ 5], 5.00th=[ 6], 10.00th=[ 8], 20.00th=[ 9], 00:31:41.662 | 30.00th=[ 9], 40.00th=[ 11], 50.00th=[ 11], 60.00th=[ 11], 00:31:41.662 | 70.00th=[ 14], 80.00th=[ 29], 90.00th=[ 44], 95.00th=[ 69], 00:31:41.662 | 99.00th=[ 103], 99.50th=[ 107], 99.90th=[ 111], 99.95th=[ 116], 00:31:41.662 | 99.99th=[ 116] 00:31:41.662 bw ( KiB/s): min=12288, max=20480, per=25.38%, avg=16384.00, stdev=5792.62, samples=2 00:31:41.662 iops : min= 3072, max= 5120, avg=4096.00, stdev=1448.15, samples=2 00:31:41.662 lat (usec) : 1000=0.08% 00:31:41.662 lat (msec) : 2=0.16%, 4=0.10%, 10=41.12%, 20=42.01%, 50=12.32% 00:31:41.662 lat (msec) : 100=3.63%, 250=0.59% 00:31:41.662 cpu : usr=2.69%, sys=3.88%, ctx=435, majf=0, minf=1 00:31:41.662 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:31:41.662 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:41.662 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:41.662 issued rwts: total=3893,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:41.662 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:41.662 job1: (groupid=0, jobs=1): err= 0: pid=1421644: Wed Nov 20 07:27:45 2024 00:31:41.662 read: IOPS=3446, BW=13.5MiB/s (14.1MB/s)(13.5MiB/1005msec) 00:31:41.662 slat (nsec): min=1491, max=15714k, avg=116890.71, stdev=784643.32 00:31:41.662 clat (usec): min=2876, max=43726, avg=14578.76, stdev=6672.21 00:31:41.662 lat (usec): min=5797, max=50537, avg=14695.65, stdev=6739.87 00:31:41.662 clat percentiles (usec): 00:31:41.662 | 1.00th=[ 6521], 5.00th=[ 8291], 10.00th=[ 
9110], 20.00th=[10159], 00:31:41.662 | 30.00th=[10814], 40.00th=[11207], 50.00th=[12780], 60.00th=[13304], 00:31:41.662 | 70.00th=[14091], 80.00th=[19268], 90.00th=[23987], 95.00th=[28967], 00:31:41.662 | 99.00th=[39584], 99.50th=[40633], 99.90th=[43779], 99.95th=[43779], 00:31:41.662 | 99.99th=[43779] 00:31:41.662 write: IOPS=3566, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1005msec); 0 zone resets 00:31:41.662 slat (usec): min=2, max=23667, avg=158.05, stdev=950.00 00:31:41.662 clat (usec): min=2553, max=55191, avg=21445.37, stdev=13806.40 00:31:41.662 lat (usec): min=2558, max=55200, avg=21603.42, stdev=13902.14 00:31:41.662 clat percentiles (usec): 00:31:41.662 | 1.00th=[ 3458], 5.00th=[ 6718], 10.00th=[ 8455], 20.00th=[ 9896], 00:31:41.662 | 30.00th=[11076], 40.00th=[12256], 50.00th=[13566], 60.00th=[21365], 00:31:41.662 | 70.00th=[30802], 80.00th=[34866], 90.00th=[43779], 95.00th=[47449], 00:31:41.662 | 99.00th=[52167], 99.50th=[53740], 99.90th=[55313], 99.95th=[55313], 00:31:41.662 | 99.99th=[55313] 00:31:41.662 bw ( KiB/s): min=11920, max=16752, per=22.21%, avg=14336.00, stdev=3416.74, samples=2 00:31:41.662 iops : min= 2980, max= 4188, avg=3584.00, stdev=854.18, samples=2 00:31:41.662 lat (msec) : 4=1.19%, 10=18.16%, 20=49.96%, 50=29.61%, 100=1.08% 00:31:41.662 cpu : usr=2.49%, sys=4.48%, ctx=371, majf=0, minf=1 00:31:41.662 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:31:41.662 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:41.662 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:41.662 issued rwts: total=3464,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:41.662 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:41.662 job2: (groupid=0, jobs=1): err= 0: pid=1421647: Wed Nov 20 07:27:45 2024 00:31:41.662 read: IOPS=5576, BW=21.8MiB/s (22.8MB/s)(22.0MiB/1010msec) 00:31:41.662 slat (nsec): min=1121, max=10884k, avg=66388.72, stdev=606005.65 00:31:41.662 clat 
(usec): min=1602, max=67259, avg=11206.23, stdev=5159.11 00:31:41.662 lat (usec): min=1631, max=67264, avg=11272.62, stdev=5199.53 00:31:41.662 clat percentiles (usec): 00:31:41.662 | 1.00th=[ 3097], 5.00th=[ 4178], 10.00th=[ 6849], 20.00th=[ 8717], 00:31:41.662 | 30.00th=[ 9372], 40.00th=[10028], 50.00th=[10290], 60.00th=[10814], 00:31:41.662 | 70.00th=[11469], 80.00th=[13566], 90.00th=[16712], 95.00th=[19268], 00:31:41.662 | 99.00th=[27132], 99.50th=[35390], 99.90th=[60031], 99.95th=[60031], 00:31:41.662 | 99.99th=[67634] 00:31:41.662 write: IOPS=5748, BW=22.5MiB/s (23.5MB/s)(22.7MiB/1010msec); 0 zone resets 00:31:41.662 slat (usec): min=2, max=11088, avg=71.16, stdev=568.90 00:31:41.662 clat (usec): min=541, max=59118, avg=11220.02, stdev=7305.27 00:31:41.662 lat (usec): min=548, max=59125, avg=11291.18, stdev=7351.28 00:31:41.662 clat percentiles (usec): 00:31:41.662 | 1.00th=[ 2737], 5.00th=[ 5276], 10.00th=[ 6652], 20.00th=[ 8291], 00:31:41.662 | 30.00th=[ 8979], 40.00th=[ 9241], 50.00th=[ 9503], 60.00th=[10028], 00:31:41.662 | 70.00th=[10945], 80.00th=[12256], 90.00th=[15401], 95.00th=[20317], 00:31:41.662 | 99.00th=[54264], 99.50th=[58459], 99.90th=[58983], 99.95th=[58983], 00:31:41.662 | 99.99th=[58983] 00:31:41.662 bw ( KiB/s): min=20856, max=24576, per=35.19%, avg=22716.00, stdev=2630.44, samples=2 00:31:41.662 iops : min= 5214, max= 6144, avg=5679.00, stdev=657.61, samples=2 00:31:41.662 lat (usec) : 750=0.05% 00:31:41.663 lat (msec) : 2=0.57%, 4=3.23%, 10=46.60%, 20=45.19%, 50=3.51% 00:31:41.663 lat (msec) : 100=0.85% 00:31:41.663 cpu : usr=3.57%, sys=7.33%, ctx=444, majf=0, minf=2 00:31:41.663 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:31:41.663 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:41.663 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:41.663 issued rwts: total=5632,5806,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:41.663 latency : target=0, window=0, 
percentile=100.00%, depth=128 00:31:41.663 job3: (groupid=0, jobs=1): err= 0: pid=1421648: Wed Nov 20 07:27:45 2024 00:31:41.663 read: IOPS=2539, BW=9.92MiB/s (10.4MB/s)(10.0MiB/1008msec) 00:31:41.663 slat (nsec): min=1815, max=15745k, avg=144439.47, stdev=927592.21 00:31:41.663 clat (usec): min=7926, max=91287, avg=16016.71, stdev=9210.26 00:31:41.663 lat (usec): min=7932, max=91296, avg=16161.15, stdev=9367.56 00:31:41.663 clat percentiles (usec): 00:31:41.663 | 1.00th=[ 9372], 5.00th=[10683], 10.00th=[11076], 20.00th=[11338], 00:31:41.663 | 30.00th=[11731], 40.00th=[12518], 50.00th=[13042], 60.00th=[13435], 00:31:41.663 | 70.00th=[14615], 80.00th=[18220], 90.00th=[25560], 95.00th=[30016], 00:31:41.663 | 99.00th=[58459], 99.50th=[80217], 99.90th=[91751], 99.95th=[91751], 00:31:41.663 | 99.99th=[91751] 00:31:41.663 write: IOPS=2792, BW=10.9MiB/s (11.4MB/s)(11.0MiB/1008msec); 0 zone resets 00:31:41.663 slat (usec): min=2, max=21651, avg=218.19, stdev=1092.05 00:31:41.663 clat (msec): min=5, max=105, avg=30.74, stdev=19.12 00:31:41.663 lat (msec): min=7, max=105, avg=30.96, stdev=19.23 00:31:41.663 clat percentiles (msec): 00:31:41.663 | 1.00th=[ 11], 5.00th=[ 12], 10.00th=[ 13], 20.00th=[ 15], 00:31:41.663 | 30.00th=[ 19], 40.00th=[ 23], 50.00th=[ 27], 60.00th=[ 29], 00:31:41.663 | 70.00th=[ 35], 80.00th=[ 43], 90.00th=[ 58], 95.00th=[ 72], 00:31:41.663 | 99.00th=[ 96], 99.50th=[ 103], 99.90th=[ 106], 99.95th=[ 106], 00:31:41.663 | 99.99th=[ 106] 00:31:41.663 bw ( KiB/s): min=10496, max=11000, per=16.65%, avg=10748.00, stdev=356.38, samples=2 00:31:41.663 iops : min= 2624, max= 2750, avg=2687.00, stdev=89.10, samples=2 00:31:41.663 lat (msec) : 10=1.47%, 20=55.68%, 50=35.20%, 100=7.37%, 250=0.28% 00:31:41.663 cpu : usr=2.78%, sys=3.28%, ctx=315, majf=0, minf=1 00:31:41.663 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:31:41.663 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:41.663 complete : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:41.663 issued rwts: total=2560,2815,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:41.663 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:41.663 00:31:41.663 Run status group 0 (all jobs): 00:31:41.663 READ: bw=60.1MiB/s (63.1MB/s), 9.92MiB/s-21.8MiB/s (10.4MB/s-22.8MB/s), io=60.7MiB (63.7MB), run=1005-1010msec 00:31:41.663 WRITE: bw=63.0MiB/s (66.1MB/s), 10.9MiB/s-22.5MiB/s (11.4MB/s-23.5MB/s), io=63.7MiB (66.8MB), run=1005-1010msec 00:31:41.663 00:31:41.663 Disk stats (read/write): 00:31:41.663 nvme0n1: ios=3094/3199, merge=0/0, ticks=15149/41836, in_queue=56985, util=95.99% 00:31:41.663 nvme0n2: ios=2969/3072, merge=0/0, ticks=23919/39006, in_queue=62925, util=97.77% 00:31:41.663 nvme0n3: ios=5152/5134, merge=0/0, ticks=49470/40158, in_queue=89628, util=96.67% 00:31:41.663 nvme0n4: ios=2105/2407, merge=0/0, ticks=18174/34296, in_queue=52470, util=98.22% 00:31:41.663 07:27:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:31:41.663 07:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=1421878 00:31:41.663 07:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:31:41.663 07:27:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:31:41.663 [global] 00:31:41.663 thread=1 00:31:41.663 invalidate=1 00:31:41.663 rw=read 00:31:41.663 time_based=1 00:31:41.663 runtime=10 00:31:41.663 ioengine=libaio 00:31:41.663 direct=1 00:31:41.663 bs=4096 00:31:41.663 iodepth=1 00:31:41.663 norandommap=1 00:31:41.663 numjobs=1 00:31:41.663 00:31:41.663 [job0] 00:31:41.663 filename=/dev/nvme0n1 00:31:41.663 [job1] 00:31:41.663 filename=/dev/nvme0n2 00:31:41.663 [job2] 00:31:41.663 filename=/dev/nvme0n3 00:31:41.663 [job3] 00:31:41.663 
filename=/dev/nvme0n4 00:31:41.663 Could not set queue depth (nvme0n1) 00:31:41.663 Could not set queue depth (nvme0n2) 00:31:41.663 Could not set queue depth (nvme0n3) 00:31:41.663 Could not set queue depth (nvme0n4) 00:31:41.922 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:41.922 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:41.922 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:41.922 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:41.922 fio-3.35 00:31:41.922 Starting 4 threads 00:31:45.213 07:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:31:45.213 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=40394752, buflen=4096 00:31:45.213 fio: pid=1422023, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:31:45.213 07:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:31:45.213 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=46895104, buflen=4096 00:31:45.213 fio: pid=1422022, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:31:45.214 07:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:31:45.214 07:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:31:45.214 fio: io_u error on file /dev/nvme0n1: Operation not supported: 
read offset=3043328, buflen=4096 00:31:45.214 fio: pid=1422020, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:31:45.214 07:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:31:45.214 07:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:31:45.476 07:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:31:45.476 07:27:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:31:45.476 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=23232512, buflen=4096 00:31:45.476 fio: pid=1422021, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:31:45.476 00:31:45.476 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1422020: Wed Nov 20 07:27:49 2024 00:31:45.476 read: IOPS=242, BW=967KiB/s (990kB/s)(2972KiB/3073msec) 00:31:45.476 slat (usec): min=2, max=13198, avg=52.72, stdev=710.23 00:31:45.476 clat (usec): min=182, max=42911, avg=4052.99, stdev=11854.06 00:31:45.476 lat (usec): min=189, max=42933, avg=4105.77, stdev=11862.49 00:31:45.476 clat percentiles (usec): 00:31:45.476 | 1.00th=[ 190], 5.00th=[ 194], 10.00th=[ 196], 20.00th=[ 200], 00:31:45.476 | 30.00th=[ 231], 40.00th=[ 243], 50.00th=[ 247], 60.00th=[ 249], 00:31:45.476 | 70.00th=[ 253], 80.00th=[ 260], 90.00th=[ 392], 95.00th=[40633], 00:31:45.476 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:31:45.476 | 99.99th=[42730] 00:31:45.476 bw ( KiB/s): min= 168, max= 4086, per=2.75%, 
avg=917.00, stdev=1561.16, samples=6 00:31:45.476 iops : min= 42, max= 1021, avg=229.17, stdev=390.09, samples=6 00:31:45.476 lat (usec) : 250=62.10%, 500=28.23%, 1000=0.13% 00:31:45.476 lat (msec) : 50=9.41% 00:31:45.476 cpu : usr=0.03%, sys=0.26%, ctx=749, majf=0, minf=1 00:31:45.476 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:45.477 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:45.477 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:45.477 issued rwts: total=744,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:45.477 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:45.477 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1422021: Wed Nov 20 07:27:49 2024 00:31:45.477 read: IOPS=1708, BW=6834KiB/s (6998kB/s)(22.2MiB/3320msec) 00:31:45.477 slat (usec): min=6, max=11652, avg=13.79, stdev=208.08 00:31:45.477 clat (usec): min=183, max=42179, avg=566.06, stdev=3585.26 00:31:45.477 lat (usec): min=192, max=52855, avg=579.85, stdev=3631.11 00:31:45.477 clat percentiles (usec): 00:31:45.477 | 1.00th=[ 190], 5.00th=[ 198], 10.00th=[ 206], 20.00th=[ 217], 00:31:45.477 | 30.00th=[ 225], 40.00th=[ 235], 50.00th=[ 241], 60.00th=[ 247], 00:31:45.477 | 70.00th=[ 260], 80.00th=[ 285], 90.00th=[ 314], 95.00th=[ 326], 00:31:45.477 | 99.00th=[ 486], 99.50th=[41157], 99.90th=[41157], 99.95th=[42206], 00:31:45.477 | 99.99th=[42206] 00:31:45.477 bw ( KiB/s): min= 104, max=15560, per=22.56%, avg=7537.17, stdev=7849.63, samples=6 00:31:45.477 iops : min= 26, max= 3890, avg=1884.17, stdev=1962.53, samples=6 00:31:45.477 lat (usec) : 250=63.37%, 500=35.71%, 750=0.12% 00:31:45.477 lat (msec) : 50=0.78% 00:31:45.477 cpu : usr=0.66%, sys=1.78%, ctx=5677, majf=0, minf=1 00:31:45.477 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:45.477 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:31:45.477 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:45.477 issued rwts: total=5673,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:45.477 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:45.477 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1422022: Wed Nov 20 07:27:49 2024 00:31:45.477 read: IOPS=3985, BW=15.6MiB/s (16.3MB/s)(44.7MiB/2873msec) 00:31:45.477 slat (nsec): min=6067, max=33161, avg=7400.02, stdev=840.29 00:31:45.477 clat (usec): min=192, max=570, avg=240.72, stdev=29.50 00:31:45.477 lat (usec): min=199, max=603, avg=248.12, stdev=29.55 00:31:45.477 clat percentiles (usec): 00:31:45.477 | 1.00th=[ 198], 5.00th=[ 202], 10.00th=[ 208], 20.00th=[ 221], 00:31:45.477 | 30.00th=[ 225], 40.00th=[ 231], 50.00th=[ 237], 60.00th=[ 243], 00:31:45.477 | 70.00th=[ 249], 80.00th=[ 253], 90.00th=[ 277], 95.00th=[ 310], 00:31:45.477 | 99.00th=[ 326], 99.50th=[ 330], 99.90th=[ 457], 99.95th=[ 490], 00:31:45.477 | 99.99th=[ 506] 00:31:45.477 bw ( KiB/s): min=15296, max=17064, per=47.30%, avg=15800.00, stdev=719.00, samples=5 00:31:45.477 iops : min= 3824, max= 4266, avg=3950.00, stdev=179.75, samples=5 00:31:45.477 lat (usec) : 250=73.96%, 500=26.00%, 750=0.03% 00:31:45.477 cpu : usr=1.01%, sys=3.55%, ctx=11450, majf=0, minf=2 00:31:45.477 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:45.477 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:45.477 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:45.477 issued rwts: total=11450,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:45.477 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:45.477 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1422023: Wed Nov 20 07:27:49 2024 00:31:45.477 read: IOPS=3688, BW=14.4MiB/s (15.1MB/s)(38.5MiB/2674msec) 
00:31:45.477 slat (nsec): min=3216, max=32647, avg=7593.49, stdev=958.34 00:31:45.477 clat (usec): min=198, max=40704, avg=260.32, stdev=407.55 00:31:45.477 lat (usec): min=202, max=40711, avg=267.91, stdev=407.56 00:31:45.477 clat percentiles (usec): 00:31:45.477 | 1.00th=[ 223], 5.00th=[ 241], 10.00th=[ 243], 20.00th=[ 247], 00:31:45.477 | 30.00th=[ 251], 40.00th=[ 253], 50.00th=[ 255], 60.00th=[ 258], 00:31:45.477 | 70.00th=[ 262], 80.00th=[ 265], 90.00th=[ 273], 95.00th=[ 277], 00:31:45.477 | 99.00th=[ 289], 99.50th=[ 293], 99.90th=[ 433], 99.95th=[ 529], 00:31:45.477 | 99.99th=[40633] 00:31:45.477 bw ( KiB/s): min=13840, max=15336, per=44.56%, avg=14886.40, stdev=596.13, samples=5 00:31:45.477 iops : min= 3460, max= 3834, avg=3721.60, stdev=149.03, samples=5 00:31:45.477 lat (usec) : 250=30.02%, 500=69.90%, 750=0.06% 00:31:45.477 lat (msec) : 50=0.01% 00:31:45.477 cpu : usr=1.16%, sys=3.18%, ctx=9865, majf=0, minf=2 00:31:45.477 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:45.477 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:45.477 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:45.477 issued rwts: total=9863,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:45.477 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:45.477 00:31:45.477 Run status group 0 (all jobs): 00:31:45.477 READ: bw=32.6MiB/s (34.2MB/s), 967KiB/s-15.6MiB/s (990kB/s-16.3MB/s), io=108MiB (114MB), run=2674-3320msec 00:31:45.477 00:31:45.477 Disk stats (read/write): 00:31:45.477 nvme0n1: ios=743/0, merge=0/0, ticks=3005/0, in_queue=3005, util=93.31% 00:31:45.477 nvme0n2: ios=5702/0, merge=0/0, ticks=3349/0, in_queue=3349, util=99.11% 00:31:45.477 nvme0n3: ios=11247/0, merge=0/0, ticks=2662/0, in_queue=2662, util=96.20% 00:31:45.477 nvme0n4: ios=9531/0, merge=0/0, ticks=2599/0, in_queue=2599, util=99.96% 00:31:45.735 07:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:31:45.735 07:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:31:45.735 07:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:31:45.735 07:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:31:45.993 07:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:31:45.993 07:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:31:46.251 07:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:31:46.251 07:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:31:46.509 07:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:31:46.509 07:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 1421878 00:31:46.509 07:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:31:46.509 07:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:31:46.509 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:31:46.509 
07:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:31:46.509 07:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1221 -- # local i=0 00:31:46.509 07:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:31:46.509 07:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:31:46.509 07:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:31:46.509 07:27:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:31:46.509 07:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1233 -- # return 0 00:31:46.509 07:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:31:46.509 07:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:31:46.509 nvmf hotplug test: fio failed as expected 00:31:46.509 07:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:46.767 07:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:31:46.767 07:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:31:46.767 07:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:31:46.767 07:27:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:31:46.767 07:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:31:46.767 07:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:46.767 07:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:31:46.767 07:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:46.767 07:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:31:46.767 07:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:46.767 07:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:46.767 rmmod nvme_tcp 00:31:46.767 rmmod nvme_fabrics 00:31:46.767 rmmod nvme_keyring 00:31:46.767 07:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:46.767 07:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:31:46.767 07:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:31:46.767 07:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 1419194 ']' 00:31:46.767 07:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 1419194 00:31:46.767 07:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@952 -- # '[' -z 1419194 ']' 00:31:46.767 07:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@956 -- # kill -0 1419194 00:31:46.767 07:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target 
-- common/autotest_common.sh@957 -- # uname 00:31:46.767 07:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:31:46.767 07:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1419194 00:31:47.026 07:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:31:47.026 07:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:31:47.026 07:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1419194' 00:31:47.026 killing process with pid 1419194 00:31:47.026 07:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@971 -- # kill 1419194 00:31:47.026 07:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@976 -- # wait 1419194 00:31:47.026 07:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:47.026 07:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:47.026 07:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:47.026 07:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:31:47.026 07:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:31:47.026 07:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:47.026 07:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:31:47.026 07:27:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:47.026 07:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:47.026 07:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:47.026 07:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:47.026 07:27:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:49.561 07:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:49.561 00:31:49.561 real 0m25.944s 00:31:49.561 user 1m29.570s 00:31:49.561 sys 0m11.254s 00:31:49.561 07:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1128 -- # xtrace_disable 00:31:49.561 07:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:31:49.561 ************************************ 00:31:49.561 END TEST nvmf_fio_target 00:31:49.561 ************************************ 00:31:49.561 07:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:31:49.561 07:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:31:49.561 07:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:31:49.561 07:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:49.561 ************************************ 00:31:49.561 START TEST nvmf_bdevio 00:31:49.561 
************************************ 00:31:49.561 07:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:31:49.561 * Looking for test storage... 00:31:49.561 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:49.561 07:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:31:49.561 07:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lcov --version 00:31:49.561 07:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:31:49.561 07:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:31:49.561 07:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:49.561 07:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:49.561 07:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:49.561 07:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:31:49.561 07:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:31:49.561 07:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:31:49.561 07:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:31:49.561 07:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:31:49.561 07:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 
00:31:49.561 07:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:31:49.561 07:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:49.561 07:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:31:49.561 07:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:31:49.561 07:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:49.561 07:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:49.561 07:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:31:49.561 07:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:31:49.561 07:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:49.561 07:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:31:49.561 07:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:31:49.561 07:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:31:49.561 07:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:31:49.561 07:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:49.561 07:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:31:49.561 07:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:31:49.561 07:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:49.561 07:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:49.561 07:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:31:49.561 07:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:49.561 07:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:31:49.561 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:49.561 --rc genhtml_branch_coverage=1 00:31:49.561 --rc genhtml_function_coverage=1 00:31:49.561 --rc genhtml_legend=1 00:31:49.561 --rc geninfo_all_blocks=1 00:31:49.561 --rc geninfo_unexecuted_blocks=1 00:31:49.561 00:31:49.561 ' 00:31:49.561 07:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:31:49.561 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:49.561 --rc genhtml_branch_coverage=1 00:31:49.561 --rc genhtml_function_coverage=1 00:31:49.561 --rc genhtml_legend=1 00:31:49.561 --rc geninfo_all_blocks=1 00:31:49.561 --rc geninfo_unexecuted_blocks=1 00:31:49.561 00:31:49.561 ' 00:31:49.561 07:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:31:49.561 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:49.561 --rc genhtml_branch_coverage=1 00:31:49.561 --rc genhtml_function_coverage=1 00:31:49.561 --rc genhtml_legend=1 00:31:49.561 --rc geninfo_all_blocks=1 00:31:49.561 --rc geninfo_unexecuted_blocks=1 00:31:49.561 00:31:49.561 ' 00:31:49.562 07:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:31:49.562 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:31:49.562 --rc genhtml_branch_coverage=1 00:31:49.562 --rc genhtml_function_coverage=1 00:31:49.562 --rc genhtml_legend=1 00:31:49.562 --rc geninfo_all_blocks=1 00:31:49.562 --rc geninfo_unexecuted_blocks=1 00:31:49.562 00:31:49.562 ' 00:31:49.562 07:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:49.562 07:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:31:49.562 07:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:49.562 07:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:49.562 07:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:49.562 07:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:49.562 07:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:49.562 07:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:49.562 07:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:49.562 07:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:49.562 07:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:49.562 07:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:49.562 07:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:31:49.562 07:27:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:31:49.562 07:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:49.562 07:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:49.562 07:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:49.562 07:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:49.562 07:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:49.562 07:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:31:49.562 07:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:49.562 07:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:49.562 07:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:49.562 07:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:49.562 07:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:49.562 07:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:49.562 07:27:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:31:49.562 07:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:49.562 07:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:31:49.562 07:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:49.562 07:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:49.562 07:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:49.562 07:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:49.562 07:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:49.562 07:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:49.562 07:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:49.562 07:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:49.562 07:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 
-- # '[' 0 -eq 1 ']' 00:31:49.562 07:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:49.562 07:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:49.562 07:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:49.562 07:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:31:49.562 07:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:49.562 07:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:49.562 07:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:49.562 07:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:49.562 07:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:49.562 07:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:49.562 07:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:49.562 07:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:49.562 07:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:49.562 07:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:49.562 07:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:31:49.562 07:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
common/autotest_common.sh@10 -- # set +x 00:31:56.131 07:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:56.131 07:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:31:56.131 07:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:56.131 07:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:56.131 07:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:56.131 07:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:56.131 07:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:56.131 07:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:31:56.132 07:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:56.132 07:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:31:56.132 07:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:31:56.132 07:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:31:56.132 07:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:31:56.132 07:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:31:56.132 07:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:31:56.132 07:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:56.132 07:27:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:56.132 07:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:56.132 07:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:56.132 07:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:56.132 07:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:56.132 07:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:56.132 07:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:56.132 07:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:56.132 07:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:56.132 07:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:56.132 07:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:56.132 07:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:56.132 07:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:56.132 07:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:56.132 07:27:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:56.132 07:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:56.132 07:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:56.132 07:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:56.132 07:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:31:56.132 Found 0000:86:00.0 (0x8086 - 0x159b) 00:31:56.132 07:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:56.132 07:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:56.132 07:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:56.132 07:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:56.132 07:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:56.132 07:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:56.132 07:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:31:56.132 Found 0000:86:00.1 (0x8086 - 0x159b) 00:31:56.132 07:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:56.132 07:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:56.132 07:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:31:56.132 07:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:56.132 07:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:56.132 07:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:56.132 07:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:56.132 07:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:56.132 07:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:56.132 07:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:56.132 07:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:56.132 07:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:56.132 07:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:56.132 07:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:56.132 07:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:56.132 07:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:31:56.132 Found net devices under 0000:86:00.0: cvl_0_0 00:31:56.132 07:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:56.132 07:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 
00:31:56.132 07:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:56.132 07:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:56.132 07:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:56.132 07:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:56.132 07:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:56.132 07:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:56.132 07:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:31:56.132 Found net devices under 0000:86:00.1: cvl_0_1 00:31:56.132 07:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:56.132 07:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:56.132 07:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:31:56.132 07:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:56.132 07:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:56.132 07:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:56.132 07:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:56.132 07:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:56.132 
07:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:56.132 07:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:56.132 07:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:56.132 07:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:56.132 07:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:56.132 07:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:56.132 07:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:56.132 07:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:56.132 07:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:56.132 07:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:56.132 07:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:56.132 07:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:56.132 07:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:56.132 07:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:56.132 07:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr 
add 10.0.0.2/24 dev cvl_0_0 00:31:56.132 07:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:56.132 07:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:56.132 07:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:56.132 07:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:56.132 07:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:56.132 07:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:56.132 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:56.132 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.439 ms 00:31:56.132 00:31:56.132 --- 10.0.0.2 ping statistics --- 00:31:56.132 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:56.132 rtt min/avg/max/mdev = 0.439/0.439/0.439/0.000 ms 00:31:56.132 07:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:56.132 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:56.132 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.218 ms 00:31:56.132 00:31:56.132 --- 10.0.0.1 ping statistics --- 00:31:56.132 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:56.132 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:31:56.133 07:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:56.133 07:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:31:56.133 07:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:56.133 07:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:56.133 07:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:56.133 07:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:56.133 07:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:56.133 07:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:56.133 07:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:56.133 07:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:31:56.133 07:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:56.133 07:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:56.133 07:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:56.133 07:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@509 -- # nvmfpid=1426257 00:31:56.133 07:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 1426257 00:31:56.133 07:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:31:56.133 07:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@833 -- # '[' -z 1426257 ']' 00:31:56.133 07:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:56.133 07:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@838 -- # local max_retries=100 00:31:56.133 07:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:56.133 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:56.133 07:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@842 -- # xtrace_disable 00:31:56.133 07:27:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:56.133 [2024-11-20 07:27:59.810582] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:56.133 [2024-11-20 07:27:59.811496] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 
00:31:56.133 [2024-11-20 07:27:59.811530] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:56.133 [2024-11-20 07:27:59.884607] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:56.133 [2024-11-20 07:27:59.942515] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:56.133 [2024-11-20 07:27:59.942558] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:56.133 [2024-11-20 07:27:59.942571] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:56.133 [2024-11-20 07:27:59.942581] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:56.133 [2024-11-20 07:27:59.942594] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:56.133 [2024-11-20 07:27:59.944826] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:31:56.133 [2024-11-20 07:27:59.944936] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:31:56.133 [2024-11-20 07:27:59.945042] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:31:56.133 [2024-11-20 07:27:59.945046] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:56.133 [2024-11-20 07:28:00.029318] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:56.133 [2024-11-20 07:28:00.030157] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:31:56.133 [2024-11-20 07:28:00.030286] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:31:56.133 [2024-11-20 07:28:00.030633] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:31:56.133 [2024-11-20 07:28:00.030691] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:31:56.133 07:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:31:56.133 07:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@866 -- # return 0 00:31:56.133 07:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:56.133 07:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:56.133 07:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:56.133 07:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:56.133 07:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:56.133 07:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:56.133 07:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:56.133 [2024-11-20 07:28:00.097915] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:56.133 07:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:56.133 07:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:31:56.133 07:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:31:56.133 07:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:56.133 Malloc0 00:31:56.133 07:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:56.133 07:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:56.133 07:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:56.133 07:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:56.133 07:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:56.133 07:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:56.133 07:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:56.133 07:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:56.133 07:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:56.133 07:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:56.133 07:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:56.133 07:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:56.133 [2024-11-20 07:28:00.182104] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
00:31:56.133 07:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:56.133 07:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:31:56.133 07:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:31:56.133 07:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:31:56.133 07:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:31:56.133 07:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:56.133 07:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:56.133 { 00:31:56.133 "params": { 00:31:56.133 "name": "Nvme$subsystem", 00:31:56.133 "trtype": "$TEST_TRANSPORT", 00:31:56.133 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:56.133 "adrfam": "ipv4", 00:31:56.133 "trsvcid": "$NVMF_PORT", 00:31:56.133 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:56.133 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:56.133 "hdgst": ${hdgst:-false}, 00:31:56.133 "ddgst": ${ddgst:-false} 00:31:56.133 }, 00:31:56.133 "method": "bdev_nvme_attach_controller" 00:31:56.133 } 00:31:56.133 EOF 00:31:56.133 )") 00:31:56.133 07:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:31:56.133 07:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
00:31:56.133 07:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:31:56.133 07:28:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:56.133 "params": { 00:31:56.133 "name": "Nvme1", 00:31:56.133 "trtype": "tcp", 00:31:56.133 "traddr": "10.0.0.2", 00:31:56.133 "adrfam": "ipv4", 00:31:56.133 "trsvcid": "4420", 00:31:56.133 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:56.133 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:56.133 "hdgst": false, 00:31:56.133 "ddgst": false 00:31:56.133 }, 00:31:56.133 "method": "bdev_nvme_attach_controller" 00:31:56.133 }' 00:31:56.133 [2024-11-20 07:28:00.232511] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 00:31:56.133 [2024-11-20 07:28:00.232563] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1426283 ] 00:31:56.133 [2024-11-20 07:28:00.308704] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:31:56.133 [2024-11-20 07:28:00.353310] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:56.133 [2024-11-20 07:28:00.353419] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:56.133 [2024-11-20 07:28:00.353419] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:56.134 I/O targets: 00:31:56.134 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:31:56.134 00:31:56.134 00:31:56.134 CUnit - A unit testing framework for C - Version 2.1-3 00:31:56.134 http://cunit.sourceforge.net/ 00:31:56.134 00:31:56.134 00:31:56.134 Suite: bdevio tests on: Nvme1n1 00:31:56.134 Test: blockdev write read block ...passed 00:31:56.134 Test: blockdev write zeroes read block ...passed 00:31:56.134 Test: blockdev write zeroes read no split ...passed 00:31:56.134 Test: blockdev 
write zeroes read split ...passed 00:31:56.134 Test: blockdev write zeroes read split partial ...passed 00:31:56.134 Test: blockdev reset ...[2024-11-20 07:28:00.655490] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:31:56.134 [2024-11-20 07:28:00.655554] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d4340 (9): Bad file descriptor 00:31:56.134 [2024-11-20 07:28:00.666887] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:31:56.134 passed 00:31:56.391 Test: blockdev write read 8 blocks ...passed 00:31:56.391 Test: blockdev write read size > 128k ...passed 00:31:56.391 Test: blockdev write read invalid size ...passed 00:31:56.391 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:31:56.391 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:31:56.391 Test: blockdev write read max offset ...passed 00:31:56.391 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:31:56.391 Test: blockdev writev readv 8 blocks ...passed 00:31:56.391 Test: blockdev writev readv 30 x 1block ...passed 00:31:56.391 Test: blockdev writev readv block ...passed 00:31:56.391 Test: blockdev writev readv size > 128k ...passed 00:31:56.391 Test: blockdev writev readv size > 128k in two iovs ...passed 00:31:56.391 Test: blockdev comparev and writev ...[2024-11-20 07:28:00.921995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:56.391 [2024-11-20 07:28:00.922021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:56.391 [2024-11-20 07:28:00.922035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:56.391 
[2024-11-20 07:28:00.922043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:56.391 [2024-11-20 07:28:00.922327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:56.391 [2024-11-20 07:28:00.922337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:31:56.391 [2024-11-20 07:28:00.922349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:56.391 [2024-11-20 07:28:00.922356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:31:56.391 [2024-11-20 07:28:00.922650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:56.391 [2024-11-20 07:28:00.922661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:31:56.391 [2024-11-20 07:28:00.922674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:56.391 [2024-11-20 07:28:00.922681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:31:56.391 [2024-11-20 07:28:00.922981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:56.391 [2024-11-20 07:28:00.922992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:31:56.391 [2024-11-20 07:28:00.923004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:56.391 [2024-11-20 07:28:00.923012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:31:56.649 passed 00:31:56.649 Test: blockdev nvme passthru rw ...passed 00:31:56.649 Test: blockdev nvme passthru vendor specific ...[2024-11-20 07:28:01.006246] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:31:56.649 [2024-11-20 07:28:01.006270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:31:56.649 [2024-11-20 07:28:01.006390] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:31:56.649 [2024-11-20 07:28:01.006400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:31:56.649 [2024-11-20 07:28:01.006508] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:31:56.649 [2024-11-20 07:28:01.006521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:31:56.649 [2024-11-20 07:28:01.006636] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:31:56.649 [2024-11-20 07:28:01.006645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:31:56.649 passed 00:31:56.649 Test: blockdev nvme admin passthru ...passed 00:31:56.649 Test: blockdev copy ...passed 00:31:56.649 00:31:56.649 Run Summary: Type Total Ran Passed Failed Inactive 00:31:56.649 suites 1 1 n/a 0 0 00:31:56.649 tests 23 23 23 0 0 00:31:56.649 asserts 152 152 152 0 n/a 00:31:56.649 00:31:56.649 Elapsed time = 1.129 
seconds 00:31:56.908 07:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:56.908 07:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:56.908 07:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:56.908 07:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:56.908 07:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:31:56.908 07:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:31:56.908 07:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:56.908 07:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:31:56.908 07:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:56.908 07:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:31:56.908 07:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:56.908 07:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:56.908 rmmod nvme_tcp 00:31:56.908 rmmod nvme_fabrics 00:31:56.908 rmmod nvme_keyring 00:31:56.908 07:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:56.908 07:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:31:56.908 07:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:31:56.908 07:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- nvmf/common.sh@517 -- # '[' -n 1426257 ']' 00:31:56.908 07:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 1426257 00:31:56.908 07:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@952 -- # '[' -z 1426257 ']' 00:31:56.908 07:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@956 -- # kill -0 1426257 00:31:56.908 07:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@957 -- # uname 00:31:56.908 07:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:31:56.908 07:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1426257 00:31:56.908 07:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # process_name=reactor_3 00:31:56.908 07:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@962 -- # '[' reactor_3 = sudo ']' 00:31:56.908 07:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1426257' 00:31:56.908 killing process with pid 1426257 00:31:56.908 07:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@971 -- # kill 1426257 00:31:56.908 07:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@976 -- # wait 1426257 00:31:57.167 07:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:57.167 07:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:57.167 07:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:57.167 07:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@297 -- # iptr 00:31:57.167 07:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:31:57.167 07:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:31:57.167 07:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:57.167 07:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:57.167 07:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:57.167 07:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:57.167 07:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:57.167 07:28:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:59.072 07:28:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:59.072 00:31:59.072 real 0m9.954s 00:31:59.072 user 0m8.507s 00:31:59.072 sys 0m5.278s 00:31:59.072 07:28:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1128 -- # xtrace_disable 00:31:59.072 07:28:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:59.072 ************************************ 00:31:59.072 END TEST nvmf_bdevio 00:31:59.072 ************************************ 00:31:59.332 07:28:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:31:59.332 00:31:59.332 real 4m32.746s 00:31:59.332 user 9m1.869s 00:31:59.332 sys 1m51.938s 00:31:59.332 07:28:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1128 
-- # xtrace_disable 00:31:59.332 07:28:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:59.332 ************************************ 00:31:59.332 END TEST nvmf_target_core_interrupt_mode 00:31:59.332 ************************************ 00:31:59.332 07:28:03 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:31:59.332 07:28:03 nvmf_tcp -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:31:59.332 07:28:03 nvmf_tcp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:31:59.332 07:28:03 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:59.332 ************************************ 00:31:59.332 START TEST nvmf_interrupt 00:31:59.332 ************************************ 00:31:59.332 07:28:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:31:59.332 * Looking for test storage... 
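The `run_test nvmf_interrupt …` invocation above, together with the `START TEST` / `END TEST` banners and the `real`/`user`/`sys` timing printed earlier for `nvmf_bdevio`, suggests a timing-and-banner wrapper around each test script. A minimal sketch of such a wrapper is below; `run_test_sketch` is a hypothetical name, and the real helper (in `spdk/test/common/autotest_common.sh`) differs in detail:

```shell
#!/usr/bin/env bash
# Hypothetical sketch of a run_test-style wrapper, loosely modeled on the
# banners and timing visible in this log. Not the actual SPDK implementation.
run_test_sketch() {
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    local start=$SECONDS rc=0
    "$@" || rc=$?    # run the test command, capturing its exit code
    echo "************************************"
    echo "END TEST $name (rc=$rc, elapsed $((SECONDS - start))s)"
    echo "************************************"
    return $rc
}

run_test_sketch demo_true true
```

The wrapper propagates the test's exit code so a failing script still fails the surrounding pipeline stage.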
00:31:59.332 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:59.332 07:28:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:31:59.332 07:28:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1691 -- # lcov --version 00:31:59.332 07:28:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:31:59.332 07:28:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:31:59.332 07:28:03 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:59.332 07:28:03 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:59.332 07:28:03 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:59.332 07:28:03 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:31:59.332 07:28:03 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:31:59.332 07:28:03 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:31:59.332 07:28:03 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:31:59.332 07:28:03 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:31:59.332 07:28:03 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:31:59.332 07:28:03 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:31:59.332 07:28:03 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:59.332 07:28:03 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:31:59.332 07:28:03 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:31:59.332 07:28:03 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:59.332 07:28:03 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:59.332 07:28:03 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:31:59.332 07:28:03 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:31:59.332 07:28:03 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:59.332 07:28:03 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:31:59.332 07:28:03 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:31:59.332 07:28:03 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:31:59.332 07:28:03 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:31:59.332 07:28:03 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:59.332 07:28:03 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:31:59.332 07:28:03 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:31:59.332 07:28:03 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:59.332 07:28:03 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:59.332 07:28:03 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:31:59.332 07:28:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:59.332 07:28:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:31:59.332 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:59.332 --rc genhtml_branch_coverage=1 00:31:59.332 --rc genhtml_function_coverage=1 00:31:59.332 --rc genhtml_legend=1 00:31:59.332 --rc geninfo_all_blocks=1 00:31:59.332 --rc geninfo_unexecuted_blocks=1 00:31:59.332 00:31:59.332 ' 00:31:59.332 07:28:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:31:59.332 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:59.332 --rc genhtml_branch_coverage=1 00:31:59.332 --rc 
genhtml_function_coverage=1 00:31:59.332 --rc genhtml_legend=1 00:31:59.332 --rc geninfo_all_blocks=1 00:31:59.332 --rc geninfo_unexecuted_blocks=1 00:31:59.332 00:31:59.332 ' 00:31:59.333 07:28:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:31:59.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:59.333 --rc genhtml_branch_coverage=1 00:31:59.333 --rc genhtml_function_coverage=1 00:31:59.333 --rc genhtml_legend=1 00:31:59.333 --rc geninfo_all_blocks=1 00:31:59.333 --rc geninfo_unexecuted_blocks=1 00:31:59.333 00:31:59.333 ' 00:31:59.333 07:28:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:31:59.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:59.333 --rc genhtml_branch_coverage=1 00:31:59.333 --rc genhtml_function_coverage=1 00:31:59.333 --rc genhtml_legend=1 00:31:59.333 --rc geninfo_all_blocks=1 00:31:59.333 --rc geninfo_unexecuted_blocks=1 00:31:59.333 00:31:59.333 ' 00:31:59.333 07:28:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:59.333 07:28:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:31:59.333 07:28:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:59.333 07:28:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:59.333 07:28:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:59.333 07:28:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:59.333 07:28:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:59.592 07:28:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:59.592 07:28:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:59.592 07:28:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:59.592 
07:28:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:59.592 07:28:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:59.592 07:28:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:31:59.592 07:28:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:31:59.592 07:28:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:59.592 07:28:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:59.592 07:28:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:59.592 07:28:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:59.592 07:28:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:59.592 07:28:03 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:31:59.592 07:28:03 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:59.592 07:28:03 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:59.592 07:28:03 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:59.592 07:28:03 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:59.592 
07:28:03 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:59.592 07:28:03 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:59.592 07:28:03 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # export PATH 00:31:59.592 07:28:03 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:59.592 07:28:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:31:59.592 07:28:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:59.592 07:28:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:59.592 07:28:03 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:59.592 07:28:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:59.592 07:28:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:59.592 07:28:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:59.592 07:28:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:59.592 07:28:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:59.592 07:28:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:59.592 07:28:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:59.592 07:28:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:31:59.592 07:28:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:31:59.592 07:28:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:31:59.592 07:28:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:59.592 07:28:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:59.592 07:28:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:59.592 07:28:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:59.592 07:28:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:59.593 07:28:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:59.593 07:28:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:59.593 07:28:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:59.593 07:28:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:59.593 
07:28:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:59.593 07:28:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:31:59.593 07:28:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:06.260 07:28:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:06.260 07:28:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:32:06.260 07:28:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:06.260 07:28:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:06.260 07:28:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:06.260 07:28:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:06.260 07:28:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:06.260 07:28:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:32:06.260 07:28:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:06.260 07:28:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:32:06.260 07:28:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:32:06.260 07:28:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:32:06.260 07:28:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:32:06.260 07:28:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:32:06.260 07:28:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:32:06.260 07:28:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:06.260 07:28:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:06.260 07:28:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:06.260 07:28:09 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:06.260 07:28:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:06.260 07:28:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:06.260 07:28:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:06.260 07:28:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:06.260 07:28:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:06.260 07:28:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:06.260 07:28:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:06.260 07:28:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:06.260 07:28:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:06.260 07:28:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:06.260 07:28:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:06.260 07:28:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:06.260 07:28:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:06.260 07:28:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:06.260 07:28:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:06.260 07:28:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:32:06.260 Found 0000:86:00.0 (0x8086 - 0x159b) 00:32:06.260 07:28:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:06.260 07:28:09 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:06.260 07:28:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:06.260 07:28:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:06.260 07:28:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:06.260 07:28:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:06.260 07:28:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:32:06.260 Found 0000:86:00.1 (0x8086 - 0x159b) 00:32:06.260 07:28:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:06.260 07:28:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:06.260 07:28:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:06.260 07:28:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:06.260 07:28:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:06.260 07:28:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:06.260 07:28:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:06.260 07:28:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:06.260 07:28:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:06.260 07:28:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:06.260 07:28:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:06.260 07:28:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:06.260 07:28:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:06.260 07:28:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:06.260 07:28:09 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:06.260 07:28:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:32:06.260 Found net devices under 0000:86:00.0: cvl_0_0 00:32:06.260 07:28:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:06.260 07:28:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:06.260 07:28:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:06.260 07:28:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:06.260 07:28:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:06.260 07:28:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:06.260 07:28:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:06.260 07:28:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:06.260 07:28:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:32:06.260 Found net devices under 0000:86:00.1: cvl_0_1 00:32:06.260 07:28:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:06.260 07:28:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:06.260 07:28:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # is_hw=yes 00:32:06.260 07:28:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:06.260 07:28:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:06.260 07:28:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:06.260 07:28:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:06.260 07:28:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:06.260 07:28:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:06.260 07:28:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:06.260 07:28:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:06.260 07:28:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:06.260 07:28:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:06.260 07:28:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:06.260 07:28:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:06.260 07:28:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:06.261 07:28:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:06.261 07:28:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:06.261 07:28:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:06.261 07:28:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:06.261 07:28:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:06.261 07:28:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:06.261 07:28:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:06.261 07:28:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:06.261 07:28:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:06.261 07:28:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:06.261 07:28:09 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:06.261 07:28:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:06.261 07:28:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:06.261 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:06.261 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.468 ms 00:32:06.261 00:32:06.261 --- 10.0.0.2 ping statistics --- 00:32:06.261 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:06.261 rtt min/avg/max/mdev = 0.468/0.468/0.468/0.000 ms 00:32:06.261 07:28:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:06.261 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:06.261 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.203 ms 00:32:06.261 00:32:06.261 --- 10.0.0.1 ping statistics --- 00:32:06.261 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:06.261 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:32:06.261 07:28:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:06.261 07:28:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@450 -- # return 0 00:32:06.261 07:28:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:06.261 07:28:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:06.261 07:28:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:06.261 07:28:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:06.261 07:28:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:06.261 07:28:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:06.261 07:28:09 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:06.261 07:28:09 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:32:06.261 07:28:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:06.261 07:28:09 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:06.261 07:28:09 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:06.261 07:28:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # nvmfpid=1430053 00:32:06.261 07:28:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # waitforlisten 1430053 00:32:06.261 07:28:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:32:06.261 07:28:09 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@833 -- # '[' -z 1430053 ']' 00:32:06.261 07:28:09 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:06.261 07:28:09 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@838 -- # local max_retries=100 00:32:06.261 07:28:09 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:06.261 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:06.261 07:28:09 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@842 -- # xtrace_disable 00:32:06.261 07:28:09 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:06.261 [2024-11-20 07:28:09.874368] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:06.261 [2024-11-20 07:28:09.875355] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 
00:32:06.261 [2024-11-20 07:28:09.875394] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:06.261 [2024-11-20 07:28:09.952176] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:32:06.261 [2024-11-20 07:28:09.993392] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:06.261 [2024-11-20 07:28:09.993427] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:06.261 [2024-11-20 07:28:09.993434] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:06.261 [2024-11-20 07:28:09.993440] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:06.261 [2024-11-20 07:28:09.993445] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:06.261 [2024-11-20 07:28:09.994621] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:06.261 [2024-11-20 07:28:09.994622] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:06.261 [2024-11-20 07:28:10.065875] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:06.261 [2024-11-20 07:28:10.066396] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:32:06.261 [2024-11-20 07:28:10.066634] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:32:06.261 07:28:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:32:06.261 07:28:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@866 -- # return 0 00:32:06.261 07:28:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:06.261 07:28:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:06.261 07:28:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:06.261 07:28:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:06.261 07:28:10 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:32:06.261 07:28:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:32:06.261 07:28:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:32:06.261 07:28:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:32:06.261 5000+0 records in 00:32:06.261 5000+0 records out 00:32:06.261 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0179231 s, 571 MB/s 00:32:06.261 07:28:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048 00:32:06.261 07:28:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:06.261 07:28:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:06.261 AIO0 00:32:06.261 07:28:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:06.261 07:28:10 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:32:06.261 07:28:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:06.261 07:28:10 
nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:06.261 [2024-11-20 07:28:10.195450] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:06.261 07:28:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:06.261 07:28:10 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:32:06.261 07:28:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:06.261 07:28:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:06.261 07:28:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:06.261 07:28:10 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:32:06.261 07:28:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:06.261 07:28:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:06.261 07:28:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:06.261 07:28:10 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:06.261 07:28:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:06.261 07:28:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:06.261 [2024-11-20 07:28:10.235817] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:06.261 07:28:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:06.261 07:28:10 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:32:06.261 07:28:10 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 1430053 0 00:32:06.261 07:28:10 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1430053 0 idle 00:32:06.261 07:28:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1430053 00:32:06.261 07:28:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:32:06.261 07:28:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:32:06.262 07:28:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:32:06.262 07:28:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:06.262 07:28:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:32:06.262 07:28:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:32:06.262 07:28:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:06.262 07:28:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:06.262 07:28:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:06.262 07:28:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1430053 -w 256 00:32:06.262 07:28:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:32:06.262 07:28:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1430053 root 20 0 128.2g 46080 33792 S 0.0 0.0 0:00.25 reactor_0' 00:32:06.262 07:28:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1430053 root 20 0 128.2g 46080 33792 S 0.0 0.0 0:00.25 reactor_0 00:32:06.262 07:28:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:06.262 07:28:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:06.262 07:28:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:32:06.262 07:28:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:32:06.262 07:28:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:32:06.262 
07:28:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:32:06.262 07:28:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:32:06.262 07:28:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:06.262 07:28:10 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:32:06.262 07:28:10 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 1430053 1 00:32:06.262 07:28:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1430053 1 idle 00:32:06.262 07:28:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1430053 00:32:06.262 07:28:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:32:06.262 07:28:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:32:06.262 07:28:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:32:06.262 07:28:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:06.262 07:28:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:32:06.262 07:28:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:32:06.262 07:28:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:06.262 07:28:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:06.262 07:28:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:06.262 07:28:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1430053 -w 256 00:32:06.262 07:28:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:32:06.262 07:28:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1430058 root 20 0 128.2g 46080 33792 S 0.0 0.0 0:00.00 reactor_1' 00:32:06.262 07:28:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1430058 root 20 0 128.2g 
46080 33792 S 0.0 0.0 0:00.00 reactor_1 00:32:06.262 07:28:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:06.262 07:28:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:06.262 07:28:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:32:06.262 07:28:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:32:06.262 07:28:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:32:06.262 07:28:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:32:06.262 07:28:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:32:06.262 07:28:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:06.262 07:28:10 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:32:06.262 07:28:10 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=1430139 00:32:06.262 07:28:10 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:32:06.262 07:28:10 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:32:06.262 07:28:10 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:32:06.262 07:28:10 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 1430053 0 00:32:06.262 07:28:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 1430053 0 busy 00:32:06.262 07:28:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1430053 00:32:06.262 07:28:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:32:06.262 07:28:10 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@12 -- # local state=busy 00:32:06.262 07:28:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:32:06.262 07:28:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:06.262 07:28:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:32:06.262 07:28:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:06.262 07:28:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:06.262 07:28:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:06.262 07:28:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1430053 -w 256 00:32:06.262 07:28:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:32:06.262 07:28:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1430053 root 20 0 128.2g 46848 33792 S 13.3 0.0 0:00.27 reactor_0' 00:32:06.262 07:28:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1430053 root 20 0 128.2g 46848 33792 S 13.3 0.0 0:00.27 reactor_0 00:32:06.262 07:28:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:06.262 07:28:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:06.262 07:28:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=13.3 00:32:06.262 07:28:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=13 00:32:06.262 07:28:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:32:06.262 07:28:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:32:06.262 07:28:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@31 -- # sleep 1 00:32:07.631 07:28:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j-- )) 00:32:07.631 07:28:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:07.631 07:28:11 
nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1430053 -w 256 00:32:07.631 07:28:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:32:07.631 07:28:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1430053 root 20 0 128.2g 46848 33792 R 99.9 0.0 0:02.56 reactor_0' 00:32:07.631 07:28:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1430053 root 20 0 128.2g 46848 33792 R 99.9 0.0 0:02.56 reactor_0 00:32:07.631 07:28:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:07.631 07:28:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:07.631 07:28:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:32:07.631 07:28:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:32:07.631 07:28:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:32:07.631 07:28:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:32:07.631 07:28:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:32:07.631 07:28:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:07.631 07:28:11 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:32:07.631 07:28:11 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:32:07.631 07:28:11 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 1430053 1 00:32:07.631 07:28:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 1430053 1 busy 00:32:07.631 07:28:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1430053 00:32:07.631 07:28:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:32:07.631 07:28:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:32:07.631 07:28:11 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@13 -- # local busy_threshold=30 00:32:07.631 07:28:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:07.631 07:28:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:32:07.631 07:28:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:07.631 07:28:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:07.631 07:28:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:07.631 07:28:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1430053 -w 256 00:32:07.631 07:28:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:32:07.631 07:28:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1430058 root 20 0 128.2g 46848 33792 R 99.9 0.0 0:01.34 reactor_1' 00:32:07.631 07:28:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1430058 root 20 0 128.2g 46848 33792 R 99.9 0.0 0:01.34 reactor_1 00:32:07.631 07:28:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:07.631 07:28:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:07.631 07:28:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:32:07.631 07:28:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:32:07.631 07:28:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:32:07.631 07:28:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:32:07.631 07:28:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:32:07.631 07:28:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:07.631 07:28:12 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 1430139 00:32:17.598 Initializing NVMe Controllers 00:32:17.598 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: 
nqn.2016-06.io.spdk:cnode1 00:32:17.598 Controller IO queue size 256, less than required. 00:32:17.598 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:32:17.598 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:32:17.598 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:32:17.598 Initialization complete. Launching workers. 00:32:17.598 ======================================================== 00:32:17.598 Latency(us) 00:32:17.598 Device Information : IOPS MiB/s Average min max 00:32:17.598 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 16459.35 64.29 15561.01 3054.07 31937.57 00:32:17.598 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 16612.35 64.89 15414.41 7690.07 27747.39 00:32:17.598 ======================================================== 00:32:17.598 Total : 33071.69 129.19 15487.37 3054.07 31937.57 00:32:17.598 00:32:17.598 07:28:20 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:32:17.598 07:28:20 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 1430053 0 00:32:17.598 07:28:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1430053 0 idle 00:32:17.598 07:28:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1430053 00:32:17.598 07:28:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:32:17.598 07:28:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:32:17.598 07:28:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:32:17.598 07:28:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:17.598 07:28:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:32:17.598 07:28:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle 
!= \i\d\l\e ]] 00:32:17.598 07:28:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:17.598 07:28:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:17.598 07:28:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:17.598 07:28:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1430053 -w 256 00:32:17.598 07:28:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:32:17.598 07:28:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1430053 root 20 0 128.2g 46848 33792 S 0.0 0.0 0:20.25 reactor_0' 00:32:17.598 07:28:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1430053 root 20 0 128.2g 46848 33792 S 0.0 0.0 0:20.25 reactor_0 00:32:17.598 07:28:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:17.598 07:28:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:17.598 07:28:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:32:17.598 07:28:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:32:17.598 07:28:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:32:17.598 07:28:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:32:17.598 07:28:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:32:17.598 07:28:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:17.599 07:28:21 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:32:17.599 07:28:21 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 1430053 1 00:32:17.599 07:28:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1430053 1 idle 00:32:17.599 07:28:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1430053 00:32:17.599 07:28:21 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@11 -- # local idx=1 00:32:17.599 07:28:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:32:17.599 07:28:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:32:17.599 07:28:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:17.599 07:28:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:32:17.599 07:28:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:32:17.599 07:28:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:17.599 07:28:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:17.599 07:28:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:17.599 07:28:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1430053 -w 256 00:32:17.599 07:28:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:32:17.599 07:28:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1430058 root 20 0 128.2g 46848 33792 S 0.0 0.0 0:10.00 reactor_1' 00:32:17.599 07:28:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1430058 root 20 0 128.2g 46848 33792 S 0.0 0.0 0:10.00 reactor_1 00:32:17.599 07:28:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:17.599 07:28:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:17.599 07:28:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:32:17.599 07:28:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:32:17.599 07:28:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:32:17.599 07:28:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:32:17.599 07:28:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:32:17.599 07:28:21 
nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:17.599 07:28:21 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:32:17.599 07:28:21 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 00:32:17.599 07:28:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1200 -- # local i=0 00:32:17.599 07:28:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:32:17.599 07:28:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:32:17.599 07:28:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1207 -- # sleep 2 00:32:19.507 07:28:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:32:19.507 07:28:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:32:19.507 07:28:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:32:19.507 07:28:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:32:19.507 07:28:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:32:19.507 07:28:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # return 0 00:32:19.507 07:28:23 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:32:19.507 07:28:23 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 1430053 0 00:32:19.507 07:28:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1430053 0 idle 00:32:19.507 07:28:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1430053 00:32:19.507 07:28:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local 
idx=0 00:32:19.507 07:28:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:32:19.507 07:28:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:32:19.507 07:28:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:19.507 07:28:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:32:19.507 07:28:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:32:19.507 07:28:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:19.507 07:28:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:19.507 07:28:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:19.507 07:28:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1430053 -w 256 00:32:19.507 07:28:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:32:19.507 07:28:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1430053 root 20 0 128.2g 72960 33792 S 0.0 0.0 0:20.51 reactor_0' 00:32:19.507 07:28:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1430053 root 20 0 128.2g 72960 33792 S 0.0 0.0 0:20.51 reactor_0 00:32:19.507 07:28:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:19.507 07:28:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:19.507 07:28:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:32:19.507 07:28:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:32:19.507 07:28:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:32:19.507 07:28:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:32:19.507 07:28:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:32:19.507 07:28:23 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@35 -- # return 0 00:32:19.507 07:28:23 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:32:19.507 07:28:23 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 1430053 1 00:32:19.507 07:28:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1430053 1 idle 00:32:19.507 07:28:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1430053 00:32:19.507 07:28:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:32:19.507 07:28:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:32:19.507 07:28:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:32:19.507 07:28:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:19.507 07:28:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:32:19.507 07:28:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:32:19.507 07:28:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:19.507 07:28:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:19.507 07:28:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:19.507 07:28:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1430053 -w 256 00:32:19.507 07:28:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:32:19.507 07:28:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1430058 root 20 0 128.2g 72960 33792 S 0.0 0.0 0:10.10 reactor_1' 00:32:19.507 07:28:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1430058 root 20 0 128.2g 72960 33792 S 0.0 0.0 0:10.10 reactor_1 00:32:19.507 07:28:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:19.507 07:28:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:19.507 
07:28:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:32:19.507 07:28:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:32:19.507 07:28:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:32:19.507 07:28:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:32:19.507 07:28:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:32:19.507 07:28:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:19.507 07:28:24 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:32:19.767 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:32:19.767 07:28:24 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:32:19.767 07:28:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1221 -- # local i=0 00:32:19.767 07:28:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:32:19.767 07:28:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:32:19.767 07:28:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:32:19.767 07:28:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:32:19.767 07:28:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1233 -- # return 0 00:32:19.767 07:28:24 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:32:19.767 07:28:24 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:32:19.768 07:28:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:19.768 07:28:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:32:19.768 07:28:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:19.768 07:28:24 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:32:19.768 07:28:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:19.768 07:28:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:19.768 rmmod nvme_tcp 00:32:19.768 rmmod nvme_fabrics 00:32:19.768 rmmod nvme_keyring 00:32:19.768 07:28:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:19.768 07:28:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:32:19.768 07:28:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:32:19.768 07:28:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@517 -- # '[' -n 1430053 ']' 00:32:19.768 07:28:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # killprocess 1430053 00:32:19.768 07:28:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@952 -- # '[' -z 1430053 ']' 00:32:19.768 07:28:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@956 -- # kill -0 1430053 00:32:19.768 07:28:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@957 -- # uname 00:32:19.768 07:28:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:32:19.768 07:28:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1430053 00:32:19.768 07:28:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:32:19.768 07:28:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:32:19.768 07:28:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1430053' 00:32:19.768 killing process with pid 1430053 00:32:19.768 07:28:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@971 -- # kill 1430053 00:32:19.768 07:28:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@976 -- # wait 1430053 00:32:20.027 07:28:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:20.027 07:28:24 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:20.027 07:28:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:20.027 07:28:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:32:20.027 07:28:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-save 00:32:20.027 07:28:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:20.027 07:28:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-restore 00:32:20.027 07:28:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:20.027 07:28:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:20.027 07:28:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:20.027 07:28:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:20.027 07:28:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:22.561 07:28:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:22.561 00:32:22.561 real 0m22.901s 00:32:22.561 user 0m39.597s 00:32:22.561 sys 0m8.655s 00:32:22.561 07:28:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1128 -- # xtrace_disable 00:32:22.561 07:28:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:22.561 ************************************ 00:32:22.561 END TEST nvmf_interrupt 00:32:22.561 ************************************ 00:32:22.561 00:32:22.561 real 27m32.341s 00:32:22.561 user 57m3.192s 00:32:22.561 sys 9m23.129s 00:32:22.561 07:28:26 nvmf_tcp -- common/autotest_common.sh@1128 -- # xtrace_disable 00:32:22.561 07:28:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:22.561 ************************************ 00:32:22.561 END TEST nvmf_tcp 00:32:22.561 ************************************ 00:32:22.561 07:28:26 -- 
spdk/autotest.sh@281 -- # [[ 0 -eq 0 ]] 00:32:22.561 07:28:26 -- spdk/autotest.sh@282 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:32:22.561 07:28:26 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:32:22.561 07:28:26 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:32:22.561 07:28:26 -- common/autotest_common.sh@10 -- # set +x 00:32:22.561 ************************************ 00:32:22.561 START TEST spdkcli_nvmf_tcp 00:32:22.561 ************************************ 00:32:22.561 07:28:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:32:22.561 * Looking for test storage... 00:32:22.561 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:32:22.561 07:28:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:32:22.561 07:28:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:32:22.561 07:28:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:32:22.561 07:28:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:32:22.561 07:28:26 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:22.561 07:28:26 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:22.562 07:28:26 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:22.562 07:28:26 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:32:22.562 07:28:26 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:32:22.562 07:28:26 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:32:22.562 07:28:26 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:32:22.562 07:28:26 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:32:22.562 07:28:26 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:32:22.562 
07:28:26 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:32:22.562 07:28:26 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:22.562 07:28:26 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:32:22.562 07:28:26 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:32:22.562 07:28:26 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:22.562 07:28:26 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:22.562 07:28:26 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:32:22.562 07:28:26 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:32:22.562 07:28:26 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:22.562 07:28:26 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:32:22.562 07:28:26 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:32:22.562 07:28:26 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:32:22.562 07:28:26 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:32:22.562 07:28:26 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:22.562 07:28:26 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:32:22.562 07:28:26 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:32:22.562 07:28:26 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:22.562 07:28:26 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:22.562 07:28:26 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:32:22.562 07:28:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:22.562 07:28:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:32:22.562 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:22.562 --rc genhtml_branch_coverage=1 00:32:22.562 --rc genhtml_function_coverage=1 00:32:22.562 
--rc genhtml_legend=1 00:32:22.562 --rc geninfo_all_blocks=1 00:32:22.562 --rc geninfo_unexecuted_blocks=1 00:32:22.562 00:32:22.562 ' 00:32:22.562 07:28:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:32:22.562 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:22.562 --rc genhtml_branch_coverage=1 00:32:22.562 --rc genhtml_function_coverage=1 00:32:22.562 --rc genhtml_legend=1 00:32:22.562 --rc geninfo_all_blocks=1 00:32:22.562 --rc geninfo_unexecuted_blocks=1 00:32:22.562 00:32:22.562 ' 00:32:22.562 07:28:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:32:22.562 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:22.562 --rc genhtml_branch_coverage=1 00:32:22.562 --rc genhtml_function_coverage=1 00:32:22.562 --rc genhtml_legend=1 00:32:22.562 --rc geninfo_all_blocks=1 00:32:22.562 --rc geninfo_unexecuted_blocks=1 00:32:22.562 00:32:22.562 ' 00:32:22.562 07:28:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:32:22.562 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:22.562 --rc genhtml_branch_coverage=1 00:32:22.562 --rc genhtml_function_coverage=1 00:32:22.562 --rc genhtml_legend=1 00:32:22.562 --rc geninfo_all_blocks=1 00:32:22.562 --rc geninfo_unexecuted_blocks=1 00:32:22.562 00:32:22.562 ' 00:32:22.562 07:28:26 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:32:22.562 07:28:26 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:32:22.562 07:28:26 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:32:22.562 07:28:26 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:22.562 07:28:26 spdkcli_nvmf_tcp -- 
nvmf/common.sh@7 -- # uname -s 00:32:22.562 07:28:26 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:22.562 07:28:26 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:22.562 07:28:26 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:22.562 07:28:26 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:22.562 07:28:26 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:22.562 07:28:26 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:22.562 07:28:26 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:22.562 07:28:26 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:22.562 07:28:26 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:22.562 07:28:26 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:22.562 07:28:26 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:32:22.562 07:28:26 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:32:22.562 07:28:26 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:22.562 07:28:26 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:22.562 07:28:26 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:22.562 07:28:26 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:22.562 07:28:26 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:22.562 07:28:26 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:32:22.562 07:28:26 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:22.562 07:28:26 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:22.562 07:28:26 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:22.562 07:28:26 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:22.562 07:28:26 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:22.562 07:28:26 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:22.562 07:28:26 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:32:22.562 07:28:26 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:22.562 07:28:26 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:32:22.562 07:28:26 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:22.562 07:28:26 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:22.562 07:28:26 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:22.562 07:28:26 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:22.562 07:28:26 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:22.562 07:28:26 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:22.562 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:22.562 07:28:26 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:22.562 07:28:26 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:22.562 07:28:26 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:22.562 07:28:26 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:32:22.562 07:28:26 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:32:22.562 07:28:26 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:32:22.562 07:28:26 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:32:22.563 07:28:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:22.563 07:28:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:22.563 07:28:26 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 
00:32:22.563 07:28:26 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=1432986 00:32:22.563 07:28:26 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 1432986 00:32:22.563 07:28:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@833 -- # '[' -z 1432986 ']' 00:32:22.563 07:28:26 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:32:22.563 07:28:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:22.563 07:28:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # local max_retries=100 00:32:22.563 07:28:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:22.563 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:22.563 07:28:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@842 -- # xtrace_disable 00:32:22.563 07:28:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:22.563 [2024-11-20 07:28:26.983506] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 
00:32:22.563 [2024-11-20 07:28:26.983558] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1432986 ] 00:32:22.563 [2024-11-20 07:28:27.056293] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:32:22.563 [2024-11-20 07:28:27.099739] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:22.563 [2024-11-20 07:28:27.099743] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:22.822 07:28:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:32:22.822 07:28:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@866 -- # return 0 00:32:22.822 07:28:27 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:32:22.822 07:28:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:22.822 07:28:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:22.822 07:28:27 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:32:22.822 07:28:27 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:32:22.822 07:28:27 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:32:22.822 07:28:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:22.822 07:28:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:22.822 07:28:27 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:32:22.822 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:32:22.822 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:32:22.822 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:32:22.822 '\''/bdevs/malloc create 32 
512 Malloc5'\'' '\''Malloc5'\'' True 00:32:22.822 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:32:22.822 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:32:22.822 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:32:22.822 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:32:22.822 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:32:22.822 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:32:22.822 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:32:22.822 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:32:22.822 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:32:22.822 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:32:22.822 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:32:22.822 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:32:22.822 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:32:22.822 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:32:22.822 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' 
'\''nqn.2014-08.org.spdk:cnode2'\'' True 00:32:22.822 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:32:22.822 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:32:22.822 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:32:22.822 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:32:22.822 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:32:22.822 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:32:22.822 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:32:22.822 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:32:22.822 ' 00:32:26.114 [2024-11-20 07:28:29.922003] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:27.050 [2024-11-20 07:28:31.262517] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:32:29.585 [2024-11-20 07:28:33.746119] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:32:31.490 [2024-11-20 07:28:35.928983] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:32:33.396 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:32:33.396 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:32:33.396 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:32:33.396 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:32:33.396 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:32:33.396 Executing command: 
['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:32:33.396 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:32:33.396 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:32:33.396 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:32:33.396 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:32:33.396 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:32:33.396 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:32:33.396 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:32:33.396 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:32:33.396 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:32:33.396 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:32:33.396 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:32:33.396 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:32:33.396 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:32:33.396 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:32:33.396 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:32:33.396 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:32:33.396 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:32:33.396 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:32:33.396 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:32:33.396 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:32:33.396 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:32:33.396 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:32:33.396 07:28:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:32:33.396 07:28:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:33.396 07:28:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:33.396 07:28:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:32:33.396 07:28:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:33.396 07:28:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:33.396 07:28:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:32:33.396 07:28:37 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:32:33.656 07:28:38 spdkcli_nvmf_tcp -- 
spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:32:33.656 07:28:38 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:32:33.656 07:28:38 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:32:33.656 07:28:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:33.656 07:28:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:33.915 07:28:38 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:32:33.915 07:28:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:33.915 07:28:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:33.915 07:28:38 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:32:33.915 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:32:33.915 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:32:33.915 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:32:33.915 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:32:33.915 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:32:33.915 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:32:33.915 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:32:33.915 '\''/bdevs/malloc delete 
Malloc6'\'' '\''Malloc6'\'' 00:32:33.915 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:32:33.915 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:32:33.915 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:32:33.915 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:32:33.915 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:32:33.915 ' 00:32:39.191 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:32:39.191 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:32:39.191 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:32:39.191 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:32:39.191 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:32:39.191 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:32:39.191 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:32:39.191 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:32:39.191 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:32:39.191 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:32:39.191 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:32:39.191 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:32:39.191 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:32:39.191 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:32:39.451 07:28:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit 
spdkcli_clear_nvmf_config 00:32:39.451 07:28:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:39.451 07:28:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:39.451 07:28:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 1432986 00:32:39.451 07:28:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # '[' -z 1432986 ']' 00:32:39.451 07:28:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # kill -0 1432986 00:32:39.451 07:28:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@957 -- # uname 00:32:39.451 07:28:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:32:39.451 07:28:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1432986 00:32:39.451 07:28:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:32:39.451 07:28:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:32:39.451 07:28:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1432986' 00:32:39.451 killing process with pid 1432986 00:32:39.451 07:28:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@971 -- # kill 1432986 00:32:39.451 07:28:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@976 -- # wait 1432986 00:32:39.711 07:28:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:32:39.711 07:28:44 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:32:39.711 07:28:44 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 1432986 ']' 00:32:39.711 07:28:44 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 1432986 00:32:39.711 07:28:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # '[' -z 1432986 ']' 00:32:39.711 07:28:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # kill -0 1432986 00:32:39.711 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (1432986) - No such process 00:32:39.711 07:28:44 
spdkcli_nvmf_tcp -- common/autotest_common.sh@979 -- # echo 'Process with pid 1432986 is not found' 00:32:39.711 Process with pid 1432986 is not found 00:32:39.711 07:28:44 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:32:39.711 07:28:44 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:32:39.711 07:28:44 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:32:39.711 00:32:39.711 real 0m17.358s 00:32:39.711 user 0m38.268s 00:32:39.711 sys 0m0.814s 00:32:39.711 07:28:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@1128 -- # xtrace_disable 00:32:39.711 07:28:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:39.711 ************************************ 00:32:39.711 END TEST spdkcli_nvmf_tcp 00:32:39.711 ************************************ 00:32:39.711 07:28:44 -- spdk/autotest.sh@283 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:32:39.711 07:28:44 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:32:39.711 07:28:44 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:32:39.711 07:28:44 -- common/autotest_common.sh@10 -- # set +x 00:32:39.711 ************************************ 00:32:39.711 START TEST nvmf_identify_passthru 00:32:39.711 ************************************ 00:32:39.711 07:28:44 nvmf_identify_passthru -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:32:39.711 * Looking for test storage... 
00:32:39.711 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:39.711 07:28:44 nvmf_identify_passthru -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:32:39.711 07:28:44 nvmf_identify_passthru -- common/autotest_common.sh@1691 -- # lcov --version 00:32:39.711 07:28:44 nvmf_identify_passthru -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:32:39.972 07:28:44 nvmf_identify_passthru -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:32:39.972 07:28:44 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:39.972 07:28:44 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:39.972 07:28:44 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:39.972 07:28:44 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:32:39.972 07:28:44 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:32:39.972 07:28:44 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:32:39.972 07:28:44 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:32:39.972 07:28:44 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:32:39.972 07:28:44 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:32:39.972 07:28:44 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:32:39.972 07:28:44 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:39.972 07:28:44 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in 00:32:39.972 07:28:44 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:32:39.972 07:28:44 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:39.972 07:28:44 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:39.972 07:28:44 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:32:39.972 07:28:44 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:32:39.972 07:28:44 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:39.972 07:28:44 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:32:39.972 07:28:44 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:32:39.972 07:28:44 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:32:39.972 07:28:44 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:32:39.972 07:28:44 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:39.972 07:28:44 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:32:39.972 07:28:44 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:32:39.972 07:28:44 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:39.972 07:28:44 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:39.972 07:28:44 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:32:39.972 07:28:44 nvmf_identify_passthru -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:39.972 07:28:44 nvmf_identify_passthru -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:32:39.972 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:39.972 --rc genhtml_branch_coverage=1 00:32:39.972 --rc genhtml_function_coverage=1 00:32:39.972 --rc genhtml_legend=1 00:32:39.972 --rc geninfo_all_blocks=1 00:32:39.972 --rc geninfo_unexecuted_blocks=1 00:32:39.972 00:32:39.972 ' 00:32:39.972 07:28:44 nvmf_identify_passthru -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:32:39.972 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:39.972 --rc genhtml_branch_coverage=1 00:32:39.972 --rc genhtml_function_coverage=1 
00:32:39.972 --rc genhtml_legend=1 00:32:39.972 --rc geninfo_all_blocks=1 00:32:39.972 --rc geninfo_unexecuted_blocks=1 00:32:39.972 00:32:39.972 ' 00:32:39.972 07:28:44 nvmf_identify_passthru -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:32:39.972 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:39.972 --rc genhtml_branch_coverage=1 00:32:39.972 --rc genhtml_function_coverage=1 00:32:39.972 --rc genhtml_legend=1 00:32:39.972 --rc geninfo_all_blocks=1 00:32:39.972 --rc geninfo_unexecuted_blocks=1 00:32:39.972 00:32:39.972 ' 00:32:39.972 07:28:44 nvmf_identify_passthru -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:32:39.972 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:39.972 --rc genhtml_branch_coverage=1 00:32:39.972 --rc genhtml_function_coverage=1 00:32:39.972 --rc genhtml_legend=1 00:32:39.972 --rc geninfo_all_blocks=1 00:32:39.972 --rc geninfo_unexecuted_blocks=1 00:32:39.972 00:32:39.972 ' 00:32:39.972 07:28:44 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:39.972 07:28:44 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:32:39.972 07:28:44 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:39.972 07:28:44 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:39.972 07:28:44 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:39.972 07:28:44 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:39.972 07:28:44 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:39.972 07:28:44 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:39.972 07:28:44 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:39.972 07:28:44 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:39.972 07:28:44 nvmf_identify_passthru -- 
nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:39.972 07:28:44 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:39.972 07:28:44 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:32:39.972 07:28:44 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:32:39.972 07:28:44 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:39.972 07:28:44 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:39.972 07:28:44 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:39.972 07:28:44 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:39.972 07:28:44 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:39.972 07:28:44 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:32:39.972 07:28:44 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:39.972 07:28:44 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:39.972 07:28:44 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:39.972 07:28:44 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:39.972 07:28:44 nvmf_identify_passthru -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:39.972 07:28:44 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:39.972 07:28:44 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:32:39.972 07:28:44 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:39.972 07:28:44 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:32:39.972 07:28:44 nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:39.972 07:28:44 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:39.972 07:28:44 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:39.972 07:28:44 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:39.972 07:28:44 nvmf_identify_passthru -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:32:39.972 07:28:44 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:39.973 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:39.973 07:28:44 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:39.973 07:28:44 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:39.973 07:28:44 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:39.973 07:28:44 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:39.973 07:28:44 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:32:39.973 07:28:44 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:39.973 07:28:44 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:39.973 07:28:44 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:39.973 07:28:44 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:39.973 07:28:44 nvmf_identify_passthru -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:39.973 07:28:44 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:39.973 07:28:44 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:32:39.973 07:28:44 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:39.973 07:28:44 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:32:39.973 07:28:44 nvmf_identify_passthru -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:39.973 07:28:44 nvmf_identify_passthru -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:39.973 07:28:44 nvmf_identify_passthru -- nvmf/common.sh@476 -- 
# prepare_net_devs 00:32:39.973 07:28:44 nvmf_identify_passthru -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:39.973 07:28:44 nvmf_identify_passthru -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:39.973 07:28:44 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:39.973 07:28:44 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:32:39.973 07:28:44 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:39.973 07:28:44 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:39.973 07:28:44 nvmf_identify_passthru -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:39.973 07:28:44 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:32:39.973 07:28:44 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:46.545 07:28:49 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:46.545 07:28:49 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:32:46.545 07:28:49 nvmf_identify_passthru -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:46.545 07:28:49 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:46.545 07:28:49 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:46.545 07:28:49 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:46.545 07:28:49 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:46.545 07:28:49 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:32:46.545 07:28:49 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:46.545 07:28:49 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:32:46.545 07:28:49 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:32:46.545 07:28:49 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:32:46.545 07:28:49 
nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:32:46.545 07:28:49 nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:32:46.545 07:28:49 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:32:46.545 07:28:49 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:46.545 07:28:49 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:46.545 07:28:49 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:46.545 07:28:49 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:46.545 07:28:49 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:46.545 07:28:49 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:46.545 07:28:49 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:46.545 07:28:49 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:46.545 07:28:49 nvmf_identify_passthru -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:46.545 07:28:49 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:46.545 07:28:49 nvmf_identify_passthru -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:46.545 07:28:49 nvmf_identify_passthru -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:46.545 07:28:49 nvmf_identify_passthru -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:46.545 07:28:49 nvmf_identify_passthru -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:46.545 07:28:49 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:46.545 07:28:49 nvmf_identify_passthru -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:46.545 
07:28:49 nvmf_identify_passthru -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:46.545 07:28:49 nvmf_identify_passthru -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:46.545 07:28:49 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:46.545 07:28:49 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:32:46.545 Found 0000:86:00.0 (0x8086 - 0x159b) 00:32:46.545 07:28:49 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:46.545 07:28:49 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:46.545 07:28:49 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:46.545 07:28:49 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:46.545 07:28:49 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:46.545 07:28:49 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:46.545 07:28:49 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:32:46.545 Found 0000:86:00.1 (0x8086 - 0x159b) 00:32:46.545 07:28:49 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:46.545 07:28:49 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:46.545 07:28:49 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:46.545 07:28:49 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:46.545 07:28:49 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:46.545 07:28:49 nvmf_identify_passthru -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:46.545 07:28:49 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:46.545 07:28:50 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:46.545 07:28:50 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for 
pci in "${pci_devs[@]}" 00:32:46.545 07:28:50 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:46.545 07:28:50 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:46.545 07:28:50 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:46.545 07:28:50 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:46.545 07:28:50 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:46.545 07:28:50 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:46.545 07:28:50 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:32:46.545 Found net devices under 0000:86:00.0: cvl_0_0 00:32:46.545 07:28:50 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:46.545 07:28:50 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:46.545 07:28:50 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:46.545 07:28:50 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:46.545 07:28:50 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:46.545 07:28:50 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:46.545 07:28:50 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:46.545 07:28:50 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:46.545 07:28:50 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:32:46.545 Found net devices under 0000:86:00.1: cvl_0_1 00:32:46.545 07:28:50 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:46.545 07:28:50 nvmf_identify_passthru -- nvmf/common.sh@432 -- # (( 2 == 0 )) 
00:32:46.545 07:28:50 nvmf_identify_passthru -- nvmf/common.sh@442 -- # is_hw=yes 00:32:46.545 07:28:50 nvmf_identify_passthru -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:46.545 07:28:50 nvmf_identify_passthru -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:46.545 07:28:50 nvmf_identify_passthru -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:46.545 07:28:50 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:46.545 07:28:50 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:46.545 07:28:50 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:46.545 07:28:50 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:46.545 07:28:50 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:46.545 07:28:50 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:46.545 07:28:50 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:46.545 07:28:50 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:46.545 07:28:50 nvmf_identify_passthru -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:46.545 07:28:50 nvmf_identify_passthru -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:46.545 07:28:50 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:46.545 07:28:50 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:46.545 07:28:50 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:46.545 07:28:50 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:46.545 07:28:50 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:46.545 07:28:50 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev 
cvl_0_1 00:32:46.545 07:28:50 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:46.545 07:28:50 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:46.545 07:28:50 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:46.545 07:28:50 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:46.545 07:28:50 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:46.545 07:28:50 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:46.545 07:28:50 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:46.545 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:46.545 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.359 ms 00:32:46.545 00:32:46.545 --- 10.0.0.2 ping statistics --- 00:32:46.545 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:46.545 rtt min/avg/max/mdev = 0.359/0.359/0.359/0.000 ms 00:32:46.545 07:28:50 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:46.545 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:46.545 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.211 ms 00:32:46.545 00:32:46.545 --- 10.0.0.1 ping statistics --- 00:32:46.545 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:46.545 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:32:46.545 07:28:50 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:46.545 07:28:50 nvmf_identify_passthru -- nvmf/common.sh@450 -- # return 0 00:32:46.545 07:28:50 nvmf_identify_passthru -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:46.545 07:28:50 nvmf_identify_passthru -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:46.545 07:28:50 nvmf_identify_passthru -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:46.545 07:28:50 nvmf_identify_passthru -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:46.545 07:28:50 nvmf_identify_passthru -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:46.546 07:28:50 nvmf_identify_passthru -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:46.546 07:28:50 nvmf_identify_passthru -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:46.546 07:28:50 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:32:46.546 07:28:50 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:46.546 07:28:50 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:46.546 07:28:50 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:32:46.546 07:28:50 nvmf_identify_passthru -- common/autotest_common.sh@1507 -- # bdfs=() 00:32:46.546 07:28:50 nvmf_identify_passthru -- common/autotest_common.sh@1507 -- # local bdfs 00:32:46.546 07:28:50 nvmf_identify_passthru -- common/autotest_common.sh@1508 -- # bdfs=($(get_nvme_bdfs)) 00:32:46.546 07:28:50 nvmf_identify_passthru -- common/autotest_common.sh@1508 -- # get_nvme_bdfs 00:32:46.546 07:28:50 nvmf_identify_passthru -- 
common/autotest_common.sh@1496 -- # bdfs=() 00:32:46.546 07:28:50 nvmf_identify_passthru -- common/autotest_common.sh@1496 -- # local bdfs 00:32:46.546 07:28:50 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:32:46.546 07:28:50 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:32:46.546 07:28:50 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:32:46.546 07:28:50 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:32:46.546 07:28:50 nvmf_identify_passthru -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:5e:00.0 00:32:46.546 07:28:50 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # echo 0000:5e:00.0 00:32:46.546 07:28:50 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:5e:00.0 00:32:46.546 07:28:50 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:5e:00.0 ']' 00:32:46.546 07:28:50 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:32:46.546 07:28:50 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:32:46.546 07:28:50 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:32:50.835 07:28:54 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=BTLJ72430F0E1P0FGN 00:32:50.835 07:28:54 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:32:50.835 07:28:54 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:32:50.835 07:28:54 nvmf_identify_passthru -- 
target/identify_passthru.sh@24 -- # awk '{print $3}' 00:32:54.121 07:28:58 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:32:54.121 07:28:58 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:32:54.121 07:28:58 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:54.121 07:28:58 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:54.380 07:28:58 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:32:54.380 07:28:58 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:54.380 07:28:58 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:54.380 07:28:58 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=1440151 00:32:54.380 07:28:58 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:32:54.380 07:28:58 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:54.380 07:28:58 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 1440151 00:32:54.380 07:28:58 nvmf_identify_passthru -- common/autotest_common.sh@833 -- # '[' -z 1440151 ']' 00:32:54.380 07:28:58 nvmf_identify_passthru -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:54.380 07:28:58 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # local max_retries=100 00:32:54.380 07:28:58 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:54.380 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:32:54.380 07:28:58 nvmf_identify_passthru -- common/autotest_common.sh@842 -- # xtrace_disable 00:32:54.380 07:28:58 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:54.380 [2024-11-20 07:28:58.738754] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 00:32:54.380 [2024-11-20 07:28:58.738803] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:54.380 [2024-11-20 07:28:58.799762] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:54.380 [2024-11-20 07:28:58.843569] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:54.380 [2024-11-20 07:28:58.843608] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:54.380 [2024-11-20 07:28:58.843616] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:54.380 [2024-11-20 07:28:58.843622] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:54.380 [2024-11-20 07:28:58.843627] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:32:54.380 [2024-11-20 07:28:58.845258] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:54.380 [2024-11-20 07:28:58.845368] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:54.380 [2024-11-20 07:28:58.845473] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:54.380 [2024-11-20 07:28:58.845475] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:32:54.380 07:28:58 nvmf_identify_passthru -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:32:54.380 07:28:58 nvmf_identify_passthru -- common/autotest_common.sh@866 -- # return 0 00:32:54.380 07:28:58 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:32:54.380 07:28:58 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:54.380 07:28:58 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:54.380 INFO: Log level set to 20 00:32:54.380 INFO: Requests: 00:32:54.380 { 00:32:54.380 "jsonrpc": "2.0", 00:32:54.380 "method": "nvmf_set_config", 00:32:54.380 "id": 1, 00:32:54.380 "params": { 00:32:54.380 "admin_cmd_passthru": { 00:32:54.380 "identify_ctrlr": true 00:32:54.380 } 00:32:54.380 } 00:32:54.380 } 00:32:54.380 00:32:54.380 INFO: response: 00:32:54.380 { 00:32:54.380 "jsonrpc": "2.0", 00:32:54.380 "id": 1, 00:32:54.380 "result": true 00:32:54.380 } 00:32:54.380 00:32:54.380 07:28:58 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:54.380 07:28:58 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:32:54.380 07:28:58 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:54.380 07:28:58 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:54.380 INFO: Setting log level to 20 00:32:54.380 INFO: Setting log level to 20 00:32:54.380 INFO: Log level set to 20 00:32:54.380 INFO: Log level set to 20 00:32:54.380 
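For anyone replaying this step by hand, the `nvmf_set_config` request/response pair captured above is plain JSON-RPC. A minimal sketch of that payload follows; the `rpc.py` invocation in the trailing comment and the `/var/tmp/spdk.sock` socket path are assumptions for illustration — the log itself only shows the wrapped `rpc_cmd` helper:

```shell
# Sketch of the JSON-RPC request captured above (nvmf_set_config with
# passthru identify_ctrlr enabled). Only the payload shape comes from the
# log; the transport/socket details are assumptions.
request=$(cat <<'EOF'
{
  "jsonrpc": "2.0",
  "method": "nvmf_set_config",
  "id": 1,
  "params": {
    "admin_cmd_passthru": {
      "identify_ctrlr": true
    }
  }
}
EOF
)

# Validate that the payload is well-formed JSON before sending it.
echo "$request" | python3 -m json.tool > /dev/null && echo "valid JSON"

# Against a live target this would typically go through SPDK's rpc.py
# helper, e.g. (hypothetical invocation, not shown in the log):
#   scripts/rpc.py -s /var/tmp/spdk.sock nvmf_set_config --passthru-identify-ctrlr
```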
INFO: Requests: 00:32:54.380 { 00:32:54.380 "jsonrpc": "2.0", 00:32:54.380 "method": "framework_start_init", 00:32:54.380 "id": 1 00:32:54.380 } 00:32:54.380 00:32:54.380 INFO: Requests: 00:32:54.380 { 00:32:54.380 "jsonrpc": "2.0", 00:32:54.380 "method": "framework_start_init", 00:32:54.380 "id": 1 00:32:54.380 } 00:32:54.380 00:32:54.638 [2024-11-20 07:28:58.967487] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:32:54.638 INFO: response: 00:32:54.638 { 00:32:54.638 "jsonrpc": "2.0", 00:32:54.638 "id": 1, 00:32:54.638 "result": true 00:32:54.638 } 00:32:54.638 00:32:54.638 INFO: response: 00:32:54.638 { 00:32:54.638 "jsonrpc": "2.0", 00:32:54.638 "id": 1, 00:32:54.638 "result": true 00:32:54.638 } 00:32:54.638 00:32:54.638 07:28:58 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:54.638 07:28:58 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:54.638 07:28:58 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:54.638 07:28:58 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:54.638 INFO: Setting log level to 40 00:32:54.638 INFO: Setting log level to 40 00:32:54.638 INFO: Setting log level to 40 00:32:54.638 [2024-11-20 07:28:58.980837] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:54.638 07:28:58 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:54.638 07:28:58 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:32:54.638 07:28:58 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:54.639 07:28:58 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:54.639 07:28:59 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:5e:00.0 00:32:54.639 07:28:59 
nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:54.639 07:28:59 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:57.920 Nvme0n1 00:32:57.920 07:29:01 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:57.920 07:29:01 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:32:57.920 07:29:01 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:57.920 07:29:01 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:57.920 07:29:01 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:57.920 07:29:01 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:32:57.920 07:29:01 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:57.920 07:29:01 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:57.920 07:29:01 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:57.920 07:29:01 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:57.920 07:29:01 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:57.920 07:29:01 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:57.920 [2024-11-20 07:29:01.890613] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:57.920 07:29:01 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:57.920 07:29:01 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:32:57.920 07:29:01 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:57.920 07:29:01 
nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:57.920 [ 00:32:57.920 { 00:32:57.920 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:32:57.920 "subtype": "Discovery", 00:32:57.920 "listen_addresses": [], 00:32:57.920 "allow_any_host": true, 00:32:57.920 "hosts": [] 00:32:57.920 }, 00:32:57.920 { 00:32:57.920 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:32:57.920 "subtype": "NVMe", 00:32:57.920 "listen_addresses": [ 00:32:57.920 { 00:32:57.920 "trtype": "TCP", 00:32:57.920 "adrfam": "IPv4", 00:32:57.920 "traddr": "10.0.0.2", 00:32:57.920 "trsvcid": "4420" 00:32:57.920 } 00:32:57.920 ], 00:32:57.920 "allow_any_host": true, 00:32:57.920 "hosts": [], 00:32:57.920 "serial_number": "SPDK00000000000001", 00:32:57.920 "model_number": "SPDK bdev Controller", 00:32:57.920 "max_namespaces": 1, 00:32:57.920 "min_cntlid": 1, 00:32:57.920 "max_cntlid": 65519, 00:32:57.920 "namespaces": [ 00:32:57.920 { 00:32:57.920 "nsid": 1, 00:32:57.920 "bdev_name": "Nvme0n1", 00:32:57.920 "name": "Nvme0n1", 00:32:57.920 "nguid": "AA2A4867D96B48F3A4B0E89B37EDB0F8", 00:32:57.920 "uuid": "aa2a4867-d96b-48f3-a4b0-e89b37edb0f8" 00:32:57.920 } 00:32:57.920 ] 00:32:57.920 } 00:32:57.920 ] 00:32:57.920 07:29:01 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:57.920 07:29:01 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:32:57.920 07:29:01 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:32:57.920 07:29:01 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:32:57.920 07:29:02 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=BTLJ72430F0E1P0FGN 00:32:57.920 07:29:02 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:32:57.920 07:29:02 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:32:57.920 07:29:02 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:32:57.920 07:29:02 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:32:57.920 07:29:02 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' BTLJ72430F0E1P0FGN '!=' BTLJ72430F0E1P0FGN ']' 00:32:57.920 07:29:02 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:32:57.920 07:29:02 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:57.920 07:29:02 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:57.920 07:29:02 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:57.920 07:29:02 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:57.920 07:29:02 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:32:57.920 07:29:02 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:32:57.920 07:29:02 nvmf_identify_passthru -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:57.920 07:29:02 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:32:57.920 07:29:02 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:57.920 07:29:02 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:32:57.920 07:29:02 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:57.920 07:29:02 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:57.920 rmmod nvme_tcp 00:32:57.920 rmmod nvme_fabrics 00:32:57.920 rmmod nvme_keyring 00:32:57.920 07:29:02 
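The serial/model checks above (`'[' BTLJ72430F0E1P0FGN '!=' ... ']'`) are driven by nothing more than grep/awk over `spdk_nvme_identify` output. A self-contained sketch of that pipeline, using a fabricated two-line stand-in for the identify output (only the grep/awk plumbing mirrors the log):

```shell
# Stand-in for the spdk_nvme_identify output parsed in the test above.
# The sample text is fabricated for illustration; the pipeline matches
# the one traced in the log.
identify_output='Serial Number: BTLJ72430F0E1P0FGN
Model Number: INTEL SSDPE2KX010T8'

serial=$(echo "$identify_output" | grep 'Serial Number:' | awk '{print $3}')
model=$(echo "$identify_output" | grep 'Model Number:' | awk '{print $3}')

echo "serial=$serial model=$model"

# The test only fails when the values read back over NVMe/TCP diverge
# from the ones read directly from the local controller:
[ "$serial" != "BTLJ72430F0E1P0FGN" ] && echo "serial mismatch"
[ "$model" != "INTEL" ] && echo "model mismatch"
exit 0
```

Note that `awk '{print $3}'` keeps only the first whitespace-delimited token after the label, which is why the model number compares as just `INTEL` in the trace.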
nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:57.920 07:29:02 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:32:57.920 07:29:02 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:32:57.920 07:29:02 nvmf_identify_passthru -- nvmf/common.sh@517 -- # '[' -n 1440151 ']' 00:32:57.920 07:29:02 nvmf_identify_passthru -- nvmf/common.sh@518 -- # killprocess 1440151 00:32:57.920 07:29:02 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # '[' -z 1440151 ']' 00:32:57.920 07:29:02 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # kill -0 1440151 00:32:57.920 07:29:02 nvmf_identify_passthru -- common/autotest_common.sh@957 -- # uname 00:32:57.920 07:29:02 nvmf_identify_passthru -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:32:57.920 07:29:02 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1440151 00:32:57.920 07:29:02 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:32:57.920 07:29:02 nvmf_identify_passthru -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:32:57.920 07:29:02 nvmf_identify_passthru -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1440151' 00:32:57.920 killing process with pid 1440151 00:32:57.920 07:29:02 nvmf_identify_passthru -- common/autotest_common.sh@971 -- # kill 1440151 00:32:57.920 07:29:02 nvmf_identify_passthru -- common/autotest_common.sh@976 -- # wait 1440151 00:32:59.296 07:29:03 nvmf_identify_passthru -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:59.296 07:29:03 nvmf_identify_passthru -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:59.296 07:29:03 nvmf_identify_passthru -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:59.296 07:29:03 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:32:59.296 07:29:03 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-save 00:32:59.296 07:29:03 nvmf_identify_passthru -- 
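The `killprocess 1440151` sequence above checks the process name (`ps --no-headers -o comm=`) before sending the kill, so a stale pid can never take out an unrelated process. A rough standalone approximation of that pattern, using a throwaway `sleep` as the target (the real helper in `autotest_common.sh` additionally special-cases `sudo`-wrapped processes and Linux vs. other platforms):

```shell
# Rough approximation of the killprocess pattern from the log: confirm the
# pid still names the expected process, then kill and reap it. A throwaway
# 'sleep' stands in for the nvmf_tgt reactor process.
sleep 60 &
pid=$!

process_name=$(ps --no-headers -o comm= "$pid")
if [ "$process_name" = "sleep" ]; then
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null
fi

# After the kill + wait, signalling the pid should fail:
kill -0 "$pid" 2>/dev/null || echo "process $pid terminated"
```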
nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:59.296 07:29:03 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-restore 00:32:59.296 07:29:03 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:59.296 07:29:03 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:59.296 07:29:03 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:59.296 07:29:03 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:32:59.296 07:29:03 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:01.834 07:29:05 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:01.834 00:33:01.834 real 0m21.734s 00:33:01.834 user 0m26.490s 00:33:01.834 sys 0m6.182s 00:33:01.834 07:29:05 nvmf_identify_passthru -- common/autotest_common.sh@1128 -- # xtrace_disable 00:33:01.834 07:29:05 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:01.834 ************************************ 00:33:01.834 END TEST nvmf_identify_passthru 00:33:01.834 ************************************ 00:33:01.834 07:29:05 -- spdk/autotest.sh@285 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:33:01.834 07:29:05 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:33:01.834 07:29:05 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:33:01.834 07:29:05 -- common/autotest_common.sh@10 -- # set +x 00:33:01.834 ************************************ 00:33:01.834 START TEST nvmf_dif 00:33:01.834 ************************************ 00:33:01.834 07:29:05 nvmf_dif -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:33:01.834 * Looking for test storage... 
00:33:01.834 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:01.834 07:29:06 nvmf_dif -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:33:01.834 07:29:06 nvmf_dif -- common/autotest_common.sh@1691 -- # lcov --version 00:33:01.834 07:29:06 nvmf_dif -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:33:01.834 07:29:06 nvmf_dif -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:33:01.834 07:29:06 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:01.834 07:29:06 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:01.834 07:29:06 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:01.834 07:29:06 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:33:01.834 07:29:06 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:33:01.834 07:29:06 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:33:01.834 07:29:06 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:33:01.834 07:29:06 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:33:01.834 07:29:06 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:33:01.834 07:29:06 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:33:01.834 07:29:06 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:01.834 07:29:06 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:33:01.834 07:29:06 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:33:01.834 07:29:06 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:01.834 07:29:06 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:01.834 07:29:06 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:33:01.834 07:29:06 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:33:01.834 07:29:06 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:01.834 07:29:06 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:33:01.834 07:29:06 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:33:01.834 07:29:06 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:33:01.834 07:29:06 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:33:01.834 07:29:06 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:01.834 07:29:06 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:33:01.834 07:29:06 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:33:01.834 07:29:06 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:01.834 07:29:06 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:01.834 07:29:06 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:33:01.834 07:29:06 nvmf_dif -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:01.834 07:29:06 nvmf_dif -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:33:01.834 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:01.834 --rc genhtml_branch_coverage=1 00:33:01.834 --rc genhtml_function_coverage=1 00:33:01.834 --rc genhtml_legend=1 00:33:01.834 --rc geninfo_all_blocks=1 00:33:01.834 --rc geninfo_unexecuted_blocks=1 00:33:01.834 00:33:01.834 ' 00:33:01.834 07:29:06 nvmf_dif -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:33:01.834 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:01.834 --rc genhtml_branch_coverage=1 00:33:01.834 --rc genhtml_function_coverage=1 00:33:01.834 --rc genhtml_legend=1 00:33:01.834 --rc geninfo_all_blocks=1 00:33:01.834 --rc geninfo_unexecuted_blocks=1 00:33:01.834 00:33:01.834 ' 00:33:01.834 07:29:06 nvmf_dif -- common/autotest_common.sh@1705 -- # export 
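The lcov gate traced above (`lt 1.15 2` via `cmp_versions` in `scripts/common.sh`) splits dotted versions on `.` and compares field by field, padding the shorter version with zeros. A simplified standalone sketch of the same idea — numeric dotted versions only; the real helper also splits on `-` and handles more operators:

```shell
# Minimal bash sketch of dotted-version "less than", in the spirit of the
# cmp_versions/lt helpers exercised above. Simplification: purely numeric
# fields, missing fields treated as 0, no pre-release handling.
version_lt() {
    local IFS=.
    local -a a b
    read -ra a <<< "$1"
    read -ra b <<< "$2"
    local i max=${#a[@]}
    (( ${#b[@]} > max )) && max=${#b[@]}
    for (( i = 0; i < max; i++ )); do
        local x=${a[i]:-0} y=${b[i]:-0}
        (( x < y )) && return 0
        (( x > y )) && return 1
    done
    return 1  # equal versions are not less-than
}

version_lt 1.15 2   && echo "1.15 < 2"     # the case from the log
version_lt 2.1 2.0  || echo "2.1 >= 2.0"
```

Comparing field by field (rather than lexically) is what makes `1.15 < 2` come out true even though the string `"1.15"` sorts after `"1."` prefixes lexically.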
'LCOV=lcov 00:33:01.834 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:01.834 --rc genhtml_branch_coverage=1 00:33:01.834 --rc genhtml_function_coverage=1 00:33:01.834 --rc genhtml_legend=1 00:33:01.834 --rc geninfo_all_blocks=1 00:33:01.834 --rc geninfo_unexecuted_blocks=1 00:33:01.834 00:33:01.834 ' 00:33:01.834 07:29:06 nvmf_dif -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:33:01.834 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:01.834 --rc genhtml_branch_coverage=1 00:33:01.834 --rc genhtml_function_coverage=1 00:33:01.834 --rc genhtml_legend=1 00:33:01.834 --rc geninfo_all_blocks=1 00:33:01.834 --rc geninfo_unexecuted_blocks=1 00:33:01.834 00:33:01.834 ' 00:33:01.834 07:29:06 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:01.834 07:29:06 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:33:01.834 07:29:06 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:01.835 07:29:06 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:01.835 07:29:06 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:01.835 07:29:06 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:01.835 07:29:06 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:01.835 07:29:06 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:01.835 07:29:06 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:01.835 07:29:06 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:01.835 07:29:06 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:01.835 07:29:06 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:01.835 07:29:06 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:33:01.835 07:29:06 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:33:01.835 07:29:06 nvmf_dif -- 
nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:01.835 07:29:06 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:01.835 07:29:06 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:01.835 07:29:06 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:01.835 07:29:06 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:01.835 07:29:06 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:33:01.835 07:29:06 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:01.835 07:29:06 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:01.835 07:29:06 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:01.835 07:29:06 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:01.835 07:29:06 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:01.835 07:29:06 nvmf_dif -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:01.835 07:29:06 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:33:01.835 07:29:06 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:01.835 07:29:06 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:33:01.835 07:29:06 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:01.835 07:29:06 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:01.835 07:29:06 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:01.835 07:29:06 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:01.835 07:29:06 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:01.835 07:29:06 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:01.835 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:01.835 07:29:06 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:01.835 07:29:06 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:01.835 07:29:06 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:01.835 07:29:06 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:33:01.835 07:29:06 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 
00:33:01.835 07:29:06 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:33:01.835 07:29:06 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:33:01.835 07:29:06 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:33:01.835 07:29:06 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:01.835 07:29:06 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:01.835 07:29:06 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:01.835 07:29:06 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:01.835 07:29:06 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:01.835 07:29:06 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:01.835 07:29:06 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:33:01.835 07:29:06 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:01.835 07:29:06 nvmf_dif -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:01.835 07:29:06 nvmf_dif -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:01.835 07:29:06 nvmf_dif -- nvmf/common.sh@309 -- # xtrace_disable 00:33:01.835 07:29:06 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:08.407 07:29:11 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:08.407 07:29:11 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:33:08.407 07:29:11 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:08.407 07:29:11 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:08.407 07:29:11 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:08.408 07:29:11 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:08.408 07:29:11 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:08.408 07:29:11 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:33:08.408 07:29:11 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:08.408 07:29:11 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:33:08.408 07:29:11 nvmf_dif 
-- nvmf/common.sh@320 -- # local -ga e810 00:33:08.408 07:29:11 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:33:08.408 07:29:11 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:33:08.408 07:29:11 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:33:08.408 07:29:11 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:33:08.408 07:29:11 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:08.408 07:29:11 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:08.408 07:29:11 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:08.408 07:29:11 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:08.408 07:29:11 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:08.408 07:29:11 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:08.408 07:29:11 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:08.408 07:29:11 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:08.408 07:29:11 nvmf_dif -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:08.408 07:29:11 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:08.408 07:29:11 nvmf_dif -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:08.408 07:29:11 nvmf_dif -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:08.408 07:29:11 nvmf_dif -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:08.408 07:29:11 nvmf_dif -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:08.408 07:29:11 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:08.408 07:29:11 nvmf_dif -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:08.408 07:29:11 nvmf_dif -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:08.408 07:29:11 nvmf_dif -- nvmf/common.sh@361 -- # (( 2 == 0 
)) 00:33:08.408 07:29:11 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:08.408 07:29:11 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:33:08.408 Found 0000:86:00.0 (0x8086 - 0x159b) 00:33:08.408 07:29:11 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:08.408 07:29:11 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:08.408 07:29:11 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:08.408 07:29:11 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:08.408 07:29:11 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:08.408 07:29:11 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:08.408 07:29:11 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:33:08.408 Found 0000:86:00.1 (0x8086 - 0x159b) 00:33:08.408 07:29:11 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:08.408 07:29:11 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:08.408 07:29:11 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:08.408 07:29:11 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:08.408 07:29:11 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:08.408 07:29:11 nvmf_dif -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:08.408 07:29:11 nvmf_dif -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:08.408 07:29:11 nvmf_dif -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:08.408 07:29:11 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:08.408 07:29:11 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:08.408 07:29:11 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:08.408 07:29:11 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:08.408 07:29:11 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:08.408 07:29:11 nvmf_dif -- 
nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:08.408 07:29:11 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:08.408 07:29:11 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:33:08.408 Found net devices under 0000:86:00.0: cvl_0_0 00:33:08.408 07:29:11 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:08.408 07:29:11 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:08.408 07:29:11 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:08.408 07:29:11 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:08.408 07:29:11 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:08.408 07:29:11 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:08.408 07:29:11 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:08.408 07:29:11 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:08.408 07:29:11 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:33:08.408 Found net devices under 0000:86:00.1: cvl_0_1 00:33:08.408 07:29:11 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:08.408 07:29:11 nvmf_dif -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:08.408 07:29:11 nvmf_dif -- nvmf/common.sh@442 -- # is_hw=yes 00:33:08.408 07:29:11 nvmf_dif -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:08.408 07:29:11 nvmf_dif -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:08.408 07:29:11 nvmf_dif -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:08.408 07:29:11 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:08.408 07:29:11 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:08.408 07:29:11 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:08.408 07:29:11 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:08.408 
07:29:11 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:08.408 07:29:11 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:08.408 07:29:11 nvmf_dif -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:08.408 07:29:11 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:08.408 07:29:11 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:08.408 07:29:11 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:08.408 07:29:11 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:08.408 07:29:11 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:08.408 07:29:11 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:08.408 07:29:11 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:08.408 07:29:11 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:08.408 07:29:11 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:08.408 07:29:11 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:08.408 07:29:11 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:08.408 07:29:11 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:08.408 07:29:11 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:08.408 07:29:11 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:08.408 07:29:11 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:08.408 07:29:11 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:08.408 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:33:08.408 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.481 ms 00:33:08.408 00:33:08.408 --- 10.0.0.2 ping statistics --- 00:33:08.408 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:08.408 rtt min/avg/max/mdev = 0.481/0.481/0.481/0.000 ms 00:33:08.408 07:29:12 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:08.408 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:08.408 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.197 ms 00:33:08.408 00:33:08.408 --- 10.0.0.1 ping statistics --- 00:33:08.408 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:08.408 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:33:08.408 07:29:12 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:08.408 07:29:12 nvmf_dif -- nvmf/common.sh@450 -- # return 0 00:33:08.408 07:29:12 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:33:08.408 07:29:12 nvmf_dif -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:33:10.309 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:33:10.309 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:33:10.309 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:33:10.309 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:33:10.309 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:33:10.309 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:33:10.309 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:33:10.309 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:33:10.309 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:33:10.309 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:33:10.309 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:33:10.309 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:33:10.309 0000:80:04.4 (8086 2021): Already 
using the vfio-pci driver 00:33:10.309 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:33:10.309 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:33:10.309 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:33:10.309 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:33:10.568 07:29:14 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:10.568 07:29:14 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:10.568 07:29:14 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:10.568 07:29:14 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:10.568 07:29:14 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:10.568 07:29:14 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:10.568 07:29:14 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:33:10.568 07:29:14 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:33:10.568 07:29:14 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:10.568 07:29:14 nvmf_dif -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:10.568 07:29:14 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:10.568 07:29:14 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=1445673 00:33:10.568 07:29:14 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 1445673 00:33:10.568 07:29:14 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:33:10.568 07:29:14 nvmf_dif -- common/autotest_common.sh@833 -- # '[' -z 1445673 ']' 00:33:10.568 07:29:14 nvmf_dif -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:10.568 07:29:14 nvmf_dif -- common/autotest_common.sh@838 -- # local max_retries=100 00:33:10.568 07:29:14 nvmf_dif -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:33:10.568 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:10.568 07:29:14 nvmf_dif -- common/autotest_common.sh@842 -- # xtrace_disable 00:33:10.568 07:29:14 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:10.568 [2024-11-20 07:29:14.992168] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 00:33:10.568 [2024-11-20 07:29:14.992219] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:10.568 [2024-11-20 07:29:15.072242] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:10.568 [2024-11-20 07:29:15.113788] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:10.568 [2024-11-20 07:29:15.113823] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:10.568 [2024-11-20 07:29:15.113831] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:10.568 [2024-11-20 07:29:15.113837] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:10.568 [2024-11-20 07:29:15.113842] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
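The `nvmf_tcp_init` sequence traced above (flush addresses, create the `cvl_0_0_ns_spdk` namespace, move the target interface into it, assign 10.0.0.1/10.0.0.2, open TCP port 4420, then verify with ping in both directions) can be summarized as a sketch. This is not the SPDK script itself, just a root-free helper that builds the same command strings seen in this run's log so the topology is easy to inspect; the interface names, IPs, and namespace name are copied from this log.

```python
# Builds (does not execute) the ip/iptables command sequence from the
# nvmf_tcp_init trace above. All names/addresses mirror this run's log.

def nvmf_tcp_init_cmds(target_if="cvl_0_0", initiator_if="cvl_0_1",
                       target_ip="10.0.0.2", initiator_ip="10.0.0.1",
                       ns="cvl_0_0_ns_spdk", port=4420):
    return [
        f"ip -4 addr flush {target_if}",
        f"ip -4 addr flush {initiator_if}",
        f"ip netns add {ns}",
        # the target-side interface is isolated inside the namespace
        f"ip link set {target_if} netns {ns}",
        f"ip addr add {initiator_ip}/24 dev {initiator_if}",
        f"ip netns exec {ns} ip addr add {target_ip}/24 dev {target_if}",
        f"ip link set {initiator_if} up",
        f"ip netns exec {ns} ip link set {target_if} up",
        f"ip netns exec {ns} ip link set lo up",
        # allow NVMe/TCP traffic in from the initiator side
        f"iptables -I INPUT 1 -i {initiator_if} -p tcp --dport {port} -j ACCEPT",
        # bidirectional reachability check, as in the ping output above
        f"ping -c 1 {target_ip}",
        f"ip netns exec {ns} ping -c 1 {initiator_ip}",
    ]

cmds = nvmf_tcp_init_cmds()
```

Running the real commands requires root and the physical `cvl_0_*` interfaces; the sketch only shows why the target app is later launched with `ip netns exec cvl_0_0_ns_spdk` as its `NVMF_TARGET_NS_CMD` prefix.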
00:33:10.568 [2024-11-20 07:29:15.114416] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:10.827 07:29:15 nvmf_dif -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:33:10.827 07:29:15 nvmf_dif -- common/autotest_common.sh@866 -- # return 0 00:33:10.827 07:29:15 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:10.827 07:29:15 nvmf_dif -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:10.827 07:29:15 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:10.827 07:29:15 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:10.827 07:29:15 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:33:10.827 07:29:15 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:33:10.827 07:29:15 nvmf_dif -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:10.827 07:29:15 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:10.827 [2024-11-20 07:29:15.251526] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:10.827 07:29:15 nvmf_dif -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:10.827 07:29:15 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:33:10.827 07:29:15 nvmf_dif -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:33:10.827 07:29:15 nvmf_dif -- common/autotest_common.sh@1109 -- # xtrace_disable 00:33:10.827 07:29:15 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:10.827 ************************************ 00:33:10.827 START TEST fio_dif_1_default 00:33:10.827 ************************************ 00:33:10.827 07:29:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1127 -- # fio_dif_1 00:33:10.827 07:29:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:33:10.827 07:29:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:33:10.827 07:29:15 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@30 -- # for sub in "$@" 00:33:10.827 07:29:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:33:10.827 07:29:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:33:10.827 07:29:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:33:10.827 07:29:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:10.827 07:29:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:10.827 bdev_null0 00:33:10.827 07:29:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:10.827 07:29:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:33:10.827 07:29:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:10.827 07:29:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:10.827 07:29:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:10.827 07:29:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:33:10.827 07:29:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:10.827 07:29:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:10.827 07:29:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:10.828 07:29:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:10.828 07:29:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:10.828 07:29:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:10.828 [2024-11-20 07:29:15.327856] 
tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:10.828 07:29:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:10.828 07:29:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:33:10.828 07:29:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:33:10.828 07:29:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:33:10.828 07:29:15 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:33:10.828 07:29:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:10.828 07:29:15 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:33:10.828 07:29:15 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:10.828 07:29:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1358 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:10.828 07:29:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:33:10.828 07:29:15 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:10.828 { 00:33:10.828 "params": { 00:33:10.828 "name": "Nvme$subsystem", 00:33:10.828 "trtype": "$TEST_TRANSPORT", 00:33:10.828 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:10.828 "adrfam": "ipv4", 00:33:10.828 "trsvcid": "$NVMF_PORT", 00:33:10.828 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:10.828 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:10.828 "hdgst": ${hdgst:-false}, 00:33:10.828 "ddgst": ${ddgst:-false} 00:33:10.828 }, 00:33:10.828 "method": "bdev_nvme_attach_controller" 00:33:10.828 } 00:33:10.828 EOF 00:33:10.828 )") 00:33:10.828 07:29:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 
00:33:10.828 07:29:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:33:10.828 07:29:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:10.828 07:29:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:33:10.828 07:29:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local sanitizers 00:33:10.828 07:29:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:10.828 07:29:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # shift 00:33:10.828 07:29:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # local asan_lib= 00:33:10.828 07:29:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:33:10.828 07:29:15 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:33:10.828 07:29:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:10.828 07:29:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:33:10.828 07:29:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # grep libasan 00:33:10.828 07:29:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:33:10.828 07:29:15 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:33:10.828 07:29:15 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 
00:33:10.828 07:29:15 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:33:10.828 07:29:15 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:10.828 "params": { 00:33:10.828 "name": "Nvme0", 00:33:10.828 "trtype": "tcp", 00:33:10.828 "traddr": "10.0.0.2", 00:33:10.828 "adrfam": "ipv4", 00:33:10.828 "trsvcid": "4420", 00:33:10.828 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:10.828 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:10.828 "hdgst": false, 00:33:10.828 "ddgst": false 00:33:10.828 }, 00:33:10.828 "method": "bdev_nvme_attach_controller" 00:33:10.828 }' 00:33:10.828 07:29:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # asan_lib= 00:33:10.828 07:29:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:33:10.828 07:29:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:33:10.828 07:29:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:10.828 07:29:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:33:10.828 07:29:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:33:11.111 07:29:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # asan_lib= 00:33:11.111 07:29:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:33:11.111 07:29:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:33:11.111 07:29:15 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:11.372 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:33:11.372 fio-3.35 
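The `gen_nvmf_target_json` output printed just above (the `bdev_nvme_attach_controller` params that fio reads via `--spdk_json_conf /dev/fd/62`) follows a simple per-subsystem template. A minimal sketch, reproducing only the controller entries shown in this log (the surrounding SPDK JSON-config wrapper is not visible here and is omitted); field values mirror this run (subsystem 0, TCP, 10.0.0.2:4420):

```python
# Sketch of the per-controller JSON fragments that gen_nvmf_target_json
# emits in the log above. Only the printed entries are modeled; the outer
# SPDK config wrapper is not shown in this log and is left out.
import json

def gen_nvmf_target_json(*subsystems,
                         traddr="10.0.0.2", trsvcid="4420", trtype="tcp"):
    configs = []
    for sub in subsystems:
        configs.append({
            "params": {
                "name": f"Nvme{sub}",
                "trtype": trtype,
                "traddr": traddr,
                "adrfam": "ipv4",
                "trsvcid": trsvcid,
                "subnqn": f"nqn.2016-06.io.spdk:cnode{sub}",
                "hostnqn": f"nqn.2016-06.io.spdk:host{sub}",
                "hdgst": False,   # header digest off, as in this run
                "ddgst": False,   # data digest off, as in this run
            },
            "method": "bdev_nvme_attach_controller",
        })
    return json.dumps(configs, indent=1)

conf = gen_nvmf_target_json(0)
```

The multi-subsystem test later in this section produces the same shape with two entries (`Nvme0`/`cnode0` and `Nvme1`/`cnode1`), one per null bdev exported over NVMe/TCP.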
00:33:11.372 Starting 1 thread 00:33:23.572 00:33:23.572 filename0: (groupid=0, jobs=1): err= 0: pid=1445895: Wed Nov 20 07:29:26 2024 00:33:23.572 read: IOPS=204, BW=816KiB/s (836kB/s)(8176KiB/10018msec) 00:33:23.572 slat (nsec): min=5949, max=27786, avg=6295.65, stdev=1086.62 00:33:23.572 clat (usec): min=376, max=42553, avg=19586.21, stdev=20320.87 00:33:23.572 lat (usec): min=382, max=42560, avg=19592.51, stdev=20320.81 00:33:23.572 clat percentiles (usec): 00:33:23.572 | 1.00th=[ 388], 5.00th=[ 396], 10.00th=[ 404], 20.00th=[ 412], 00:33:23.572 | 30.00th=[ 420], 40.00th=[ 478], 50.00th=[ 603], 60.00th=[40633], 00:33:23.572 | 70.00th=[40633], 80.00th=[41157], 90.00th=[41681], 95.00th=[41681], 00:33:23.572 | 99.00th=[41681], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:33:23.572 | 99.99th=[42730] 00:33:23.572 bw ( KiB/s): min= 768, max= 960, per=99.98%, avg=816.00, stdev=54.44, samples=20 00:33:23.572 iops : min= 192, max= 240, avg=204.00, stdev=13.61, samples=20 00:33:23.572 lat (usec) : 500=41.19%, 750=11.64% 00:33:23.572 lat (msec) : 10=0.20%, 50=46.97% 00:33:23.572 cpu : usr=92.43%, sys=7.32%, ctx=8, majf=0, minf=0 00:33:23.572 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:23.572 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:23.572 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:23.572 issued rwts: total=2044,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:23.572 latency : target=0, window=0, percentile=100.00%, depth=4 00:33:23.572 00:33:23.572 Run status group 0 (all jobs): 00:33:23.572 READ: bw=816KiB/s (836kB/s), 816KiB/s-816KiB/s (836kB/s-836kB/s), io=8176KiB (8372kB), run=10018-10018msec 00:33:23.572 07:29:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:33:23.572 07:29:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:33:23.572 07:29:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 
00:33:23.572 07:29:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:33:23.572 07:29:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:33:23.572 07:29:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:23.572 07:29:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:23.572 07:29:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:23.572 07:29:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:23.572 07:29:26 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:33:23.572 07:29:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:23.572 07:29:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:23.572 07:29:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:23.572 00:33:23.572 real 0m11.120s 00:33:23.572 user 0m16.196s 00:33:23.572 sys 0m1.044s 00:33:23.572 07:29:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1128 -- # xtrace_disable 00:33:23.572 07:29:26 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:23.572 ************************************ 00:33:23.572 END TEST fio_dif_1_default 00:33:23.572 ************************************ 00:33:23.572 07:29:26 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:33:23.572 07:29:26 nvmf_dif -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:33:23.572 07:29:26 nvmf_dif -- common/autotest_common.sh@1109 -- # xtrace_disable 00:33:23.572 07:29:26 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:23.572 ************************************ 00:33:23.572 START TEST fio_dif_1_multi_subsystems 00:33:23.572 ************************************ 00:33:23.572 07:29:26 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1127 -- # fio_dif_1_multi_subsystems 00:33:23.573 07:29:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:33:23.573 07:29:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:33:23.573 07:29:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:33:23.573 07:29:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:33:23.573 07:29:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:33:23.573 07:29:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:33:23.573 07:29:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:33:23.573 07:29:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:23.573 07:29:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:23.573 bdev_null0 00:33:23.573 07:29:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:23.573 07:29:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:33:23.573 07:29:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:23.573 07:29:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:23.573 07:29:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:23.573 07:29:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:33:23.573 07:29:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:23.573 07:29:26 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:23.573 07:29:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:23.573 07:29:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:23.573 07:29:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:23.573 07:29:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:23.573 [2024-11-20 07:29:26.516854] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:23.573 07:29:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:23.573 07:29:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:33:23.573 07:29:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:33:23.573 07:29:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:33:23.573 07:29:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:33:23.573 07:29:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:23.573 07:29:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:23.573 bdev_null1 00:33:23.573 07:29:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:23.573 07:29:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:33:23.573 07:29:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:23.573 07:29:26 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@10 -- # set +x 00:33:23.573 07:29:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:23.573 07:29:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:33:23.573 07:29:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:23.573 07:29:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:23.573 07:29:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:23.573 07:29:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:23.573 07:29:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:23.573 07:29:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:23.573 07:29:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:23.573 07:29:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:33:23.573 07:29:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:33:23.573 07:29:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:33:23.573 07:29:26 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:33:23.573 07:29:26 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:33:23.573 07:29:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:23.573 07:29:26 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:23.573 07:29:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1358 -- # 
fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:23.573 07:29:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:33:23.573 07:29:26 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:23.573 { 00:33:23.573 "params": { 00:33:23.573 "name": "Nvme$subsystem", 00:33:23.573 "trtype": "$TEST_TRANSPORT", 00:33:23.573 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:23.573 "adrfam": "ipv4", 00:33:23.573 "trsvcid": "$NVMF_PORT", 00:33:23.573 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:23.573 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:23.573 "hdgst": ${hdgst:-false}, 00:33:23.573 "ddgst": ${ddgst:-false} 00:33:23.573 }, 00:33:23.573 "method": "bdev_nvme_attach_controller" 00:33:23.573 } 00:33:23.573 EOF 00:33:23.573 )") 00:33:23.573 07:29:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:33:23.573 07:29:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:33:23.573 07:29:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:23.573 07:29:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:33:23.573 07:29:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local sanitizers 00:33:23.573 07:29:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:23.573 07:29:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # shift 00:33:23.573 07:29:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # local asan_lib= 00:33:23.573 07:29:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 
00:33:23.573 07:29:26 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:33:23.573 07:29:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:33:23.573 07:29:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:23.573 07:29:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:33:23.573 07:29:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # grep libasan 00:33:23.573 07:29:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:33:23.573 07:29:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:33:23.573 07:29:26 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:23.573 07:29:26 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:23.573 { 00:33:23.573 "params": { 00:33:23.573 "name": "Nvme$subsystem", 00:33:23.573 "trtype": "$TEST_TRANSPORT", 00:33:23.573 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:23.573 "adrfam": "ipv4", 00:33:23.573 "trsvcid": "$NVMF_PORT", 00:33:23.573 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:23.573 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:23.573 "hdgst": ${hdgst:-false}, 00:33:23.573 "ddgst": ${ddgst:-false} 00:33:23.573 }, 00:33:23.573 "method": "bdev_nvme_attach_controller" 00:33:23.573 } 00:33:23.573 EOF 00:33:23.573 )") 00:33:23.573 07:29:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:33:23.573 07:29:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:33:23.573 07:29:26 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:33:23.573 07:29:26 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 
00:33:23.573 07:29:26 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:33:23.573 07:29:26 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:23.573 "params": { 00:33:23.573 "name": "Nvme0", 00:33:23.573 "trtype": "tcp", 00:33:23.573 "traddr": "10.0.0.2", 00:33:23.573 "adrfam": "ipv4", 00:33:23.573 "trsvcid": "4420", 00:33:23.573 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:23.574 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:23.574 "hdgst": false, 00:33:23.574 "ddgst": false 00:33:23.574 }, 00:33:23.574 "method": "bdev_nvme_attach_controller" 00:33:23.574 },{ 00:33:23.574 "params": { 00:33:23.574 "name": "Nvme1", 00:33:23.574 "trtype": "tcp", 00:33:23.574 "traddr": "10.0.0.2", 00:33:23.574 "adrfam": "ipv4", 00:33:23.574 "trsvcid": "4420", 00:33:23.574 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:23.574 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:23.574 "hdgst": false, 00:33:23.574 "ddgst": false 00:33:23.574 }, 00:33:23.574 "method": "bdev_nvme_attach_controller" 00:33:23.574 }' 00:33:23.574 07:29:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # asan_lib= 00:33:23.574 07:29:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:33:23.574 07:29:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:33:23.574 07:29:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:33:23.574 07:29:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:23.574 07:29:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:33:23.574 07:29:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # asan_lib= 00:33:23.574 07:29:26 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:33:23.574 07:29:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:33:23.574 07:29:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:23.574 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:33:23.574 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:33:23.574 fio-3.35 00:33:23.574 Starting 2 threads 00:33:33.548 00:33:33.548 filename0: (groupid=0, jobs=1): err= 0: pid=1447856: Wed Nov 20 07:29:37 2024 00:33:33.548 read: IOPS=149, BW=597KiB/s (611kB/s)(5984KiB/10024msec) 00:33:33.548 slat (nsec): min=6139, max=36641, avg=7853.99, stdev=3283.38 00:33:33.548 clat (usec): min=381, max=42573, avg=26777.70, stdev=19565.06 00:33:33.548 lat (usec): min=387, max=42580, avg=26785.56, stdev=19565.25 00:33:33.548 clat percentiles (usec): 00:33:33.548 | 1.00th=[ 396], 5.00th=[ 408], 10.00th=[ 416], 20.00th=[ 429], 00:33:33.548 | 30.00th=[ 570], 40.00th=[40633], 50.00th=[40633], 60.00th=[41157], 00:33:33.548 | 70.00th=[41157], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:33:33.548 | 99.00th=[42206], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:33:33.548 | 99.99th=[42730] 00:33:33.548 bw ( KiB/s): min= 384, max= 896, per=60.40%, avg=596.80, stdev=215.86, samples=20 00:33:33.548 iops : min= 96, max= 224, avg=149.20, stdev=53.96, samples=20 00:33:33.548 lat (usec) : 500=27.67%, 750=7.62%, 1000=0.27% 00:33:33.548 lat (msec) : 50=64.44% 00:33:33.548 cpu : usr=97.11%, sys=2.62%, ctx=13, majf=0, minf=57 00:33:33.548 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:33.548 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:33:33.548 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:33.548 issued rwts: total=1496,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:33.548 latency : target=0, window=0, percentile=100.00%, depth=4 00:33:33.548 filename1: (groupid=0, jobs=1): err= 0: pid=1447857: Wed Nov 20 07:29:37 2024 00:33:33.548 read: IOPS=97, BW=391KiB/s (400kB/s)(3920KiB/10037msec) 00:33:33.548 slat (nsec): min=6075, max=42658, avg=9662.46, stdev=6503.82 00:33:33.548 clat (usec): min=425, max=42061, avg=40935.22, stdev=3703.00 00:33:33.548 lat (usec): min=431, max=42088, avg=40944.88, stdev=3702.93 00:33:33.548 clat percentiles (usec): 00:33:33.548 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:33:33.548 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:33:33.548 | 70.00th=[41157], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:33:33.548 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:33:33.548 | 99.99th=[42206] 00:33:33.548 bw ( KiB/s): min= 384, max= 416, per=39.52%, avg=390.40, stdev=13.13, samples=20 00:33:33.548 iops : min= 96, max= 104, avg=97.60, stdev= 3.28, samples=20 00:33:33.548 lat (usec) : 500=0.82% 00:33:33.548 lat (msec) : 50=99.18% 00:33:33.548 cpu : usr=97.93%, sys=1.80%, ctx=11, majf=0, minf=109 00:33:33.548 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:33.548 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:33.548 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:33.548 issued rwts: total=980,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:33.548 latency : target=0, window=0, percentile=100.00%, depth=4 00:33:33.548 00:33:33.548 Run status group 0 (all jobs): 00:33:33.548 READ: bw=987KiB/s (1010kB/s), 391KiB/s-597KiB/s (400kB/s-611kB/s), io=9904KiB (10.1MB), run=10024-10037msec 00:33:33.548 07:29:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # 
destroy_subsystems 0 1 00:33:33.548 07:29:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:33:33.548 07:29:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:33:33.548 07:29:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:33:33.548 07:29:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:33:33.548 07:29:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:33.548 07:29:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:33.548 07:29:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:33.548 07:29:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:33.548 07:29:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:33:33.548 07:29:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:33.548 07:29:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:33.548 07:29:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:33.548 07:29:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:33:33.548 07:29:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:33:33.548 07:29:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:33:33.548 07:29:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:33.548 07:29:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:33.548 07:29:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:33.548 07:29:38 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:33.548 07:29:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:33:33.548 07:29:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:33.548 07:29:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:33.548 07:29:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:33.548 00:33:33.548 real 0m11.553s 00:33:33.548 user 0m26.613s 00:33:33.548 sys 0m0.770s 00:33:33.548 07:29:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1128 -- # xtrace_disable 00:33:33.549 07:29:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:33.549 ************************************ 00:33:33.549 END TEST fio_dif_1_multi_subsystems 00:33:33.549 ************************************ 00:33:33.549 07:29:38 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:33:33.549 07:29:38 nvmf_dif -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:33:33.549 07:29:38 nvmf_dif -- common/autotest_common.sh@1109 -- # xtrace_disable 00:33:33.549 07:29:38 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:33.807 ************************************ 00:33:33.807 START TEST fio_dif_rand_params 00:33:33.807 ************************************ 00:33:33.807 07:29:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1127 -- # fio_dif_rand_params 00:33:33.807 07:29:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:33:33.807 07:29:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:33:33.807 07:29:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:33:33.807 07:29:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:33:33.807 07:29:38 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:33:33.807 07:29:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:33:33.807 07:29:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:33:33.807 07:29:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:33:33.807 07:29:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:33:33.807 07:29:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:33:33.807 07:29:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:33:33.807 07:29:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:33:33.807 07:29:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:33:33.807 07:29:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:33.807 07:29:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:33.807 bdev_null0 00:33:33.807 07:29:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:33.807 07:29:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:33:33.807 07:29:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:33.807 07:29:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:33.807 07:29:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:33.807 07:29:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:33:33.807 07:29:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:33.807 07:29:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 
00:33:33.807 07:29:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:33.807 07:29:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:33.807 07:29:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:33.807 07:29:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:33.807 [2024-11-20 07:29:38.147463] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:33.807 07:29:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:33.807 07:29:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:33:33.807 07:29:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:33:33.807 07:29:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:33:33.807 07:29:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:33:33.807 07:29:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:33.807 07:29:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:33:33.807 07:29:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:33.807 07:29:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1358 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:33.807 07:29:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:33:33.807 07:29:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:33.807 { 00:33:33.807 "params": { 00:33:33.807 "name": "Nvme$subsystem", 00:33:33.807 "trtype": "$TEST_TRANSPORT", 00:33:33.807 "traddr": 
"$NVMF_FIRST_TARGET_IP", 00:33:33.807 "adrfam": "ipv4", 00:33:33.807 "trsvcid": "$NVMF_PORT", 00:33:33.807 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:33.807 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:33.807 "hdgst": ${hdgst:-false}, 00:33:33.807 "ddgst": ${ddgst:-false} 00:33:33.807 }, 00:33:33.807 "method": "bdev_nvme_attach_controller" 00:33:33.807 } 00:33:33.807 EOF 00:33:33.807 )") 00:33:33.807 07:29:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:33:33.807 07:29:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:33:33.807 07:29:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:33.807 07:29:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:33:33.807 07:29:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local sanitizers 00:33:33.807 07:29:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:33.807 07:29:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # shift 00:33:33.807 07:29:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # local asan_lib= 00:33:33.807 07:29:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:33:33.807 07:29:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:33:33.807 07:29:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:33:33.807 07:29:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:33.807 07:29:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:33:33.807 07:29:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libasan 00:33:33.807 07:29:38 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:33:33.807 07:29:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:33:33.807 07:29:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:33:33.807 07:29:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:33.807 "params": { 00:33:33.807 "name": "Nvme0", 00:33:33.807 "trtype": "tcp", 00:33:33.807 "traddr": "10.0.0.2", 00:33:33.807 "adrfam": "ipv4", 00:33:33.807 "trsvcid": "4420", 00:33:33.807 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:33.807 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:33.807 "hdgst": false, 00:33:33.807 "ddgst": false 00:33:33.807 }, 00:33:33.807 "method": "bdev_nvme_attach_controller" 00:33:33.807 }' 00:33:33.807 07:29:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:33:33.807 07:29:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:33:33.807 07:29:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:33:33.807 07:29:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:33.807 07:29:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:33:33.807 07:29:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:33:33.807 07:29:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:33:33.807 07:29:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:33:33.807 07:29:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:33:33.807 07:29:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev 
--spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:34.065 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:33:34.065 ... 00:33:34.065 fio-3.35 00:33:34.065 Starting 3 threads 00:33:40.622 00:33:40.622 filename0: (groupid=0, jobs=1): err= 0: pid=1449818: Wed Nov 20 07:29:44 2024 00:33:40.622 read: IOPS=347, BW=43.4MiB/s (45.5MB/s)(219MiB/5045msec) 00:33:40.622 slat (nsec): min=6268, max=54547, avg=12990.78, stdev=6697.93 00:33:40.622 clat (usec): min=3325, max=52289, avg=8594.35, stdev=5098.15 00:33:40.622 lat (usec): min=3334, max=52311, avg=8607.34, stdev=5098.37 00:33:40.622 clat percentiles (usec): 00:33:40.622 | 1.00th=[ 3556], 5.00th=[ 5014], 10.00th=[ 5800], 20.00th=[ 6456], 00:33:40.622 | 30.00th=[ 7504], 40.00th=[ 8094], 50.00th=[ 8455], 60.00th=[ 8717], 00:33:40.622 | 70.00th=[ 8979], 80.00th=[ 9372], 90.00th=[ 9765], 95.00th=[10159], 00:33:40.622 | 99.00th=[47449], 99.50th=[49021], 99.90th=[50070], 99.95th=[52167], 00:33:40.622 | 99.99th=[52167] 00:33:40.622 bw ( KiB/s): min=42240, max=57600, per=37.36%, avg=44825.60, stdev=4657.43, samples=10 00:33:40.622 iops : min= 330, max= 450, avg=350.20, stdev=36.39, samples=10 00:33:40.622 lat (msec) : 4=2.22%, 10=90.64%, 20=5.65%, 50=1.43%, 100=0.06% 00:33:40.622 cpu : usr=96.00%, sys=3.67%, ctx=34, majf=0, minf=11 00:33:40.622 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:40.622 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:40.622 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:40.622 issued rwts: total=1753,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:40.622 latency : target=0, window=0, percentile=100.00%, depth=3 00:33:40.622 filename0: (groupid=0, jobs=1): err= 0: pid=1449819: Wed Nov 20 07:29:44 2024 00:33:40.622 read: IOPS=289, BW=36.2MiB/s (38.0MB/s)(183MiB/5043msec) 00:33:40.622 slat (nsec): min=6308, max=36537, avg=11596.92, stdev=3720.66 
00:33:40.622 clat (usec): min=3292, max=91227, avg=10305.46, stdev=8898.70 00:33:40.622 lat (usec): min=3302, max=91249, avg=10317.06, stdev=8898.85 00:33:40.622 clat percentiles (usec): 00:33:40.622 | 1.00th=[ 3982], 5.00th=[ 5932], 10.00th=[ 6718], 20.00th=[ 7767], 00:33:40.622 | 30.00th=[ 8160], 40.00th=[ 8455], 50.00th=[ 8717], 60.00th=[ 8848], 00:33:40.622 | 70.00th=[ 9110], 80.00th=[ 9372], 90.00th=[ 9765], 95.00th=[10814], 00:33:40.622 | 99.00th=[51119], 99.50th=[51119], 99.90th=[51643], 99.95th=[91751], 00:33:40.622 | 99.99th=[91751] 00:33:40.622 bw ( KiB/s): min=22272, max=45568, per=31.15%, avg=37376.00, stdev=7139.50, samples=10 00:33:40.622 iops : min= 174, max= 356, avg=292.00, stdev=55.78, samples=10 00:33:40.622 lat (msec) : 4=1.03%, 10=90.63%, 20=3.76%, 50=2.19%, 100=2.39% 00:33:40.622 cpu : usr=96.05%, sys=3.65%, ctx=9, majf=0, minf=9 00:33:40.622 IO depths : 1=0.8%, 2=99.2%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:40.622 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:40.622 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:40.622 issued rwts: total=1462,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:40.622 latency : target=0, window=0, percentile=100.00%, depth=3 00:33:40.622 filename0: (groupid=0, jobs=1): err= 0: pid=1449820: Wed Nov 20 07:29:44 2024 00:33:40.622 read: IOPS=300, BW=37.5MiB/s (39.3MB/s)(189MiB/5043msec) 00:33:40.622 slat (nsec): min=6318, max=39012, avg=12652.84, stdev=4066.40 00:33:40.622 clat (usec): min=2962, max=50725, avg=9950.02, stdev=5564.21 00:33:40.622 lat (usec): min=2969, max=50734, avg=9962.67, stdev=5564.53 00:33:40.622 clat percentiles (usec): 00:33:40.622 | 1.00th=[ 3752], 5.00th=[ 5538], 10.00th=[ 6063], 20.00th=[ 6783], 00:33:40.622 | 30.00th=[ 8455], 40.00th=[ 9372], 50.00th=[ 9765], 60.00th=[10421], 00:33:40.622 | 70.00th=[10814], 80.00th=[11338], 90.00th=[11994], 95.00th=[12387], 00:33:40.622 | 99.00th=[47973], 99.50th=[49546], 99.90th=[50594], 
99.95th=[50594], 00:33:40.622 | 99.99th=[50594] 00:33:40.622 bw ( KiB/s): min=35328, max=47872, per=32.26%, avg=38707.20, stdev=3785.19, samples=10 00:33:40.622 iops : min= 276, max= 374, avg=302.40, stdev=29.57, samples=10 00:33:40.622 lat (msec) : 4=1.92%, 10=52.31%, 20=44.06%, 50=1.39%, 100=0.33% 00:33:40.622 cpu : usr=95.74%, sys=3.97%, ctx=8, majf=0, minf=10 00:33:40.622 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:40.622 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:40.623 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:40.623 issued rwts: total=1514,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:40.623 latency : target=0, window=0, percentile=100.00%, depth=3 00:33:40.623 00:33:40.623 Run status group 0 (all jobs): 00:33:40.623 READ: bw=117MiB/s (123MB/s), 36.2MiB/s-43.4MiB/s (38.0MB/s-45.5MB/s), io=591MiB (620MB), run=5043-5045msec 00:33:40.623 07:29:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:33:40.623 07:29:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:33:40.623 07:29:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:33:40.623 07:29:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:33:40.623 07:29:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:33:40.623 07:29:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:40.623 07:29:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:40.623 07:29:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:40.623 07:29:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:40.623 07:29:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:33:40.623 07:29:44 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:40.623 07:29:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:40.623 07:29:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:40.623 07:29:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:33:40.623 07:29:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:33:40.623 07:29:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:33:40.623 07:29:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:33:40.623 07:29:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:33:40.623 07:29:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:33:40.623 07:29:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:33:40.623 07:29:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:33:40.623 07:29:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:33:40.623 07:29:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:33:40.623 07:29:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:33:40.623 07:29:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:33:40.623 07:29:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:40.623 07:29:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:40.623 bdev_null0 00:33:40.623 07:29:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:40.623 07:29:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:33:40.623 07:29:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:33:40.623 07:29:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:40.623 07:29:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:40.623 07:29:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:33:40.623 07:29:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:40.623 07:29:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:40.623 07:29:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:40.623 07:29:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:40.623 07:29:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:40.623 07:29:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:40.623 [2024-11-20 07:29:44.443236] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:40.623 07:29:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:40.623 07:29:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:33:40.623 07:29:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:33:40.623 07:29:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:33:40.623 07:29:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:33:40.623 07:29:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:40.623 07:29:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:40.623 bdev_null1 00:33:40.623 07:29:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:33:40.623 07:29:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:33:40.623 07:29:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:40.623 07:29:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:40.623 07:29:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:40.623 07:29:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:33:40.623 07:29:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:40.623 07:29:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:40.623 07:29:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:40.623 07:29:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:40.623 07:29:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:40.623 07:29:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:40.623 07:29:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:40.623 07:29:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:33:40.623 07:29:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:33:40.623 07:29:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:33:40.623 07:29:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:33:40.623 07:29:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:40.623 07:29:44 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:33:40.623 bdev_null2 00:33:40.623 07:29:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:40.623 07:29:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:33:40.623 07:29:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:40.623 07:29:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:40.623 07:29:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:40.623 07:29:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:33:40.623 07:29:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:40.623 07:29:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:40.623 07:29:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:40.623 07:29:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:33:40.623 07:29:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:40.623 07:29:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:40.623 07:29:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:40.623 07:29:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:33:40.623 07:29:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:33:40.623 07:29:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:33:40.623 07:29:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 
/dev/fd/61 00:33:40.623 07:29:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:33:40.623 07:29:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:33:40.623 07:29:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1358 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:40.623 07:29:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:33:40.623 07:29:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:40.623 07:29:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:33:40.623 07:29:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:40.623 { 00:33:40.623 "params": { 00:33:40.623 "name": "Nvme$subsystem", 00:33:40.623 "trtype": "$TEST_TRANSPORT", 00:33:40.623 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:40.623 "adrfam": "ipv4", 00:33:40.623 "trsvcid": "$NVMF_PORT", 00:33:40.623 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:40.623 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:40.623 "hdgst": ${hdgst:-false}, 00:33:40.623 "ddgst": ${ddgst:-false} 00:33:40.623 }, 00:33:40.623 "method": "bdev_nvme_attach_controller" 00:33:40.623 } 00:33:40.623 EOF 00:33:40.623 )") 00:33:40.623 07:29:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:33:40.623 07:29:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:40.623 07:29:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:33:40.623 07:29:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local sanitizers 00:33:40.623 07:29:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:40.623 
07:29:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # shift 00:33:40.623 07:29:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # local asan_lib= 00:33:40.623 07:29:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:33:40.623 07:29:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:33:40.623 07:29:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:33:40.623 07:29:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:40.623 07:29:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:33:40.623 07:29:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libasan 00:33:40.623 07:29:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:33:40.623 07:29:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:33:40.623 07:29:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:40.623 07:29:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:40.623 { 00:33:40.623 "params": { 00:33:40.623 "name": "Nvme$subsystem", 00:33:40.623 "trtype": "$TEST_TRANSPORT", 00:33:40.623 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:40.623 "adrfam": "ipv4", 00:33:40.623 "trsvcid": "$NVMF_PORT", 00:33:40.623 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:40.623 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:40.623 "hdgst": ${hdgst:-false}, 00:33:40.623 "ddgst": ${ddgst:-false} 00:33:40.623 }, 00:33:40.623 "method": "bdev_nvme_attach_controller" 00:33:40.623 } 00:33:40.623 EOF 00:33:40.623 )") 00:33:40.623 07:29:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:33:40.623 07:29:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:33:40.623 07:29:44 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:33:40.623 07:29:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:33:40.623 07:29:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:33:40.623 07:29:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:33:40.623 07:29:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:40.623 07:29:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:40.623 { 00:33:40.623 "params": { 00:33:40.623 "name": "Nvme$subsystem", 00:33:40.623 "trtype": "$TEST_TRANSPORT", 00:33:40.623 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:40.623 "adrfam": "ipv4", 00:33:40.623 "trsvcid": "$NVMF_PORT", 00:33:40.623 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:40.623 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:40.623 "hdgst": ${hdgst:-false}, 00:33:40.623 "ddgst": ${ddgst:-false} 00:33:40.623 }, 00:33:40.623 "method": "bdev_nvme_attach_controller" 00:33:40.623 } 00:33:40.623 EOF 00:33:40.623 )") 00:33:40.623 07:29:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:33:40.623 07:29:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:33:40.623 07:29:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:33:40.623 07:29:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:40.623 "params": { 00:33:40.623 "name": "Nvme0", 00:33:40.623 "trtype": "tcp", 00:33:40.623 "traddr": "10.0.0.2", 00:33:40.623 "adrfam": "ipv4", 00:33:40.623 "trsvcid": "4420", 00:33:40.623 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:40.623 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:40.623 "hdgst": false, 00:33:40.623 "ddgst": false 00:33:40.623 }, 00:33:40.623 "method": "bdev_nvme_attach_controller" 00:33:40.623 },{ 00:33:40.623 "params": { 00:33:40.623 "name": "Nvme1", 00:33:40.623 "trtype": "tcp", 00:33:40.623 "traddr": "10.0.0.2", 00:33:40.623 "adrfam": "ipv4", 00:33:40.623 "trsvcid": "4420", 00:33:40.623 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:40.623 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:40.623 "hdgst": false, 00:33:40.623 "ddgst": false 00:33:40.623 }, 00:33:40.623 "method": "bdev_nvme_attach_controller" 00:33:40.623 },{ 00:33:40.623 "params": { 00:33:40.623 "name": "Nvme2", 00:33:40.623 "trtype": "tcp", 00:33:40.623 "traddr": "10.0.0.2", 00:33:40.623 "adrfam": "ipv4", 00:33:40.623 "trsvcid": "4420", 00:33:40.623 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:33:40.623 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:33:40.623 "hdgst": false, 00:33:40.623 "ddgst": false 00:33:40.623 }, 00:33:40.623 "method": "bdev_nvme_attach_controller" 00:33:40.623 }' 00:33:40.623 07:29:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:33:40.623 07:29:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:33:40.623 07:29:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:33:40.623 07:29:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:40.623 07:29:44 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:33:40.623 07:29:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:33:40.623 07:29:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:33:40.623 07:29:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:33:40.623 07:29:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:33:40.623 07:29:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:40.623 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:33:40.623 ... 00:33:40.623 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:33:40.623 ... 00:33:40.623 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:33:40.623 ... 
00:33:40.623 fio-3.35 00:33:40.623 Starting 24 threads 00:33:52.837 00:33:52.837 filename0: (groupid=0, jobs=1): err= 0: pid=1451067: Wed Nov 20 07:29:55 2024 00:33:52.837 read: IOPS=66, BW=267KiB/s (273kB/s)(2696KiB/10105msec) 00:33:52.837 slat (nsec): min=6851, max=42409, avg=9940.61, stdev=4670.91 00:33:52.837 clat (msec): min=72, max=404, avg=239.31, stdev=45.59 00:33:52.837 lat (msec): min=72, max=404, avg=239.32, stdev=45.59 00:33:52.837 clat percentiles (msec): 00:33:52.837 | 1.00th=[ 72], 5.00th=[ 178], 10.00th=[ 194], 20.00th=[ 226], 00:33:52.837 | 30.00th=[ 251], 40.00th=[ 251], 50.00th=[ 253], 60.00th=[ 255], 00:33:52.837 | 70.00th=[ 255], 80.00th=[ 257], 90.00th=[ 259], 95.00th=[ 262], 00:33:52.837 | 99.00th=[ 355], 99.50th=[ 405], 99.90th=[ 405], 99.95th=[ 405], 00:33:52.837 | 99.99th=[ 405] 00:33:52.837 bw ( KiB/s): min= 176, max= 384, per=4.40%, avg=263.20, stdev=52.29, samples=20 00:33:52.837 iops : min= 44, max= 96, avg=65.80, stdev=13.07, samples=20 00:33:52.837 lat (msec) : 100=4.45%, 250=29.38%, 500=66.17% 00:33:52.837 cpu : usr=98.81%, sys=0.82%, ctx=11, majf=0, minf=24 00:33:52.837 IO depths : 1=0.9%, 2=2.1%, 4=9.6%, 8=75.7%, 16=11.7%, 32=0.0%, >=64=0.0% 00:33:52.837 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:52.837 complete : 0=0.0%, 4=89.7%, 8=4.9%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:52.837 issued rwts: total=674,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:52.837 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:52.837 filename0: (groupid=0, jobs=1): err= 0: pid=1451068: Wed Nov 20 07:29:55 2024 00:33:52.837 read: IOPS=66, BW=267KiB/s (273kB/s)(2696KiB/10105msec) 00:33:52.837 slat (nsec): min=6829, max=36535, avg=9474.40, stdev=4314.71 00:33:52.837 clat (msec): min=72, max=403, avg=238.97, stdev=44.52 00:33:52.837 lat (msec): min=72, max=403, avg=238.98, stdev=44.52 00:33:52.837 clat percentiles (msec): 00:33:52.837 | 1.00th=[ 72], 5.00th=[ 184], 10.00th=[ 192], 20.00th=[ 226], 
00:33:52.837 | 30.00th=[ 251], 40.00th=[ 253], 50.00th=[ 255], 60.00th=[ 255], 00:33:52.837 | 70.00th=[ 255], 80.00th=[ 257], 90.00th=[ 259], 95.00th=[ 264], 00:33:52.837 | 99.00th=[ 313], 99.50th=[ 405], 99.90th=[ 405], 99.95th=[ 405], 00:33:52.837 | 99.99th=[ 405] 00:33:52.837 bw ( KiB/s): min= 176, max= 384, per=4.40%, avg=263.20, stdev=37.24, samples=20 00:33:52.837 iops : min= 44, max= 96, avg=65.80, stdev= 9.31, samples=20 00:33:52.837 lat (msec) : 100=4.75%, 250=24.33%, 500=70.92% 00:33:52.837 cpu : usr=98.72%, sys=0.88%, ctx=13, majf=0, minf=22 00:33:52.838 IO depths : 1=0.6%, 2=1.5%, 4=8.8%, 8=77.2%, 16=12.0%, 32=0.0%, >=64=0.0% 00:33:52.838 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:52.838 complete : 0=0.0%, 4=89.4%, 8=5.2%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:52.838 issued rwts: total=674,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:52.838 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:52.838 filename0: (groupid=0, jobs=1): err= 0: pid=1451069: Wed Nov 20 07:29:55 2024 00:33:52.838 read: IOPS=66, BW=267KiB/s (273kB/s)(2688KiB/10080msec) 00:33:52.838 slat (nsec): min=6623, max=22829, avg=8805.69, stdev=2410.09 00:33:52.838 clat (msec): min=83, max=261, avg=239.91, stdev=27.86 00:33:52.838 lat (msec): min=83, max=261, avg=239.92, stdev=27.86 00:33:52.838 clat percentiles (msec): 00:33:52.838 | 1.00th=[ 165], 5.00th=[ 186], 10.00th=[ 190], 20.00th=[ 205], 00:33:52.838 | 30.00th=[ 249], 40.00th=[ 253], 50.00th=[ 253], 60.00th=[ 255], 00:33:52.838 | 70.00th=[ 255], 80.00th=[ 257], 90.00th=[ 257], 95.00th=[ 259], 00:33:52.838 | 99.00th=[ 262], 99.50th=[ 262], 99.90th=[ 262], 99.95th=[ 262], 00:33:52.838 | 99.99th=[ 262] 00:33:52.838 bw ( KiB/s): min= 256, max= 368, per=4.39%, avg=262.40, stdev=25.11, samples=20 00:33:52.838 iops : min= 64, max= 92, avg=65.60, stdev= 6.28, samples=20 00:33:52.838 lat (msec) : 100=0.30%, 250=36.61%, 500=63.10% 00:33:52.838 cpu : usr=98.65%, sys=0.98%, ctx=14, majf=0, 
minf=21 00:33:52.838 IO depths : 1=0.4%, 2=6.7%, 4=25.0%, 8=55.8%, 16=12.1%, 32=0.0%, >=64=0.0% 00:33:52.838 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:52.838 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:52.838 issued rwts: total=672,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:52.838 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:52.838 filename0: (groupid=0, jobs=1): err= 0: pid=1451070: Wed Nov 20 07:29:55 2024 00:33:52.838 read: IOPS=60, BW=244KiB/s (250kB/s)(2456KiB/10068msec) 00:33:52.838 slat (nsec): min=4644, max=29425, avg=8743.02, stdev=2472.24 00:33:52.838 clat (msec): min=186, max=414, avg=261.84, stdev=57.94 00:33:52.838 lat (msec): min=186, max=414, avg=261.85, stdev=57.94 00:33:52.838 clat percentiles (msec): 00:33:52.838 | 1.00th=[ 188], 5.00th=[ 190], 10.00th=[ 199], 20.00th=[ 218], 00:33:52.838 | 30.00th=[ 230], 40.00th=[ 253], 50.00th=[ 255], 60.00th=[ 257], 00:33:52.838 | 70.00th=[ 275], 80.00th=[ 284], 90.00th=[ 363], 95.00th=[ 401], 00:33:52.838 | 99.00th=[ 414], 99.50th=[ 414], 99.90th=[ 414], 99.95th=[ 414], 00:33:52.838 | 99.99th=[ 414] 00:33:52.838 bw ( KiB/s): min= 128, max= 368, per=4.00%, avg=239.20, stdev=53.06, samples=20 00:33:52.838 iops : min= 32, max= 92, avg=59.80, stdev=13.26, samples=20 00:33:52.838 lat (msec) : 250=36.81%, 500=63.19% 00:33:52.838 cpu : usr=98.83%, sys=0.80%, ctx=13, majf=0, minf=18 00:33:52.838 IO depths : 1=0.5%, 2=1.8%, 4=9.1%, 8=75.9%, 16=12.7%, 32=0.0%, >=64=0.0% 00:33:52.838 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:52.838 complete : 0=0.0%, 4=89.3%, 8=6.1%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:52.838 issued rwts: total=614,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:52.838 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:52.838 filename0: (groupid=0, jobs=1): err= 0: pid=1451071: Wed Nov 20 07:29:55 2024 00:33:52.838 read: IOPS=44, BW=178KiB/s 
(182kB/s)(1792KiB/10068msec) 00:33:52.838 slat (nsec): min=4410, max=22480, avg=8792.81, stdev=2521.68 00:33:52.838 clat (msec): min=197, max=532, avg=359.47, stdev=62.62 00:33:52.838 lat (msec): min=197, max=532, avg=359.48, stdev=62.62 00:33:52.838 clat percentiles (msec): 00:33:52.838 | 1.00th=[ 199], 5.00th=[ 245], 10.00th=[ 255], 20.00th=[ 317], 00:33:52.838 | 30.00th=[ 355], 40.00th=[ 368], 50.00th=[ 372], 60.00th=[ 384], 00:33:52.838 | 70.00th=[ 393], 80.00th=[ 397], 90.00th=[ 414], 95.00th=[ 414], 00:33:52.838 | 99.00th=[ 514], 99.50th=[ 514], 99.90th=[ 531], 99.95th=[ 531], 00:33:52.838 | 99.99th=[ 531] 00:33:52.838 bw ( KiB/s): min= 112, max= 272, per=2.88%, avg=172.80, stdev=63.07, samples=20 00:33:52.838 iops : min= 28, max= 68, avg=43.20, stdev=15.77, samples=20 00:33:52.838 lat (msec) : 250=8.48%, 500=88.39%, 750=3.12% 00:33:52.838 cpu : usr=98.77%, sys=0.86%, ctx=14, majf=0, minf=17 00:33:52.838 IO depths : 1=4.5%, 2=10.7%, 4=25.0%, 8=51.8%, 16=8.0%, 32=0.0%, >=64=0.0% 00:33:52.838 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:52.838 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:52.838 issued rwts: total=448,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:52.838 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:52.838 filename0: (groupid=0, jobs=1): err= 0: pid=1451072: Wed Nov 20 07:29:55 2024 00:33:52.838 read: IOPS=66, BW=266KiB/s (272kB/s)(2680KiB/10091msec) 00:33:52.838 slat (nsec): min=6765, max=24268, avg=8632.30, stdev=1900.23 00:33:52.838 clat (msec): min=165, max=270, avg=240.65, stdev=26.77 00:33:52.838 lat (msec): min=165, max=270, avg=240.66, stdev=26.77 00:33:52.838 clat percentiles (msec): 00:33:52.838 | 1.00th=[ 165], 5.00th=[ 188], 10.00th=[ 190], 20.00th=[ 205], 00:33:52.838 | 30.00th=[ 251], 40.00th=[ 253], 50.00th=[ 253], 60.00th=[ 255], 00:33:52.838 | 70.00th=[ 255], 80.00th=[ 257], 90.00th=[ 259], 95.00th=[ 259], 00:33:52.838 | 99.00th=[ 262], 99.50th=[ 
262], 99.90th=[ 271], 99.95th=[ 271], 00:33:52.838 | 99.99th=[ 271] 00:33:52.838 bw ( KiB/s): min= 256, max= 384, per=4.39%, avg=262.40, stdev=28.62, samples=20 00:33:52.838 iops : min= 64, max= 96, avg=65.60, stdev= 7.16, samples=20 00:33:52.838 lat (msec) : 250=34.03%, 500=65.97% 00:33:52.838 cpu : usr=98.67%, sys=0.97%, ctx=13, majf=0, minf=18 00:33:52.838 IO depths : 1=1.0%, 2=7.3%, 4=25.1%, 8=55.2%, 16=11.3%, 32=0.0%, >=64=0.0% 00:33:52.838 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:52.838 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:52.838 issued rwts: total=670,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:52.838 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:52.838 filename0: (groupid=0, jobs=1): err= 0: pid=1451073: Wed Nov 20 07:29:55 2024 00:33:52.838 read: IOPS=66, BW=267KiB/s (273kB/s)(2696KiB/10105msec) 00:33:52.838 slat (nsec): min=6820, max=34180, avg=8853.29, stdev=3429.09 00:33:52.838 clat (msec): min=78, max=406, avg=239.01, stdev=41.09 00:33:52.838 lat (msec): min=78, max=406, avg=239.02, stdev=41.09 00:33:52.838 clat percentiles (msec): 00:33:52.838 | 1.00th=[ 80], 5.00th=[ 171], 10.00th=[ 190], 20.00th=[ 226], 00:33:52.838 | 30.00th=[ 251], 40.00th=[ 253], 50.00th=[ 253], 60.00th=[ 255], 00:33:52.838 | 70.00th=[ 255], 80.00th=[ 257], 90.00th=[ 259], 95.00th=[ 259], 00:33:52.838 | 99.00th=[ 292], 99.50th=[ 405], 99.90th=[ 405], 99.95th=[ 405], 00:33:52.838 | 99.99th=[ 405] 00:33:52.838 bw ( KiB/s): min= 176, max= 384, per=4.40%, avg=263.20, stdev=37.24, samples=20 00:33:52.838 iops : min= 44, max= 96, avg=65.80, stdev= 9.31, samples=20 00:33:52.838 lat (msec) : 100=2.37%, 250=25.82%, 500=71.81% 00:33:52.838 cpu : usr=98.75%, sys=0.89%, ctx=8, majf=0, minf=20 00:33:52.838 IO depths : 1=0.6%, 2=1.3%, 4=8.3%, 8=77.7%, 16=12.0%, 32=0.0%, >=64=0.0% 00:33:52.838 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:52.838 complete : 0=0.0%, 
4=89.3%, 8=5.3%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:52.838 issued rwts: total=674,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:52.838 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:52.838 filename0: (groupid=0, jobs=1): err= 0: pid=1451074: Wed Nov 20 07:29:55 2024 00:33:52.838 read: IOPS=63, BW=255KiB/s (261kB/s)(2568KiB/10067msec) 00:33:52.838 slat (nsec): min=6840, max=25880, avg=8580.73, stdev=2068.81 00:33:52.838 clat (msec): min=186, max=413, avg=250.07, stdev=39.53 00:33:52.838 lat (msec): min=186, max=413, avg=250.08, stdev=39.53 00:33:52.838 clat percentiles (msec): 00:33:52.838 | 1.00th=[ 188], 5.00th=[ 190], 10.00th=[ 199], 20.00th=[ 245], 00:33:52.838 | 30.00th=[ 251], 40.00th=[ 253], 50.00th=[ 255], 60.00th=[ 255], 00:33:52.838 | 70.00th=[ 255], 80.00th=[ 257], 90.00th=[ 259], 95.00th=[ 300], 00:33:52.838 | 99.00th=[ 414], 99.50th=[ 414], 99.90th=[ 414], 99.95th=[ 414], 00:33:52.838 | 99.99th=[ 414] 00:33:52.838 bw ( KiB/s): min= 128, max= 368, per=4.19%, avg=250.40, stdev=47.94, samples=20 00:33:52.838 iops : min= 32, max= 92, avg=62.60, stdev=11.98, samples=20 00:33:52.838 lat (msec) : 250=28.66%, 500=71.34% 00:33:52.838 cpu : usr=98.90%, sys=0.75%, ctx=13, majf=0, minf=34 00:33:52.838 IO depths : 1=0.6%, 2=1.4%, 4=8.4%, 8=77.6%, 16=12.0%, 32=0.0%, >=64=0.0% 00:33:52.838 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:52.838 complete : 0=0.0%, 4=89.3%, 8=5.3%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:52.838 issued rwts: total=642,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:52.838 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:52.838 filename1: (groupid=0, jobs=1): err= 0: pid=1451075: Wed Nov 20 07:29:55 2024 00:33:52.838 read: IOPS=66, BW=266KiB/s (272kB/s)(2688KiB/10105msec) 00:33:52.838 slat (nsec): min=6570, max=44216, avg=10058.54, stdev=5405.99 00:33:52.838 clat (msec): min=71, max=407, avg=239.93, stdev=53.98 00:33:52.838 lat (msec): min=71, max=407, avg=239.94, stdev=53.98 
00:33:52.838 clat percentiles (msec): 00:33:52.838 | 1.00th=[ 72], 5.00th=[ 157], 10.00th=[ 190], 20.00th=[ 203], 00:33:52.838 | 30.00th=[ 222], 40.00th=[ 251], 50.00th=[ 253], 60.00th=[ 255], 00:33:52.838 | 70.00th=[ 257], 80.00th=[ 257], 90.00th=[ 300], 95.00th=[ 313], 00:33:52.838 | 99.00th=[ 405], 99.50th=[ 409], 99.90th=[ 409], 99.95th=[ 409], 00:33:52.838 | 99.99th=[ 409] 00:33:52.838 bw ( KiB/s): min= 176, max= 384, per=4.39%, avg=262.40, stdev=47.97, samples=20 00:33:52.838 iops : min= 44, max= 96, avg=65.60, stdev=11.99, samples=20 00:33:52.838 lat (msec) : 100=4.17%, 250=33.04%, 500=62.80% 00:33:52.838 cpu : usr=99.05%, sys=0.59%, ctx=26, majf=0, minf=32 00:33:52.838 IO depths : 1=0.1%, 2=0.6%, 4=7.1%, 8=79.5%, 16=12.6%, 32=0.0%, >=64=0.0% 00:33:52.838 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:52.838 complete : 0=0.0%, 4=88.9%, 8=5.9%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:52.838 issued rwts: total=672,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:52.838 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:52.838 filename1: (groupid=0, jobs=1): err= 0: pid=1451076: Wed Nov 20 07:29:55 2024 00:33:52.838 read: IOPS=63, BW=255KiB/s (261kB/s)(2568KiB/10079msec) 00:33:52.838 slat (nsec): min=6547, max=24787, avg=9050.21, stdev=3136.70 00:33:52.839 clat (msec): min=186, max=405, avg=250.46, stdev=26.68 00:33:52.839 lat (msec): min=186, max=405, avg=250.47, stdev=26.68 00:33:52.839 clat percentiles (msec): 00:33:52.839 | 1.00th=[ 188], 5.00th=[ 201], 10.00th=[ 201], 20.00th=[ 249], 00:33:52.839 | 30.00th=[ 251], 40.00th=[ 253], 50.00th=[ 255], 60.00th=[ 255], 00:33:52.839 | 70.00th=[ 257], 80.00th=[ 257], 90.00th=[ 262], 95.00th=[ 296], 00:33:52.839 | 99.00th=[ 355], 99.50th=[ 405], 99.90th=[ 405], 99.95th=[ 405], 00:33:52.839 | 99.99th=[ 405] 00:33:52.839 bw ( KiB/s): min= 128, max= 384, per=4.19%, avg=250.40, stdev=45.63, samples=20 00:33:52.839 iops : min= 32, max= 96, avg=62.60, stdev=11.41, samples=20 
00:33:52.839 lat (msec) : 250=29.60%, 500=70.40% 00:33:52.839 cpu : usr=98.89%, sys=0.76%, ctx=14, majf=0, minf=26 00:33:52.839 IO depths : 1=0.5%, 2=1.4%, 4=8.9%, 8=77.1%, 16=12.1%, 32=0.0%, >=64=0.0% 00:33:52.839 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:52.839 complete : 0=0.0%, 4=89.5%, 8=5.1%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:52.839 issued rwts: total=642,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:52.839 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:52.839 filename1: (groupid=0, jobs=1): err= 0: pid=1451078: Wed Nov 20 07:29:55 2024 00:33:52.839 read: IOPS=59, BW=237KiB/s (243kB/s)(2392KiB/10072msec) 00:33:52.839 slat (nsec): min=4596, max=20631, avg=8651.28, stdev=2254.59 00:33:52.839 clat (msec): min=168, max=414, avg=268.74, stdev=55.27 00:33:52.839 lat (msec): min=168, max=414, avg=268.75, stdev=55.27 00:33:52.839 clat percentiles (msec): 00:33:52.839 | 1.00th=[ 169], 5.00th=[ 201], 10.00th=[ 207], 20.00th=[ 236], 00:33:52.839 | 30.00th=[ 245], 40.00th=[ 253], 50.00th=[ 255], 60.00th=[ 257], 00:33:52.839 | 70.00th=[ 268], 80.00th=[ 275], 90.00th=[ 372], 95.00th=[ 397], 00:33:52.839 | 99.00th=[ 414], 99.50th=[ 414], 99.90th=[ 414], 99.95th=[ 414], 00:33:52.839 | 99.99th=[ 414] 00:33:52.839 bw ( KiB/s): min= 128, max= 256, per=3.88%, avg=232.80, stdev=32.62, samples=20 00:33:52.839 iops : min= 32, max= 64, avg=58.20, stdev= 8.15, samples=20 00:33:52.839 lat (msec) : 250=35.45%, 500=64.55% 00:33:52.839 cpu : usr=98.67%, sys=0.97%, ctx=17, majf=0, minf=20 00:33:52.839 IO depths : 1=0.7%, 2=2.2%, 4=9.7%, 8=74.9%, 16=12.5%, 32=0.0%, >=64=0.0% 00:33:52.839 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:52.839 complete : 0=0.0%, 4=89.5%, 8=5.9%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:52.839 issued rwts: total=598,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:52.839 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:52.839 filename1: (groupid=0, jobs=1): 
err= 0: pid=1451079: Wed Nov 20 07:29:55 2024 00:33:52.839 read: IOPS=66, BW=265KiB/s (272kB/s)(2688KiB/10127msec) 00:33:52.839 slat (nsec): min=6735, max=18962, avg=8712.19, stdev=2002.13 00:33:52.839 clat (msec): min=134, max=304, avg=240.20, stdev=27.42 00:33:52.839 lat (msec): min=134, max=304, avg=240.21, stdev=27.42 00:33:52.839 clat percentiles (msec): 00:33:52.839 | 1.00th=[ 165], 5.00th=[ 186], 10.00th=[ 190], 20.00th=[ 205], 00:33:52.839 | 30.00th=[ 251], 40.00th=[ 253], 50.00th=[ 253], 60.00th=[ 255], 00:33:52.839 | 70.00th=[ 255], 80.00th=[ 257], 90.00th=[ 257], 95.00th=[ 259], 00:33:52.839 | 99.00th=[ 262], 99.50th=[ 262], 99.90th=[ 305], 99.95th=[ 305], 00:33:52.839 | 99.99th=[ 305] 00:33:52.839 bw ( KiB/s): min= 256, max= 368, per=4.39%, avg=262.40, stdev=25.11, samples=20 00:33:52.839 iops : min= 64, max= 92, avg=65.60, stdev= 6.28, samples=20 00:33:52.839 lat (msec) : 250=34.52%, 500=65.48% 00:33:52.839 cpu : usr=98.63%, sys=1.03%, ctx=9, majf=0, minf=21 00:33:52.839 IO depths : 1=0.1%, 2=6.4%, 4=25.0%, 8=56.1%, 16=12.4%, 32=0.0%, >=64=0.0% 00:33:52.839 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:52.839 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:52.839 issued rwts: total=672,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:52.839 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:52.839 filename1: (groupid=0, jobs=1): err= 0: pid=1451080: Wed Nov 20 07:29:55 2024 00:33:52.839 read: IOPS=67, BW=269KiB/s (275kB/s)(2720KiB/10127msec) 00:33:52.839 slat (nsec): min=6513, max=45966, avg=10413.42, stdev=5259.12 00:33:52.839 clat (msec): min=22, max=404, avg=237.29, stdev=67.01 00:33:52.839 lat (msec): min=22, max=404, avg=237.30, stdev=67.01 00:33:52.839 clat percentiles (msec): 00:33:52.839 | 1.00th=[ 23], 5.00th=[ 77], 10.00th=[ 186], 20.00th=[ 201], 00:33:52.839 | 30.00th=[ 218], 40.00th=[ 251], 50.00th=[ 253], 60.00th=[ 255], 00:33:52.839 | 70.00th=[ 257], 80.00th=[ 257], 
90.00th=[ 296], 95.00th=[ 355], 00:33:52.839 | 99.00th=[ 405], 99.50th=[ 405], 99.90th=[ 405], 99.95th=[ 405], 00:33:52.839 | 99.99th=[ 405] 00:33:52.839 bw ( KiB/s): min= 176, max= 512, per=4.44%, avg=265.60, stdev=70.30, samples=20 00:33:52.839 iops : min= 44, max= 128, avg=66.40, stdev=17.58, samples=20 00:33:52.839 lat (msec) : 50=2.35%, 100=4.71%, 250=31.47%, 500=61.47% 00:33:52.839 cpu : usr=98.62%, sys=1.01%, ctx=28, majf=0, minf=23 00:33:52.839 IO depths : 1=0.6%, 2=1.5%, 4=8.2%, 8=77.4%, 16=12.4%, 32=0.0%, >=64=0.0% 00:33:52.839 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:52.839 complete : 0=0.0%, 4=89.2%, 8=5.9%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:52.839 issued rwts: total=680,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:52.839 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:52.839 filename1: (groupid=0, jobs=1): err= 0: pid=1451081: Wed Nov 20 07:29:55 2024 00:33:52.839 read: IOPS=65, BW=263KiB/s (269kB/s)(2656KiB/10105msec) 00:33:52.839 slat (nsec): min=6829, max=39103, avg=10075.62, stdev=5097.18 00:33:52.839 clat (msec): min=72, max=404, avg=242.53, stdev=58.03 00:33:52.839 lat (msec): min=72, max=404, avg=242.54, stdev=58.02 00:33:52.839 clat percentiles (msec): 00:33:52.839 | 1.00th=[ 72], 5.00th=[ 161], 10.00th=[ 190], 20.00th=[ 207], 00:33:52.839 | 30.00th=[ 222], 40.00th=[ 251], 50.00th=[ 255], 60.00th=[ 255], 00:33:52.839 | 70.00th=[ 257], 80.00th=[ 259], 90.00th=[ 296], 95.00th=[ 359], 00:33:52.839 | 99.00th=[ 401], 99.50th=[ 405], 99.90th=[ 405], 99.95th=[ 405], 00:33:52.839 | 99.99th=[ 405] 00:33:52.839 bw ( KiB/s): min= 176, max= 384, per=4.34%, avg=259.20, stdev=45.14, samples=20 00:33:52.839 iops : min= 44, max= 96, avg=64.80, stdev=11.28, samples=20 00:33:52.839 lat (msec) : 100=4.82%, 250=32.53%, 500=62.65% 00:33:52.839 cpu : usr=99.06%, sys=0.57%, ctx=14, majf=0, minf=30 00:33:52.839 IO depths : 1=0.5%, 2=1.1%, 4=7.4%, 8=78.6%, 16=12.5%, 32=0.0%, >=64=0.0% 00:33:52.839 submit : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:52.839 complete : 0=0.0%, 4=88.9%, 8=6.1%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:52.839 issued rwts: total=664,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:52.839 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:52.839 filename1: (groupid=0, jobs=1): err= 0: pid=1451082: Wed Nov 20 07:29:55 2024 00:33:52.839 read: IOPS=44, BW=177KiB/s (181kB/s)(1784KiB/10068msec) 00:33:52.839 slat (nsec): min=5686, max=25879, avg=8655.25, stdev=2374.94 00:33:52.839 clat (msec): min=163, max=532, avg=360.98, stdev=74.04 00:33:52.839 lat (msec): min=163, max=532, avg=360.99, stdev=74.04 00:33:52.839 clat percentiles (msec): 00:33:52.839 | 1.00th=[ 163], 5.00th=[ 188], 10.00th=[ 259], 20.00th=[ 334], 00:33:52.839 | 30.00th=[ 359], 40.00th=[ 368], 50.00th=[ 368], 60.00th=[ 384], 00:33:52.839 | 70.00th=[ 393], 80.00th=[ 397], 90.00th=[ 414], 95.00th=[ 502], 00:33:52.839 | 99.00th=[ 531], 99.50th=[ 531], 99.90th=[ 531], 99.95th=[ 531], 00:33:52.839 | 99.99th=[ 531] 00:33:52.839 bw ( KiB/s): min= 128, max= 368, per=3.03%, avg=181.05, stdev=72.75, samples=19 00:33:52.839 iops : min= 32, max= 92, avg=45.26, stdev=18.19, samples=19 00:33:52.839 lat (msec) : 250=7.17%, 500=87.44%, 750=5.38% 00:33:52.839 cpu : usr=98.52%, sys=1.12%, ctx=13, majf=0, minf=18 00:33:52.839 IO depths : 1=4.3%, 2=10.5%, 4=25.1%, 8=52.0%, 16=8.1%, 32=0.0%, >=64=0.0% 00:33:52.839 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:52.839 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:52.839 issued rwts: total=446,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:52.839 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:52.839 filename1: (groupid=0, jobs=1): err= 0: pid=1451083: Wed Nov 20 07:29:55 2024 00:33:52.839 read: IOPS=64, BW=256KiB/s (263kB/s)(2584KiB/10078msec) 00:33:52.839 slat (nsec): min=6125, max=28910, avg=8379.47, stdev=1914.95 00:33:52.839 clat (msec): 
min=186, max=425, avg=249.06, stdev=40.69 00:33:52.839 lat (msec): min=186, max=425, avg=249.07, stdev=40.69 00:33:52.839 clat percentiles (msec): 00:33:52.839 | 1.00th=[ 186], 5.00th=[ 190], 10.00th=[ 199], 20.00th=[ 226], 00:33:52.839 | 30.00th=[ 251], 40.00th=[ 253], 50.00th=[ 253], 60.00th=[ 255], 00:33:52.839 | 70.00th=[ 257], 80.00th=[ 257], 90.00th=[ 259], 95.00th=[ 284], 00:33:52.839 | 99.00th=[ 426], 99.50th=[ 426], 99.90th=[ 426], 99.95th=[ 426], 00:33:52.839 | 99.99th=[ 426] 00:33:52.839 bw ( KiB/s): min= 128, max= 384, per=4.22%, avg=252.00, stdev=46.10, samples=20 00:33:52.839 iops : min= 32, max= 96, avg=63.00, stdev=11.53, samples=20 00:33:52.839 lat (msec) : 250=29.88%, 500=70.12% 00:33:52.839 cpu : usr=98.79%, sys=0.86%, ctx=14, majf=0, minf=20 00:33:52.839 IO depths : 1=2.8%, 2=6.3%, 4=16.9%, 8=64.2%, 16=9.8%, 32=0.0%, >=64=0.0% 00:33:52.839 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:52.839 complete : 0=0.0%, 4=91.7%, 8=2.7%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:52.839 issued rwts: total=646,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:52.839 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:52.839 filename2: (groupid=0, jobs=1): err= 0: pid=1451084: Wed Nov 20 07:29:55 2024 00:33:52.839 read: IOPS=66, BW=267KiB/s (273kB/s)(2688KiB/10086msec) 00:33:52.839 slat (nsec): min=6638, max=21230, avg=8755.81, stdev=2165.95 00:33:52.839 clat (msec): min=164, max=260, avg=240.06, stdev=27.00 00:33:52.839 lat (msec): min=164, max=260, avg=240.07, stdev=27.00 00:33:52.839 clat percentiles (msec): 00:33:52.839 | 1.00th=[ 165], 5.00th=[ 186], 10.00th=[ 190], 20.00th=[ 205], 00:33:52.839 | 30.00th=[ 251], 40.00th=[ 253], 50.00th=[ 253], 60.00th=[ 255], 00:33:52.839 | 70.00th=[ 255], 80.00th=[ 257], 90.00th=[ 257], 95.00th=[ 259], 00:33:52.839 | 99.00th=[ 262], 99.50th=[ 262], 99.90th=[ 262], 99.95th=[ 262], 00:33:52.839 | 99.99th=[ 262] 00:33:52.839 bw ( KiB/s): min= 256, max= 384, per=4.39%, avg=262.40, 
stdev=28.62, samples=20 00:33:52.839 iops : min= 64, max= 96, avg=65.60, stdev= 7.16, samples=20 00:33:52.839 lat (msec) : 250=35.71%, 500=64.29% 00:33:52.839 cpu : usr=98.81%, sys=0.84%, ctx=14, majf=0, minf=18 00:33:52.839 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:33:52.840 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:52.840 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:52.840 issued rwts: total=672,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:52.840 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:52.840 filename2: (groupid=0, jobs=1): err= 0: pid=1451085: Wed Nov 20 07:29:55 2024 00:33:52.840 read: IOPS=62, BW=249KiB/s (255kB/s)(2504KiB/10068msec) 00:33:52.840 slat (nsec): min=4557, max=22971, avg=8698.79, stdev=2295.94 00:33:52.840 clat (msec): min=174, max=415, avg=256.76, stdev=38.41 00:33:52.840 lat (msec): min=174, max=415, avg=256.77, stdev=38.41 00:33:52.840 clat percentiles (msec): 00:33:52.840 | 1.00th=[ 190], 5.00th=[ 194], 10.00th=[ 232], 20.00th=[ 251], 00:33:52.840 | 30.00th=[ 251], 40.00th=[ 253], 50.00th=[ 255], 60.00th=[ 255], 00:33:52.840 | 70.00th=[ 257], 80.00th=[ 257], 90.00th=[ 262], 95.00th=[ 326], 00:33:52.840 | 99.00th=[ 414], 99.50th=[ 414], 99.90th=[ 418], 99.95th=[ 418], 00:33:52.840 | 99.99th=[ 418] 00:33:52.840 bw ( KiB/s): min= 128, max= 336, per=4.07%, avg=244.00, stdev=42.13, samples=20 00:33:52.840 iops : min= 32, max= 84, avg=61.00, stdev=10.53, samples=20 00:33:52.840 lat (msec) : 250=23.00%, 500=77.00% 00:33:52.840 cpu : usr=98.74%, sys=0.91%, ctx=15, majf=0, minf=21 00:33:52.840 IO depths : 1=0.5%, 2=1.3%, 4=8.5%, 8=77.6%, 16=12.1%, 32=0.0%, >=64=0.0% 00:33:52.840 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:52.840 complete : 0=0.0%, 4=89.3%, 8=5.3%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:52.840 issued rwts: total=626,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:52.840 latency : 
target=0, window=0, percentile=100.00%, depth=16 00:33:52.840 filename2: (groupid=0, jobs=1): err= 0: pid=1451086: Wed Nov 20 07:29:55 2024 00:33:52.840 read: IOPS=66, BW=265KiB/s (272kB/s)(2680KiB/10100msec) 00:33:52.840 slat (nsec): min=6846, max=25490, avg=8669.40, stdev=2159.19 00:33:52.840 clat (msec): min=165, max=331, avg=240.89, stdev=27.29 00:33:52.840 lat (msec): min=165, max=331, avg=240.90, stdev=27.29 00:33:52.840 clat percentiles (msec): 00:33:52.840 | 1.00th=[ 165], 5.00th=[ 188], 10.00th=[ 190], 20.00th=[ 205], 00:33:52.840 | 30.00th=[ 251], 40.00th=[ 253], 50.00th=[ 253], 60.00th=[ 255], 00:33:52.840 | 70.00th=[ 255], 80.00th=[ 257], 90.00th=[ 259], 95.00th=[ 259], 00:33:52.840 | 99.00th=[ 268], 99.50th=[ 268], 99.90th=[ 330], 99.95th=[ 330], 00:33:52.840 | 99.99th=[ 330] 00:33:52.840 bw ( KiB/s): min= 240, max= 384, per=4.37%, avg=261.60, stdev=29.49, samples=20 00:33:52.840 iops : min= 60, max= 96, avg=65.40, stdev= 7.37, samples=20 00:33:52.840 lat (msec) : 250=34.63%, 500=65.37% 00:33:52.840 cpu : usr=98.64%, sys=1.00%, ctx=9, majf=0, minf=19 00:33:52.840 IO depths : 1=0.1%, 2=6.4%, 4=25.1%, 8=56.1%, 16=12.2%, 32=0.0%, >=64=0.0% 00:33:52.840 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:52.840 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:52.840 issued rwts: total=670,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:52.840 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:52.840 filename2: (groupid=0, jobs=1): err= 0: pid=1451087: Wed Nov 20 07:29:55 2024 00:33:52.840 read: IOPS=60, BW=244KiB/s (250kB/s)(2456KiB/10069msec) 00:33:52.840 slat (nsec): min=4675, max=31234, avg=8722.53, stdev=2481.11 00:33:52.840 clat (msec): min=163, max=413, avg=261.69, stdev=50.74 00:33:52.840 lat (msec): min=163, max=413, avg=261.70, stdev=50.74 00:33:52.840 clat percentiles (msec): 00:33:52.840 | 1.00th=[ 180], 5.00th=[ 188], 10.00th=[ 199], 20.00th=[ 236], 00:33:52.840 | 30.00th=[ 247], 
40.00th=[ 253], 50.00th=[ 255], 60.00th=[ 257], 00:33:52.840 | 70.00th=[ 266], 80.00th=[ 275], 90.00th=[ 363], 95.00th=[ 393], 00:33:52.840 | 99.00th=[ 401], 99.50th=[ 414], 99.90th=[ 414], 99.95th=[ 414], 00:33:52.840 | 99.99th=[ 414] 00:33:52.840 bw ( KiB/s): min= 128, max= 336, per=4.00%, avg=239.20, stdev=42.32, samples=20 00:33:52.840 iops : min= 32, max= 84, avg=59.80, stdev=10.58, samples=20 00:33:52.840 lat (msec) : 250=36.48%, 500=63.52% 00:33:52.840 cpu : usr=98.79%, sys=0.85%, ctx=13, majf=0, minf=21 00:33:52.840 IO depths : 1=0.3%, 2=0.8%, 4=6.7%, 8=79.3%, 16=12.9%, 32=0.0%, >=64=0.0% 00:33:52.840 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:52.840 complete : 0=0.0%, 4=88.6%, 8=6.8%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:52.840 issued rwts: total=614,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:52.840 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:52.840 filename2: (groupid=0, jobs=1): err= 0: pid=1451088: Wed Nov 20 07:29:55 2024 00:33:52.840 read: IOPS=42, BW=172KiB/s (176kB/s)(1728KiB/10066msec) 00:33:52.840 slat (nsec): min=6839, max=44289, avg=9523.86, stdev=5293.86 00:33:52.840 clat (msec): min=186, max=602, avg=371.98, stdev=66.07 00:33:52.840 lat (msec): min=186, max=602, avg=371.99, stdev=66.07 00:33:52.840 clat percentiles (msec): 00:33:52.840 | 1.00th=[ 234], 5.00th=[ 257], 10.00th=[ 259], 20.00th=[ 334], 00:33:52.840 | 30.00th=[ 363], 40.00th=[ 368], 50.00th=[ 372], 60.00th=[ 393], 00:33:52.840 | 70.00th=[ 397], 80.00th=[ 401], 90.00th=[ 418], 95.00th=[ 514], 00:33:52.840 | 99.00th=[ 531], 99.50th=[ 531], 99.90th=[ 600], 99.95th=[ 600], 00:33:52.840 | 99.99th=[ 600] 00:33:52.840 bw ( KiB/s): min= 128, max= 256, per=2.93%, avg=175.16, stdev=56.81, samples=19 00:33:52.840 iops : min= 32, max= 64, avg=43.79, stdev=14.20, samples=19 00:33:52.840 lat (msec) : 250=2.78%, 500=88.89%, 750=8.33% 00:33:52.840 cpu : usr=98.80%, sys=0.85%, ctx=15, majf=0, minf=20 00:33:52.840 IO depths : 1=3.2%, 
2=9.5%, 4=25.0%, 8=53.0%, 16=9.3%, 32=0.0%, >=64=0.0% 00:33:52.840 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:52.840 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:52.840 issued rwts: total=432,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:52.840 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:52.840 filename2: (groupid=0, jobs=1): err= 0: pid=1451089: Wed Nov 20 07:29:55 2024 00:33:52.840 read: IOPS=67, BW=272KiB/s (278kB/s)(2752KiB/10124msec) 00:33:52.840 slat (nsec): min=6861, max=35347, avg=10762.10, stdev=5614.16 00:33:52.840 clat (msec): min=20, max=406, avg=234.44, stdev=62.30 00:33:52.840 lat (msec): min=20, max=406, avg=234.45, stdev=62.30 00:33:52.840 clat percentiles (msec): 00:33:52.840 | 1.00th=[ 21], 5.00th=[ 77], 10.00th=[ 186], 20.00th=[ 201], 00:33:52.840 | 30.00th=[ 226], 40.00th=[ 251], 50.00th=[ 253], 60.00th=[ 255], 00:33:52.840 | 70.00th=[ 257], 80.00th=[ 257], 90.00th=[ 259], 95.00th=[ 305], 00:33:52.840 | 99.00th=[ 401], 99.50th=[ 405], 99.90th=[ 405], 99.95th=[ 405], 00:33:52.840 | 99.99th=[ 405] 00:33:52.840 bw ( KiB/s): min= 176, max= 512, per=4.49%, avg=268.80, stdev=67.80, samples=20 00:33:52.840 iops : min= 44, max= 128, avg=67.20, stdev=16.95, samples=20 00:33:52.840 lat (msec) : 50=2.33%, 100=4.65%, 250=29.94%, 500=63.08% 00:33:52.840 cpu : usr=98.73%, sys=0.89%, ctx=14, majf=0, minf=29 00:33:52.840 IO depths : 1=0.7%, 2=1.6%, 4=8.4%, 8=77.2%, 16=12.1%, 32=0.0%, >=64=0.0% 00:33:52.840 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:52.840 complete : 0=0.0%, 4=89.2%, 8=5.6%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:52.840 issued rwts: total=688,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:52.840 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:52.840 filename2: (groupid=0, jobs=1): err= 0: pid=1451091: Wed Nov 20 07:29:55 2024 00:33:52.840 read: IOPS=68, BW=274KiB/s (281kB/s)(2776KiB/10130msec) 00:33:52.840 slat 
(nsec): min=6548, max=41796, avg=17026.24, stdev=4746.42 00:33:52.840 clat (msec): min=26, max=356, avg=233.18, stdev=53.84 00:33:52.840 lat (msec): min=26, max=356, avg=233.20, stdev=53.84 00:33:52.840 clat percentiles (msec): 00:33:52.840 | 1.00th=[ 27], 5.00th=[ 77], 10.00th=[ 194], 20.00th=[ 207], 00:33:52.840 | 30.00th=[ 249], 40.00th=[ 251], 50.00th=[ 253], 60.00th=[ 255], 00:33:52.840 | 70.00th=[ 255], 80.00th=[ 257], 90.00th=[ 259], 95.00th=[ 259], 00:33:52.840 | 99.00th=[ 317], 99.50th=[ 355], 99.90th=[ 359], 99.95th=[ 359], 00:33:52.840 | 99.99th=[ 359] 00:33:52.840 bw ( KiB/s): min= 208, max= 512, per=4.54%, avg=271.20, stdev=63.68, samples=20 00:33:52.840 iops : min= 52, max= 128, avg=67.80, stdev=15.92, samples=20 00:33:52.840 lat (msec) : 50=2.31%, 100=4.32%, 250=28.24%, 500=65.13% 00:33:52.840 cpu : usr=98.46%, sys=1.15%, ctx=17, majf=0, minf=23 00:33:52.840 IO depths : 1=0.6%, 2=2.0%, 4=10.5%, 8=74.9%, 16=12.0%, 32=0.0%, >=64=0.0% 00:33:52.840 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:52.840 complete : 0=0.0%, 4=90.0%, 8=4.5%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:52.840 issued rwts: total=694,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:52.840 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:52.840 filename2: (groupid=0, jobs=1): err= 0: pid=1451092: Wed Nov 20 07:29:55 2024 00:33:52.840 read: IOPS=63, BW=255KiB/s (261kB/s)(2568KiB/10069msec) 00:33:52.840 slat (nsec): min=4478, max=30766, avg=9281.36, stdev=3922.99 00:33:52.840 clat (msec): min=184, max=413, avg=250.23, stdev=32.04 00:33:52.840 lat (msec): min=184, max=413, avg=250.24, stdev=32.04 00:33:52.840 clat percentiles (msec): 00:33:52.840 | 1.00th=[ 186], 5.00th=[ 188], 10.00th=[ 201], 20.00th=[ 249], 00:33:52.840 | 30.00th=[ 251], 40.00th=[ 253], 50.00th=[ 255], 60.00th=[ 255], 00:33:52.840 | 70.00th=[ 257], 80.00th=[ 257], 90.00th=[ 262], 95.00th=[ 313], 00:33:52.840 | 99.00th=[ 368], 99.50th=[ 414], 99.90th=[ 414], 99.95th=[ 414], 
00:33:52.840 | 99.99th=[ 414] 00:33:52.840 bw ( KiB/s): min= 128, max= 368, per=4.19%, avg=250.40, stdev=54.51, samples=20 00:33:52.840 iops : min= 32, max= 92, avg=62.60, stdev=13.63, samples=20 00:33:52.840 lat (msec) : 250=27.73%, 500=72.27% 00:33:52.840 cpu : usr=98.65%, sys=0.99%, ctx=17, majf=0, minf=17 00:33:52.840 IO depths : 1=0.6%, 2=1.4%, 4=8.4%, 8=77.6%, 16=12.0%, 32=0.0%, >=64=0.0% 00:33:52.840 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:52.840 complete : 0=0.0%, 4=89.3%, 8=5.3%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:52.840 issued rwts: total=642,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:52.840 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:52.840 00:33:52.840 Run status group 0 (all jobs): 00:33:52.840 READ: bw=5973KiB/s (6116kB/s), 172KiB/s-274KiB/s (176kB/s-281kB/s), io=59.1MiB (62.0MB), run=10066-10130msec 00:33:52.840 07:29:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:33:52.840 07:29:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:33:52.840 07:29:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:33:52.840 07:29:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:33:52.841 07:29:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:33:52.841 07:29:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:52.841 07:29:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:52.841 07:29:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:52.841 07:29:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:52.841 07:29:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:33:52.841 07:29:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:33:52.841 07:29:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:52.841 07:29:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:52.841 07:29:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:33:52.841 07:29:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:33:52.841 07:29:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:33:52.841 07:29:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:52.841 07:29:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:52.841 07:29:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:52.841 07:29:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:52.841 07:29:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:33:52.841 07:29:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:52.841 07:29:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:52.841 07:29:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:52.841 07:29:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:33:52.841 07:29:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:33:52.841 07:29:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:33:52.841 07:29:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:33:52.841 07:29:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:52.841 07:29:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:52.841 07:29:56 nvmf_dif.fio_dif_rand_params 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:52.841 07:29:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:33:52.841 07:29:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:52.841 07:29:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:52.841 07:29:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:52.841 07:29:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:33:52.841 07:29:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:33:52.841 07:29:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:33:52.841 07:29:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:33:52.841 07:29:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:33:52.841 07:29:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:33:52.841 07:29:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:33:52.841 07:29:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:33:52.841 07:29:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:33:52.841 07:29:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:33:52.841 07:29:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:33:52.841 07:29:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:33:52.841 07:29:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:52.841 07:29:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:52.841 bdev_null0 00:33:52.841 07:29:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:52.841 07:29:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # 
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:33:52.841 07:29:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:52.841 07:29:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:52.841 07:29:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:52.841 07:29:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:33:52.841 07:29:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:52.841 07:29:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:52.841 07:29:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:52.841 07:29:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:52.841 07:29:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:52.841 07:29:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:52.841 [2024-11-20 07:29:56.188784] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:52.841 07:29:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:52.841 07:29:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:33:52.841 07:29:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:33:52.841 07:29:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:33:52.841 07:29:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:33:52.841 07:29:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:52.841 07:29:56 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:52.841 bdev_null1 00:33:52.841 07:29:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:52.841 07:29:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:33:52.841 07:29:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:52.841 07:29:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:52.841 07:29:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:52.841 07:29:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:33:52.841 07:29:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:52.841 07:29:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:52.841 07:29:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:52.841 07:29:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:52.841 07:29:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:52.841 07:29:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:52.841 07:29:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:52.841 07:29:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:33:52.841 07:29:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:33:52.841 07:29:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:33:52.841 07:29:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:33:52.841 07:29:56 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:52.841 07:29:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:33:52.841 07:29:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:52.841 07:29:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1358 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:52.841 07:29:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:33:52.841 07:29:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:52.841 { 00:33:52.841 "params": { 00:33:52.841 "name": "Nvme$subsystem", 00:33:52.841 "trtype": "$TEST_TRANSPORT", 00:33:52.841 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:52.841 "adrfam": "ipv4", 00:33:52.841 "trsvcid": "$NVMF_PORT", 00:33:52.841 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:52.841 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:52.841 "hdgst": ${hdgst:-false}, 00:33:52.841 "ddgst": ${ddgst:-false} 00:33:52.841 }, 00:33:52.841 "method": "bdev_nvme_attach_controller" 00:33:52.841 } 00:33:52.841 EOF 00:33:52.841 )") 00:33:52.841 07:29:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:33:52.841 07:29:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:33:52.841 07:29:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:52.841 07:29:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:33:52.841 07:29:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local sanitizers 00:33:52.841 07:29:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # local 
plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:52.841 07:29:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # shift 00:33:52.841 07:29:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # local asan_lib= 00:33:52.841 07:29:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:33:52.841 07:29:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:33:52.841 07:29:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:33:52.841 07:29:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:33:52.841 07:29:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:52.841 07:29:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libasan 00:33:52.841 07:29:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:33:52.841 07:29:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:33:52.841 07:29:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:52.841 07:29:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:52.841 { 00:33:52.842 "params": { 00:33:52.842 "name": "Nvme$subsystem", 00:33:52.842 "trtype": "$TEST_TRANSPORT", 00:33:52.842 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:52.842 "adrfam": "ipv4", 00:33:52.842 "trsvcid": "$NVMF_PORT", 00:33:52.842 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:52.842 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:52.842 "hdgst": ${hdgst:-false}, 00:33:52.842 "ddgst": ${ddgst:-false} 00:33:52.842 }, 00:33:52.842 "method": "bdev_nvme_attach_controller" 00:33:52.842 } 00:33:52.842 EOF 00:33:52.842 )") 00:33:52.842 07:29:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:33:52.842 
07:29:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:33:52.842 07:29:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:33:52.842 07:29:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:33:52.842 07:29:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:33:52.842 07:29:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:52.842 "params": { 00:33:52.842 "name": "Nvme0", 00:33:52.842 "trtype": "tcp", 00:33:52.842 "traddr": "10.0.0.2", 00:33:52.842 "adrfam": "ipv4", 00:33:52.842 "trsvcid": "4420", 00:33:52.842 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:52.842 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:52.842 "hdgst": false, 00:33:52.842 "ddgst": false 00:33:52.842 }, 00:33:52.842 "method": "bdev_nvme_attach_controller" 00:33:52.842 },{ 00:33:52.842 "params": { 00:33:52.842 "name": "Nvme1", 00:33:52.842 "trtype": "tcp", 00:33:52.842 "traddr": "10.0.0.2", 00:33:52.842 "adrfam": "ipv4", 00:33:52.842 "trsvcid": "4420", 00:33:52.842 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:52.842 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:52.842 "hdgst": false, 00:33:52.842 "ddgst": false 00:33:52.842 }, 00:33:52.842 "method": "bdev_nvme_attach_controller" 00:33:52.842 }' 00:33:52.842 07:29:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:33:52.842 07:29:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:33:52.842 07:29:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:33:52.842 07:29:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:52.842 07:29:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:33:52.842 07:29:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print 
$3}' 00:33:52.842 07:29:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:33:52.842 07:29:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:33:52.842 07:29:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:33:52.842 07:29:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:52.842 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:33:52.842 ... 00:33:52.842 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:33:52.842 ... 00:33:52.842 fio-3.35 00:33:52.842 Starting 4 threads 00:33:58.118 00:33:58.118 filename0: (groupid=0, jobs=1): err= 0: pid=1453050: Wed Nov 20 07:30:02 2024 00:33:58.118 read: IOPS=2979, BW=23.3MiB/s (24.4MB/s)(116MiB/5002msec) 00:33:58.118 slat (nsec): min=6224, max=42298, avg=10073.80, stdev=3747.96 00:33:58.118 clat (usec): min=623, max=5712, avg=2650.44, stdev=356.56 00:33:58.118 lat (usec): min=633, max=5719, avg=2660.52, stdev=356.36 00:33:58.118 clat percentiles (usec): 00:33:58.118 | 1.00th=[ 1713], 5.00th=[ 2147], 10.00th=[ 2278], 20.00th=[ 2409], 00:33:58.118 | 30.00th=[ 2507], 40.00th=[ 2540], 50.00th=[ 2638], 60.00th=[ 2704], 00:33:58.118 | 70.00th=[ 2802], 80.00th=[ 2933], 90.00th=[ 3064], 95.00th=[ 3163], 00:33:58.118 | 99.00th=[ 3589], 99.50th=[ 3818], 99.90th=[ 4555], 99.95th=[ 4752], 00:33:58.118 | 99.99th=[ 5735] 00:33:58.118 bw ( KiB/s): min=22272, max=25856, per=28.26%, avg=23776.00, stdev=1235.73, samples=9 00:33:58.118 iops : min= 2784, max= 3232, avg=2972.00, stdev=154.47, samples=9 00:33:58.118 lat (usec) : 750=0.01%, 1000=0.27% 00:33:58.118 lat (msec) : 2=2.08%, 4=97.29%, 10=0.36% 00:33:58.118 cpu : 
usr=96.08%, sys=3.56%, ctx=9, majf=0, minf=9 00:33:58.118 IO depths : 1=0.2%, 2=16.5%, 4=55.9%, 8=27.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:58.118 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:58.118 complete : 0=0.0%, 4=92.0%, 8=8.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:58.118 issued rwts: total=14902,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:58.118 latency : target=0, window=0, percentile=100.00%, depth=8 00:33:58.118 filename0: (groupid=0, jobs=1): err= 0: pid=1453051: Wed Nov 20 07:30:02 2024 00:33:58.118 read: IOPS=2464, BW=19.3MiB/s (20.2MB/s)(96.3MiB/5001msec) 00:33:58.118 slat (nsec): min=6214, max=39567, avg=9658.45, stdev=4066.51 00:33:58.118 clat (usec): min=1017, max=5644, avg=3217.34, stdev=603.72 00:33:58.118 lat (usec): min=1029, max=5656, avg=3227.00, stdev=603.13 00:33:58.118 clat percentiles (usec): 00:33:58.118 | 1.00th=[ 2147], 5.00th=[ 2474], 10.00th=[ 2606], 20.00th=[ 2802], 00:33:58.118 | 30.00th=[ 2933], 40.00th=[ 2999], 50.00th=[ 3064], 60.00th=[ 3163], 00:33:58.118 | 70.00th=[ 3294], 80.00th=[ 3589], 90.00th=[ 4178], 95.00th=[ 4490], 00:33:58.118 | 99.00th=[ 5145], 99.50th=[ 5342], 99.90th=[ 5473], 99.95th=[ 5538], 00:33:58.118 | 99.99th=[ 5604] 00:33:58.118 bw ( KiB/s): min=18496, max=21536, per=23.65%, avg=19895.11, stdev=1060.75, samples=9 00:33:58.118 iops : min= 2312, max= 2692, avg=2486.89, stdev=132.59, samples=9 00:33:58.118 lat (msec) : 2=0.63%, 4=87.73%, 10=11.64% 00:33:58.118 cpu : usr=96.56%, sys=3.12%, ctx=7, majf=0, minf=9 00:33:58.118 IO depths : 1=0.2%, 2=5.7%, 4=66.3%, 8=27.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:58.118 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:58.118 complete : 0=0.0%, 4=92.6%, 8=7.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:58.118 issued rwts: total=12324,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:58.118 latency : target=0, window=0, percentile=100.00%, depth=8 00:33:58.118 filename1: (groupid=0, jobs=1): err= 0: pid=1453052: Wed 
Nov 20 07:30:02 2024 00:33:58.118 read: IOPS=2494, BW=19.5MiB/s (20.4MB/s)(97.5MiB/5001msec) 00:33:58.119 slat (nsec): min=6198, max=40862, avg=9964.46, stdev=4265.31 00:33:58.119 clat (usec): min=604, max=5850, avg=3175.94, stdev=563.93 00:33:58.119 lat (usec): min=616, max=5856, avg=3185.91, stdev=563.32 00:33:58.119 clat percentiles (usec): 00:33:58.119 | 1.00th=[ 2212], 5.00th=[ 2474], 10.00th=[ 2573], 20.00th=[ 2737], 00:33:58.119 | 30.00th=[ 2900], 40.00th=[ 2999], 50.00th=[ 3064], 60.00th=[ 3163], 00:33:58.119 | 70.00th=[ 3294], 80.00th=[ 3556], 90.00th=[ 3949], 95.00th=[ 4359], 00:33:58.119 | 99.00th=[ 4883], 99.50th=[ 5145], 99.90th=[ 5407], 99.95th=[ 5669], 00:33:58.119 | 99.99th=[ 5866] 00:33:58.119 bw ( KiB/s): min=18592, max=20144, per=23.12%, avg=19448.00, stdev=561.74, samples=9 00:33:58.119 iops : min= 2324, max= 2518, avg=2431.00, stdev=70.22, samples=9 00:33:58.119 lat (usec) : 750=0.02%, 1000=0.02% 00:33:58.119 lat (msec) : 2=0.43%, 4=89.92%, 10=9.61% 00:33:58.119 cpu : usr=96.74%, sys=2.92%, ctx=7, majf=0, minf=9 00:33:58.119 IO depths : 1=0.1%, 2=5.1%, 4=67.8%, 8=27.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:58.119 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:58.119 complete : 0=0.0%, 4=91.6%, 8=8.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:58.119 issued rwts: total=12476,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:58.119 latency : target=0, window=0, percentile=100.00%, depth=8 00:33:58.119 filename1: (groupid=0, jobs=1): err= 0: pid=1453053: Wed Nov 20 07:30:02 2024 00:33:58.119 read: IOPS=2577, BW=20.1MiB/s (21.1MB/s)(101MiB/5002msec) 00:33:58.119 slat (nsec): min=6091, max=48932, avg=10051.54, stdev=4088.03 00:33:58.119 clat (usec): min=1274, max=5724, avg=3073.43, stdev=551.70 00:33:58.119 lat (usec): min=1285, max=5735, avg=3083.49, stdev=551.13 00:33:58.119 clat percentiles (usec): 00:33:58.119 | 1.00th=[ 2073], 5.00th=[ 2442], 10.00th=[ 2540], 20.00th=[ 2671], 00:33:58.119 | 30.00th=[ 2802], 40.00th=[ 2900], 
50.00th=[ 2999], 60.00th=[ 3064], 00:33:58.119 | 70.00th=[ 3130], 80.00th=[ 3359], 90.00th=[ 3785], 95.00th=[ 4359], 00:33:58.119 | 99.00th=[ 4948], 99.50th=[ 5145], 99.90th=[ 5407], 99.95th=[ 5669], 00:33:58.119 | 99.99th=[ 5735] 00:33:58.119 bw ( KiB/s): min=19824, max=22064, per=24.89%, avg=20936.89, stdev=737.18, samples=9 00:33:58.119 iops : min= 2478, max= 2758, avg=2617.11, stdev=92.15, samples=9 00:33:58.119 lat (msec) : 2=0.78%, 4=91.81%, 10=7.41% 00:33:58.119 cpu : usr=93.30%, sys=4.60%, ctx=275, majf=0, minf=9 00:33:58.119 IO depths : 1=0.3%, 2=4.6%, 4=66.5%, 8=28.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:58.119 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:58.119 complete : 0=0.0%, 4=92.8%, 8=7.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:58.119 issued rwts: total=12894,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:58.119 latency : target=0, window=0, percentile=100.00%, depth=8 00:33:58.119 00:33:58.119 Run status group 0 (all jobs): 00:33:58.119 READ: bw=82.1MiB/s (86.1MB/s), 19.3MiB/s-23.3MiB/s (20.2MB/s-24.4MB/s), io=411MiB (431MB), run=5001-5002msec 00:33:58.119 07:30:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:33:58.119 07:30:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:33:58.119 07:30:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:33:58.119 07:30:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:33:58.119 07:30:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:33:58.119 07:30:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:58.119 07:30:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:58.119 07:30:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:58.119 07:30:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:33:58.119 07:30:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:33:58.119 07:30:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:58.119 07:30:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:58.119 07:30:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:58.119 07:30:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:33:58.119 07:30:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:33:58.119 07:30:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:33:58.119 07:30:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:58.119 07:30:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:58.119 07:30:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:58.119 07:30:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:58.119 07:30:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:33:58.119 07:30:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:58.119 07:30:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:58.119 07:30:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:58.119 00:33:58.119 real 0m24.501s 00:33:58.119 user 4m54.761s 00:33:58.119 sys 0m4.567s 00:33:58.119 07:30:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1128 -- # xtrace_disable 00:33:58.119 07:30:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:58.119 ************************************ 00:33:58.119 END TEST fio_dif_rand_params 00:33:58.119 ************************************ 00:33:58.119 
07:30:02 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:33:58.119 07:30:02 nvmf_dif -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:33:58.119 07:30:02 nvmf_dif -- common/autotest_common.sh@1109 -- # xtrace_disable 00:33:58.119 07:30:02 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:58.379 ************************************ 00:33:58.379 START TEST fio_dif_digest 00:33:58.379 ************************************ 00:33:58.379 07:30:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1127 -- # fio_dif_digest 00:33:58.379 07:30:02 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:33:58.379 07:30:02 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:33:58.379 07:30:02 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:33:58.379 07:30:02 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:33:58.379 07:30:02 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:33:58.379 07:30:02 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:33:58.379 07:30:02 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:33:58.379 07:30:02 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:33:58.379 07:30:02 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:33:58.379 07:30:02 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:33:58.379 07:30:02 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:33:58.379 07:30:02 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:33:58.379 07:30:02 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:33:58.379 07:30:02 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:33:58.379 07:30:02 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:33:58.379 07:30:02 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 
--md-size 16 --dif-type 3 00:33:58.379 07:30:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:58.379 07:30:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:33:58.379 bdev_null0 00:33:58.379 07:30:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:58.379 07:30:02 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:33:58.379 07:30:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:58.379 07:30:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:33:58.379 07:30:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:58.379 07:30:02 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:33:58.379 07:30:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:58.379 07:30:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:33:58.379 07:30:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:58.379 07:30:02 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:58.379 07:30:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:58.379 07:30:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:33:58.379 [2024-11-20 07:30:02.722995] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:58.379 07:30:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:58.379 07:30:02 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:33:58.379 07:30:02 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:33:58.379 
07:30:02 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:33:58.379 07:30:02 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 00:33:58.379 07:30:02 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:58.379 07:30:02 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:33:58.379 07:30:02 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:58.379 07:30:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1358 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:58.379 07:30:02 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:33:58.379 07:30:02 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:58.379 { 00:33:58.379 "params": { 00:33:58.379 "name": "Nvme$subsystem", 00:33:58.379 "trtype": "$TEST_TRANSPORT", 00:33:58.379 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:58.379 "adrfam": "ipv4", 00:33:58.379 "trsvcid": "$NVMF_PORT", 00:33:58.379 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:58.379 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:58.379 "hdgst": ${hdgst:-false}, 00:33:58.379 "ddgst": ${ddgst:-false} 00:33:58.379 }, 00:33:58.379 "method": "bdev_nvme_attach_controller" 00:33:58.379 } 00:33:58.379 EOF 00:33:58.379 )") 00:33:58.380 07:30:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:33:58.380 07:30:02 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:33:58.380 07:30:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:58.380 07:30:02 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:33:58.380 07:30:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local sanitizers 00:33:58.380 07:30:02 
nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:58.380 07:30:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # shift 00:33:58.380 07:30:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # local asan_lib= 00:33:58.380 07:30:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:33:58.380 07:30:02 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:33:58.380 07:30:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:58.380 07:30:02 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:33:58.380 07:30:02 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:33:58.380 07:30:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # grep libasan 00:33:58.380 07:30:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:33:58.380 07:30:02 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 
00:33:58.380 07:30:02 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:33:58.380 07:30:02 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:58.380 "params": { 00:33:58.380 "name": "Nvme0", 00:33:58.380 "trtype": "tcp", 00:33:58.380 "traddr": "10.0.0.2", 00:33:58.380 "adrfam": "ipv4", 00:33:58.380 "trsvcid": "4420", 00:33:58.380 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:58.380 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:58.380 "hdgst": true, 00:33:58.380 "ddgst": true 00:33:58.380 }, 00:33:58.380 "method": "bdev_nvme_attach_controller" 00:33:58.380 }' 00:33:58.380 07:30:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # asan_lib= 00:33:58.380 07:30:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:33:58.380 07:30:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:33:58.380 07:30:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:58.380 07:30:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:33:58.380 07:30:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:33:58.380 07:30:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # asan_lib= 00:33:58.380 07:30:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:33:58.380 07:30:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:33:58.380 07:30:02 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:58.650 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:33:58.650 ... 
00:33:58.650 fio-3.35 00:33:58.650 Starting 3 threads 00:34:10.921 00:34:10.921 filename0: (groupid=0, jobs=1): err= 0: pid=1454237: Wed Nov 20 07:30:13 2024 00:34:10.921 read: IOPS=287, BW=35.9MiB/s (37.7MB/s)(361MiB/10045msec) 00:34:10.921 slat (nsec): min=6478, max=32580, avg=11264.13, stdev=1688.59 00:34:10.921 clat (usec): min=7879, max=50887, avg=10410.35, stdev=1277.11 00:34:10.921 lat (usec): min=7891, max=50899, avg=10421.62, stdev=1277.10 00:34:10.921 clat percentiles (usec): 00:34:10.921 | 1.00th=[ 8586], 5.00th=[ 9110], 10.00th=[ 9503], 20.00th=[ 9765], 00:34:10.921 | 30.00th=[10028], 40.00th=[10159], 50.00th=[10421], 60.00th=[10552], 00:34:10.921 | 70.00th=[10814], 80.00th=[10945], 90.00th=[11338], 95.00th=[11600], 00:34:10.921 | 99.00th=[12256], 99.50th=[12518], 99.90th=[13435], 99.95th=[49021], 00:34:10.921 | 99.99th=[51119] 00:34:10.921 bw ( KiB/s): min=35840, max=37888, per=35.15%, avg=36928.00, stdev=615.27, samples=20 00:34:10.921 iops : min= 280, max= 296, avg=288.50, stdev= 4.81, samples=20 00:34:10.921 lat (msec) : 10=30.14%, 20=69.80%, 50=0.03%, 100=0.03% 00:34:10.921 cpu : usr=94.84%, sys=4.85%, ctx=14, majf=0, minf=87 00:34:10.921 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:10.921 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:10.921 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:10.921 issued rwts: total=2887,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:10.921 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:10.921 filename0: (groupid=0, jobs=1): err= 0: pid=1454238: Wed Nov 20 07:30:13 2024 00:34:10.921 read: IOPS=261, BW=32.7MiB/s (34.3MB/s)(328MiB/10044msec) 00:34:10.921 slat (nsec): min=6512, max=25340, avg=11585.97, stdev=1416.42 00:34:10.921 clat (usec): min=8420, max=50113, avg=11440.93, stdev=1253.79 00:34:10.921 lat (usec): min=8432, max=50121, avg=11452.51, stdev=1253.79 00:34:10.921 clat percentiles (usec): 00:34:10.921 
| 1.00th=[ 9634], 5.00th=[10159], 10.00th=[10552], 20.00th=[10814], 00:34:10.921 | 30.00th=[11076], 40.00th=[11207], 50.00th=[11338], 60.00th=[11600], 00:34:10.921 | 70.00th=[11731], 80.00th=[11994], 90.00th=[12387], 95.00th=[12649], 00:34:10.921 | 99.00th=[13304], 99.50th=[13566], 99.90th=[15139], 99.95th=[45351], 00:34:10.921 | 99.99th=[50070] 00:34:10.921 bw ( KiB/s): min=32768, max=34560, per=31.98%, avg=33600.00, stdev=461.51, samples=20 00:34:10.921 iops : min= 256, max= 270, avg=262.50, stdev= 3.61, samples=20 00:34:10.921 lat (msec) : 10=2.82%, 20=97.11%, 50=0.04%, 100=0.04% 00:34:10.921 cpu : usr=94.43%, sys=5.27%, ctx=17, majf=0, minf=84 00:34:10.921 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:10.921 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:10.921 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:10.921 issued rwts: total=2627,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:10.921 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:10.921 filename0: (groupid=0, jobs=1): err= 0: pid=1454239: Wed Nov 20 07:30:13 2024 00:34:10.921 read: IOPS=271, BW=34.0MiB/s (35.6MB/s)(341MiB/10045msec) 00:34:10.921 slat (nsec): min=6502, max=26370, avg=11395.99, stdev=1727.46 00:34:10.921 clat (usec): min=7351, max=47011, avg=11010.15, stdev=1202.39 00:34:10.921 lat (usec): min=7363, max=47022, avg=11021.55, stdev=1202.39 00:34:10.921 clat percentiles (usec): 00:34:10.921 | 1.00th=[ 9241], 5.00th=[ 9765], 10.00th=[10028], 20.00th=[10421], 00:34:10.921 | 30.00th=[10552], 40.00th=[10814], 50.00th=[10945], 60.00th=[11207], 00:34:10.921 | 70.00th=[11338], 80.00th=[11600], 90.00th=[11863], 95.00th=[12256], 00:34:10.921 | 99.00th=[12780], 99.50th=[13042], 99.90th=[13960], 99.95th=[44827], 00:34:10.921 | 99.99th=[46924] 00:34:10.921 bw ( KiB/s): min=33792, max=35584, per=33.24%, avg=34918.40, stdev=450.35, samples=20 00:34:10.921 iops : min= 264, max= 278, avg=272.80, 
stdev= 3.52, samples=20 00:34:10.921 lat (msec) : 10=8.50%, 20=91.43%, 50=0.07% 00:34:10.921 cpu : usr=94.90%, sys=4.80%, ctx=16, majf=0, minf=56 00:34:10.921 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:10.921 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:10.921 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:10.921 issued rwts: total=2730,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:10.921 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:10.921 00:34:10.921 Run status group 0 (all jobs): 00:34:10.921 READ: bw=103MiB/s (108MB/s), 32.7MiB/s-35.9MiB/s (34.3MB/s-37.7MB/s), io=1031MiB (1081MB), run=10044-10045msec 00:34:10.921 07:30:13 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:34:10.921 07:30:13 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:34:10.921 07:30:13 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:34:10.921 07:30:13 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:10.921 07:30:13 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:34:10.921 07:30:13 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:10.921 07:30:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:10.921 07:30:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:10.921 07:30:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:10.921 07:30:13 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:10.921 07:30:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:10.921 07:30:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:10.921 07:30:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:10.921 
00:34:10.921 real 0m11.088s 00:34:10.921 user 0m35.427s 00:34:10.921 sys 0m1.798s 00:34:10.921 07:30:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1128 -- # xtrace_disable 00:34:10.921 07:30:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:10.921 ************************************ 00:34:10.921 END TEST fio_dif_digest 00:34:10.921 ************************************ 00:34:10.921 07:30:13 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:34:10.921 07:30:13 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:34:10.921 07:30:13 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:10.921 07:30:13 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:34:10.921 07:30:13 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:10.921 07:30:13 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:34:10.921 07:30:13 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:10.921 07:30:13 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:10.921 rmmod nvme_tcp 00:34:10.921 rmmod nvme_fabrics 00:34:10.921 rmmod nvme_keyring 00:34:10.921 07:30:13 nvmf_dif -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:10.921 07:30:13 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:34:10.921 07:30:13 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:34:10.921 07:30:13 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 1445673 ']' 00:34:10.921 07:30:13 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 1445673 00:34:10.921 07:30:13 nvmf_dif -- common/autotest_common.sh@952 -- # '[' -z 1445673 ']' 00:34:10.921 07:30:13 nvmf_dif -- common/autotest_common.sh@956 -- # kill -0 1445673 00:34:10.921 07:30:13 nvmf_dif -- common/autotest_common.sh@957 -- # uname 00:34:10.921 07:30:13 nvmf_dif -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:34:10.921 07:30:13 nvmf_dif -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1445673 00:34:10.921 07:30:13 nvmf_dif -- common/autotest_common.sh@958 -- # process_name=reactor_0 
00:34:10.921 07:30:13 nvmf_dif -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:34:10.921 07:30:13 nvmf_dif -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1445673' 00:34:10.921 killing process with pid 1445673 00:34:10.921 07:30:13 nvmf_dif -- common/autotest_common.sh@971 -- # kill 1445673 00:34:10.921 07:30:13 nvmf_dif -- common/autotest_common.sh@976 -- # wait 1445673 00:34:10.921 07:30:14 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:34:10.921 07:30:14 nvmf_dif -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:34:12.301 Waiting for block devices as requested 00:34:12.301 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:34:12.560 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:34:12.560 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:34:12.560 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:34:12.819 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:34:12.819 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:34:12.819 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:34:13.079 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:34:13.079 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:34:13.079 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:34:13.338 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:34:13.338 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:34:13.338 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:34:13.338 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:34:13.596 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:34:13.596 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:34:13.596 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:34:13.856 07:30:18 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:13.856 07:30:18 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:13.856 07:30:18 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:34:13.856 07:30:18 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:34:13.856 07:30:18 nvmf_dif -- 
nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:13.856 07:30:18 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:34:13.856 07:30:18 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:13.856 07:30:18 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:13.856 07:30:18 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:13.856 07:30:18 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:13.856 07:30:18 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:15.761 07:30:20 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:15.761 00:34:15.761 real 1m14.310s 00:34:15.761 user 7m13.058s 00:34:15.761 sys 0m19.772s 00:34:15.761 07:30:20 nvmf_dif -- common/autotest_common.sh@1128 -- # xtrace_disable 00:34:15.761 07:30:20 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:15.761 ************************************ 00:34:15.761 END TEST nvmf_dif 00:34:15.761 ************************************ 00:34:15.761 07:30:20 -- spdk/autotest.sh@286 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:34:15.761 07:30:20 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:34:15.761 07:30:20 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:34:15.761 07:30:20 -- common/autotest_common.sh@10 -- # set +x 00:34:16.021 ************************************ 00:34:16.021 START TEST nvmf_abort_qd_sizes 00:34:16.021 ************************************ 00:34:16.021 07:30:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:34:16.021 * Looking for test storage... 
00:34:16.021 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:16.021 07:30:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:34:16.021 07:30:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@1691 -- # lcov --version 00:34:16.021 07:30:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:34:16.021 07:30:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:34:16.021 07:30:20 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:16.021 07:30:20 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:16.021 07:30:20 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:16.021 07:30:20 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:34:16.021 07:30:20 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:34:16.021 07:30:20 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:34:16.021 07:30:20 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:34:16.021 07:30:20 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:34:16.021 07:30:20 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:34:16.021 07:30:20 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:34:16.021 07:30:20 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:16.021 07:30:20 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:34:16.021 07:30:20 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:34:16.021 07:30:20 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:16.021 07:30:20 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:16.021 07:30:20 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:34:16.021 07:30:20 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:34:16.021 07:30:20 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:16.021 07:30:20 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:34:16.021 07:30:20 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:34:16.021 07:30:20 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:34:16.021 07:30:20 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:34:16.021 07:30:20 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:16.021 07:30:20 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:34:16.021 07:30:20 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:34:16.021 07:30:20 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:16.021 07:30:20 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:16.021 07:30:20 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:34:16.021 07:30:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:16.021 07:30:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:34:16.021 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:16.021 --rc genhtml_branch_coverage=1 00:34:16.021 --rc genhtml_function_coverage=1 00:34:16.021 --rc genhtml_legend=1 00:34:16.021 --rc geninfo_all_blocks=1 00:34:16.021 --rc geninfo_unexecuted_blocks=1 00:34:16.021 00:34:16.021 ' 00:34:16.021 07:30:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:34:16.021 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:16.021 --rc genhtml_branch_coverage=1 00:34:16.021 --rc genhtml_function_coverage=1 00:34:16.022 --rc genhtml_legend=1 00:34:16.022 --rc 
geninfo_all_blocks=1 00:34:16.022 --rc geninfo_unexecuted_blocks=1 00:34:16.022 00:34:16.022 ' 00:34:16.022 07:30:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:34:16.022 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:16.022 --rc genhtml_branch_coverage=1 00:34:16.022 --rc genhtml_function_coverage=1 00:34:16.022 --rc genhtml_legend=1 00:34:16.022 --rc geninfo_all_blocks=1 00:34:16.022 --rc geninfo_unexecuted_blocks=1 00:34:16.022 00:34:16.022 ' 00:34:16.022 07:30:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:34:16.022 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:16.022 --rc genhtml_branch_coverage=1 00:34:16.022 --rc genhtml_function_coverage=1 00:34:16.022 --rc genhtml_legend=1 00:34:16.022 --rc geninfo_all_blocks=1 00:34:16.022 --rc geninfo_unexecuted_blocks=1 00:34:16.022 00:34:16.022 ' 00:34:16.022 07:30:20 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:16.022 07:30:20 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:34:16.022 07:30:20 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:16.022 07:30:20 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:16.022 07:30:20 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:16.022 07:30:20 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:16.022 07:30:20 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:16.022 07:30:20 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:16.022 07:30:20 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:16.022 07:30:20 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:16.022 07:30:20 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:16.022 07:30:20 nvmf_abort_qd_sizes 
-- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:16.022 07:30:20 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:34:16.022 07:30:20 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:34:16.022 07:30:20 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:16.022 07:30:20 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:16.022 07:30:20 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:16.022 07:30:20 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:16.022 07:30:20 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:16.022 07:30:20 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:34:16.022 07:30:20 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:16.022 07:30:20 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:16.022 07:30:20 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:16.022 07:30:20 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:16.022 07:30:20 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:16.022 07:30:20 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:16.022 07:30:20 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:34:16.022 07:30:20 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:16.022 07:30:20 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:34:16.022 07:30:20 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:16.022 07:30:20 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:16.022 07:30:20 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:16.022 07:30:20 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:16.022 07:30:20 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:16.022 07:30:20 
nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:16.022 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:16.022 07:30:20 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:16.022 07:30:20 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:16.022 07:30:20 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:16.022 07:30:20 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:34:16.022 07:30:20 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:16.022 07:30:20 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:16.022 07:30:20 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:16.022 07:30:20 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:16.022 07:30:20 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:16.022 07:30:20 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:16.022 07:30:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:16.022 07:30:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:16.022 07:30:20 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:16.022 07:30:20 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:16.022 07:30:20 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- # xtrace_disable 00:34:16.022 07:30:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:34:22.593 07:30:26 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:22.593 07:30:26 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:34:22.593 07:30:26 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:22.593 07:30:26 nvmf_abort_qd_sizes -- 
nvmf/common.sh@316 -- # pci_net_devs=() 00:34:22.593 07:30:26 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:22.593 07:30:26 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:22.593 07:30:26 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:22.593 07:30:26 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 00:34:22.593 07:30:26 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:22.593 07:30:26 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 00:34:22.593 07:30:26 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # local -ga e810 00:34:22.593 07:30:26 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:34:22.593 07:30:26 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:34:22.593 07:30:26 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:34:22.593 07:30:26 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:34:22.593 07:30:26 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:22.593 07:30:26 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:22.593 07:30:26 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:22.593 07:30:26 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:22.593 07:30:26 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:22.593 07:30:26 nvmf_abort_qd_sizes -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:22.594 07:30:26 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:22.594 07:30:26 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:22.594 07:30:26 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:22.594 07:30:26 
nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:22.594 07:30:26 nvmf_abort_qd_sizes -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:22.594 07:30:26 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:22.594 07:30:26 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:22.594 07:30:26 nvmf_abort_qd_sizes -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:22.594 07:30:26 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:22.594 07:30:26 nvmf_abort_qd_sizes -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:22.594 07:30:26 nvmf_abort_qd_sizes -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:22.594 07:30:26 nvmf_abort_qd_sizes -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:22.594 07:30:26 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:22.594 07:30:26 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:34:22.594 Found 0000:86:00.0 (0x8086 - 0x159b) 00:34:22.594 07:30:26 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:22.594 07:30:26 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:22.594 07:30:26 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:22.594 07:30:26 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:22.594 07:30:26 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:22.594 07:30:26 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:22.594 07:30:26 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:34:22.594 Found 0000:86:00.1 (0x8086 - 0x159b) 00:34:22.594 07:30:26 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:22.594 07:30:26 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice 
== unbound ]] 00:34:22.594 07:30:26 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:22.594 07:30:26 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:22.594 07:30:26 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:22.594 07:30:26 nvmf_abort_qd_sizes -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:22.594 07:30:26 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:22.594 07:30:26 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:22.594 07:30:26 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:22.594 07:30:26 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:22.594 07:30:26 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:22.594 07:30:26 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:22.594 07:30:26 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:22.594 07:30:26 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:22.594 07:30:26 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:22.594 07:30:26 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:34:22.594 Found net devices under 0000:86:00.0: cvl_0_0 00:34:22.594 07:30:26 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:22.594 07:30:26 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:22.594 07:30:26 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:22.594 07:30:26 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:22.594 07:30:26 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:22.594 07:30:26 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up 
== up ]] 00:34:22.594 07:30:26 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:22.594 07:30:26 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:22.594 07:30:26 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:34:22.594 Found net devices under 0000:86:00.1: cvl_0_1 00:34:22.594 07:30:26 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:22.594 07:30:26 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:22.594 07:30:26 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # is_hw=yes 00:34:22.594 07:30:26 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:22.594 07:30:26 nvmf_abort_qd_sizes -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:22.594 07:30:26 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:22.594 07:30:26 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:22.594 07:30:26 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:22.594 07:30:26 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:22.594 07:30:26 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:22.594 07:30:26 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:22.594 07:30:26 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:22.594 07:30:26 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:22.594 07:30:26 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:22.594 07:30:26 nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:22.594 07:30:26 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:22.594 07:30:26 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:34:22.594 07:30:26 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:22.594 07:30:26 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:22.594 07:30:26 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:22.594 07:30:26 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:22.594 07:30:26 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:22.594 07:30:26 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:22.594 07:30:26 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:22.594 07:30:26 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:22.594 07:30:26 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:22.594 07:30:26 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:22.594 07:30:26 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:22.594 07:30:26 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:22.594 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:22.594 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.487 ms 00:34:22.594 00:34:22.594 --- 10.0.0.2 ping statistics --- 00:34:22.594 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:22.594 rtt min/avg/max/mdev = 0.487/0.487/0.487/0.000 ms 00:34:22.594 07:30:26 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:22.594 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:22.594 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.151 ms 00:34:22.594 00:34:22.594 --- 10.0.0.1 ping statistics --- 00:34:22.594 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:22.594 rtt min/avg/max/mdev = 0.151/0.151/0.151/0.000 ms 00:34:22.594 07:30:26 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:22.594 07:30:26 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # return 0 00:34:22.594 07:30:26 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:34:22.594 07:30:26 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:34:25.131 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:34:25.131 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:34:25.131 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:34:25.131 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:34:25.131 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:34:25.131 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:34:25.131 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:34:25.131 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:34:25.131 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:34:25.131 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:34:25.131 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:34:25.131 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:34:25.131 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:34:25.131 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:34:25.131 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:34:25.131 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:34:25.700 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:34:25.700 07:30:30 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:25.700 07:30:30 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:25.700 07:30:30 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:25.700 07:30:30 
nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:25.700 07:30:30 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:25.700 07:30:30 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:25.959 07:30:30 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:34:25.959 07:30:30 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:25.959 07:30:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:25.959 07:30:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:34:25.959 07:30:30 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=1462556 00:34:25.959 07:30:30 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:34:25.959 07:30:30 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 1462556 00:34:25.959 07:30:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@833 -- # '[' -z 1462556 ']' 00:34:25.959 07:30:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:25.959 07:30:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # local max_retries=100 00:34:25.959 07:30:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:25.959 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:25.959 07:30:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # xtrace_disable 00:34:25.959 07:30:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:34:25.959 [2024-11-20 07:30:30.326917] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 
00:34:25.959 [2024-11-20 07:30:30.326968] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:25.959 [2024-11-20 07:30:30.406173] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:25.959 [2024-11-20 07:30:30.450302] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:25.959 [2024-11-20 07:30:30.450338] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:25.959 [2024-11-20 07:30:30.450346] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:25.959 [2024-11-20 07:30:30.450352] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:25.959 [2024-11-20 07:30:30.450358] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
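The app above is launched with `-m 0xf` and the log then reports reactors starting on cores 0-3. As a hedged aside (not SPDK code — just the arithmetic implied by the log), a hex core mask expands to a core list like this:

```shell
# Expand a hex core mask into the list of enabled cores.
# 0xf -> cores 0 1 2 3, matching the four reactors reported above.
mask=0xf
cores=()
for ((i = 0; i < 64; i++)); do
  (( (mask >> i) & 1 )) && cores+=("$i")
done
echo "cores: ${cores[*]}"
```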
00:34:25.959 [2024-11-20 07:30:30.451958] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:25.959 [2024-11-20 07:30:30.451985] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:34:25.959 [2024-11-20 07:30:30.452014] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:25.959 [2024-11-20 07:30:30.452015] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:34:26.344 07:30:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:34:26.344 07:30:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@866 -- # return 0 00:34:26.344 07:30:30 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:26.344 07:30:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:26.344 07:30:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:34:26.344 07:30:30 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:26.344 07:30:30 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:34:26.344 07:30:30 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:34:26.344 07:30:30 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:34:26.344 07:30:30 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:34:26.344 07:30:30 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:34:26.344 07:30:30 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:5e:00.0 ]] 00:34:26.344 07:30:30 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:34:26.344 07:30:30 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:34:26.344 07:30:30 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:5e:00.0 ]] 
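The `nvme_in_userspace` trace above scans BDFs with NVMe class code 0x010802 and keeps those still bound to the kernel `nvme` driver. A minimal sketch of that selection, using stand-in data rather than a live sysfs scan (the second BDF and the `bound_driver` map are illustrative assumptions, not from the log):

```shell
# Stand-in for scripts/common.sh pci_bus_cache: class code -> space-separated BDFs.
declare -A pci_bus_cache=( ["0x010802"]="0000:5e:00.0 0000:d8:00.0" )
# Stand-in for the /sys/bus/pci/drivers/nvme/<bdf> existence check.
declare -A bound_driver=( ["0000:5e:00.0"]="nvme" ["0000:d8:00.0"]="vfio-pci" )
nvmes=(${pci_bus_cache["0x010802"]})
bdfs=()
for bdf in "${nvmes[@]}"; do
  # Keep only devices still claimed by the kernel nvme driver.
  [[ "${bound_driver[$bdf]}" == nvme ]] && bdfs+=("$bdf")
done
printf '%s\n' "${bdfs[@]}"
```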
00:34:26.344 07:30:30 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:34:26.344 07:30:30 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:34:26.344 07:30:30 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:34:26.344 07:30:30 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:34:26.344 07:30:30 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:5e:00.0 00:34:26.344 07:30:30 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:34:26.344 07:30:30 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:5e:00.0 00:34:26.344 07:30:30 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:34:26.344 07:30:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:34:26.344 07:30:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@1109 -- # xtrace_disable 00:34:26.344 07:30:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:34:26.344 ************************************ 00:34:26.344 START TEST spdk_target_abort 00:34:26.344 ************************************ 00:34:26.344 07:30:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1127 -- # spdk_target 00:34:26.344 07:30:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:34:26.344 07:30:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:5e:00.0 -b spdk_target 00:34:26.344 07:30:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:26.344 07:30:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:29.634 spdk_targetn1 00:34:29.634 07:30:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:29.634 07:30:33 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:29.634 07:30:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:29.634 07:30:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:29.634 [2024-11-20 07:30:33.474047] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:29.634 07:30:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:29.634 07:30:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:34:29.634 07:30:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:29.634 07:30:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:29.634 07:30:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:29.634 07:30:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:34:29.634 07:30:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:29.634 07:30:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:29.634 07:30:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:29.634 07:30:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:34:29.634 07:30:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:29.634 07:30:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:29.634 [2024-11-20 07:30:33.526376] tcp.c:1081:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:29.634 07:30:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:29.634 07:30:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:34:29.634 07:30:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:34:29.634 07:30:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:34:29.634 07:30:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:34:29.634 07:30:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:34:29.634 07:30:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:34:29.634 07:30:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:34:29.634 07:30:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:34:29.634 07:30:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:34:29.634 07:30:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:29.634 07:30:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:34:29.634 07:30:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:29.634 07:30:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:34:29.634 07:30:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:29.634 07:30:33 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:34:29.634 07:30:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:29.634 07:30:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:34:29.634 07:30:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:29.634 07:30:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:29.634 07:30:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:34:29.634 07:30:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:32.921 Initializing NVMe Controllers 00:34:32.921 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:34:32.921 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:34:32.921 Initialization complete. Launching workers. 
00:34:32.921 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 15861, failed: 0 00:34:32.921 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1222, failed to submit 14639 00:34:32.921 success 730, unsuccessful 492, failed 0 00:34:32.921 07:30:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:34:32.921 07:30:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:36.212 Initializing NVMe Controllers 00:34:36.212 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:34:36.212 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:34:36.212 Initialization complete. Launching workers. 00:34:36.212 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8612, failed: 0 00:34:36.212 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1259, failed to submit 7353 00:34:36.212 success 324, unsuccessful 935, failed 0 00:34:36.212 07:30:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:34:36.212 07:30:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:38.749 Initializing NVMe Controllers 00:34:38.749 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:34:38.749 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:34:38.749 Initialization complete. Launching workers. 
00:34:38.749 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 37812, failed: 0 00:34:38.749 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2846, failed to submit 34966 00:34:38.749 success 594, unsuccessful 2252, failed 0 00:34:38.749 07:30:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:34:38.749 07:30:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:38.749 07:30:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:38.749 07:30:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:38.749 07:30:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:34:38.749 07:30:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:38.749 07:30:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:40.127 07:30:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:40.127 07:30:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 1462556 00:34:40.127 07:30:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # '[' -z 1462556 ']' 00:34:40.127 07:30:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # kill -0 1462556 00:34:40.127 07:30:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@957 -- # uname 00:34:40.127 07:30:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:34:40.127 07:30:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1462556 00:34:40.127 07:30:44 
nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:34:40.127 07:30:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:34:40.127 07:30:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1462556' 00:34:40.127 killing process with pid 1462556 00:34:40.127 07:30:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@971 -- # kill 1462556 00:34:40.127 07:30:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@976 -- # wait 1462556 00:34:40.387 00:34:40.387 real 0m14.099s 00:34:40.387 user 0m53.733s 00:34:40.387 sys 0m2.599s 00:34:40.387 07:30:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1128 -- # xtrace_disable 00:34:40.387 07:30:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:40.387 ************************************ 00:34:40.387 END TEST spdk_target_abort 00:34:40.387 ************************************ 00:34:40.387 07:30:44 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:34:40.387 07:30:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:34:40.387 07:30:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@1109 -- # xtrace_disable 00:34:40.387 07:30:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:34:40.387 ************************************ 00:34:40.387 START TEST kernel_target_abort 00:34:40.387 ************************************ 00:34:40.387 07:30:44 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1127 -- # kernel_target 00:34:40.387 07:30:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:34:40.387 07:30:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:34:40.387 07:30:44 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:40.387 07:30:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:40.387 07:30:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:40.387 07:30:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:40.387 07:30:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:40.387 07:30:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:40.387 07:30:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:40.387 07:30:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:40.387 07:30:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:40.387 07:30:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:34:40.387 07:30:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:34:40.387 07:30:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:34:40.387 07:30:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:34:40.387 07:30:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:34:40.388 07:30:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:34:40.388 07:30:44 nvmf_abort_qd_sizes.kernel_target_abort -- 
nvmf/common.sh@667 -- # local block nvme 00:34:40.388 07:30:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:34:40.388 07:30:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:34:40.388 07:30:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:34:40.388 07:30:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:34:43.677 Waiting for block devices as requested 00:34:43.677 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:34:43.677 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:34:43.677 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:34:43.677 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:34:43.677 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:34:43.677 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:34:43.677 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:34:43.677 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:34:43.677 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:34:43.935 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:34:43.935 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:34:43.935 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:34:44.194 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:34:44.194 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:34:44.194 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:34:44.453 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:34:44.453 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:34:44.453 07:30:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:34:44.453 07:30:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:34:44.453 07:30:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:34:44.453 07:30:48 
nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:34:44.453 07:30:48 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:34:44.453 07:30:48 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:34:44.453 07:30:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:34:44.453 07:30:48 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:34:44.453 07:30:48 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:34:44.453 No valid GPT data, bailing 00:34:44.453 07:30:48 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:34:44.453 07:30:48 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:34:44.453 07:30:48 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:34:44.453 07:30:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:34:44.453 07:30:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:34:44.453 07:30:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:34:44.453 07:30:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:34:44.712 07:30:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:34:44.713 07:30:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:34:44.713 07:30:49 nvmf_abort_qd_sizes.kernel_target_abort -- 
nvmf/common.sh@695 -- # echo 1 00:34:44.713 07:30:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:34:44.713 07:30:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:34:44.713 07:30:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:34:44.713 07:30:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:34:44.713 07:30:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:34:44.713 07:30:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:34:44.713 07:30:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:34:44.713 07:30:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:34:44.713 00:34:44.713 Discovery Log Number of Records 2, Generation counter 2 00:34:44.713 =====Discovery Log Entry 0====== 00:34:44.713 trtype: tcp 00:34:44.713 adrfam: ipv4 00:34:44.713 subtype: current discovery subsystem 00:34:44.713 treq: not specified, sq flow control disable supported 00:34:44.713 portid: 1 00:34:44.713 trsvcid: 4420 00:34:44.713 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:34:44.713 traddr: 10.0.0.1 00:34:44.713 eflags: none 00:34:44.713 sectype: none 00:34:44.713 =====Discovery Log Entry 1====== 00:34:44.713 trtype: tcp 00:34:44.713 adrfam: ipv4 00:34:44.713 subtype: nvme subsystem 00:34:44.713 treq: not specified, sq flow control disable supported 00:34:44.713 portid: 1 00:34:44.713 trsvcid: 4420 00:34:44.713 subnqn: nqn.2016-06.io.spdk:testnqn 00:34:44.713 traddr: 10.0.0.1 00:34:44.713 eflags: none 00:34:44.713 sectype: none 00:34:44.713 07:30:49 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:34:44.713 07:30:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:34:44.713 07:30:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:34:44.713 07:30:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:34:44.713 07:30:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:34:44.713 07:30:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:34:44.713 07:30:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:34:44.713 07:30:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:34:44.713 07:30:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:34:44.713 07:30:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:44.713 07:30:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:34:44.713 07:30:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:44.713 07:30:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:34:44.713 07:30:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:44.713 07:30:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:34:44.713 07:30:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for 
r in trtype adrfam traddr trsvcid subnqn 00:34:44.713 07:30:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:34:44.713 07:30:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:44.713 07:30:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:44.713 07:30:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:34:44.713 07:30:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:48.018 Initializing NVMe Controllers 00:34:48.018 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:34:48.018 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:34:48.018 Initialization complete. Launching workers. 
00:34:48.018 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 91778, failed: 0 00:34:48.018 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 91778, failed to submit 0 00:34:48.018 success 0, unsuccessful 91778, failed 0 00:34:48.018 07:30:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:34:48.018 07:30:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:51.302 Initializing NVMe Controllers 00:34:51.302 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:34:51.302 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:34:51.302 Initialization complete. Launching workers. 00:34:51.302 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 146783, failed: 0 00:34:51.302 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 36738, failed to submit 110045 00:34:51.302 success 0, unsuccessful 36738, failed 0 00:34:51.302 07:30:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:34:51.302 07:30:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:54.591 Initializing NVMe Controllers 00:34:54.591 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:34:54.591 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:34:54.591 Initialization complete. Launching workers. 
00:34:54.591 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 138674, failed: 0 00:34:54.591 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 34746, failed to submit 103928 00:34:54.591 success 0, unsuccessful 34746, failed 0 00:34:54.591 07:30:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:34:54.591 07:30:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:34:54.591 07:30:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:34:54.591 07:30:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:34:54.591 07:30:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:34:54.591 07:30:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:34:54.591 07:30:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:34:54.591 07:30:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:34:54.591 07:30:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:34:54.591 07:30:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:34:57.129 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:34:57.129 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:34:57.129 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:34:57.129 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:34:57.129 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:34:57.129 
0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:34:57.129 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:34:57.129 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:34:57.129 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:34:57.129 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:34:57.129 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:34:57.129 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:34:57.129 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:34:57.129 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:34:57.129 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:34:57.129 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:34:57.697 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:34:57.956 00:34:57.956 real 0m17.504s 00:34:57.956 user 0m9.136s 00:34:57.956 sys 0m5.041s 00:34:57.956 07:31:02 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1128 -- # xtrace_disable 00:34:57.956 07:31:02 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:57.956 ************************************ 00:34:57.956 END TEST kernel_target_abort 00:34:57.956 ************************************ 00:34:57.956 07:31:02 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:34:57.956 07:31:02 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:34:57.956 07:31:02 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:57.956 07:31:02 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:34:57.956 07:31:02 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:57.956 07:31:02 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:34:57.956 07:31:02 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:57.956 07:31:02 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:57.956 rmmod nvme_tcp 00:34:57.956 rmmod nvme_fabrics 00:34:57.956 rmmod nvme_keyring 00:34:57.956 07:31:02 nvmf_abort_qd_sizes -- nvmf/common.sh@127 
-- # modprobe -v -r nvme-fabrics 00:34:57.956 07:31:02 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:34:57.956 07:31:02 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:34:57.956 07:31:02 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 1462556 ']' 00:34:57.956 07:31:02 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 1462556 00:34:57.956 07:31:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@952 -- # '[' -z 1462556 ']' 00:34:57.956 07:31:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@956 -- # kill -0 1462556 00:34:57.956 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (1462556) - No such process 00:34:57.956 07:31:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@979 -- # echo 'Process with pid 1462556 is not found' 00:34:57.956 Process with pid 1462556 is not found 00:34:57.956 07:31:02 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:34:57.956 07:31:02 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:35:01.246 Waiting for block devices as requested 00:35:01.246 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:35:01.246 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:35:01.246 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:35:01.246 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:35:01.246 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:35:01.246 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:35:01.246 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:35:01.246 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:35:01.506 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:35:01.506 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:35:01.506 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:35:01.506 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:35:01.765 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:35:01.765 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:35:01.765 
0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:35:02.025 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:35:02.025 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:35:02.025 07:31:06 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:02.025 07:31:06 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:02.025 07:31:06 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:35:02.025 07:31:06 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:35:02.025 07:31:06 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:02.025 07:31:06 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:35:02.025 07:31:06 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:02.025 07:31:06 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:02.025 07:31:06 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:02.025 07:31:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:02.025 07:31:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:04.566 07:31:08 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:04.566 00:35:04.566 real 0m48.255s 00:35:04.566 user 1m7.201s 00:35:04.566 sys 0m16.425s 00:35:04.566 07:31:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@1128 -- # xtrace_disable 00:35:04.566 07:31:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:04.566 ************************************ 00:35:04.566 END TEST nvmf_abort_qd_sizes 00:35:04.566 ************************************ 00:35:04.566 07:31:08 -- spdk/autotest.sh@288 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:35:04.566 07:31:08 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:35:04.566 07:31:08 -- common/autotest_common.sh@1109 -- # 
xtrace_disable 00:35:04.566 07:31:08 -- common/autotest_common.sh@10 -- # set +x 00:35:04.566 ************************************ 00:35:04.566 START TEST keyring_file 00:35:04.566 ************************************ 00:35:04.566 07:31:08 keyring_file -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:35:04.566 * Looking for test storage... 00:35:04.566 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:35:04.566 07:31:08 keyring_file -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:35:04.566 07:31:08 keyring_file -- common/autotest_common.sh@1691 -- # lcov --version 00:35:04.566 07:31:08 keyring_file -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:35:04.566 07:31:08 keyring_file -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:35:04.566 07:31:08 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:04.566 07:31:08 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:04.566 07:31:08 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:04.566 07:31:08 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:35:04.566 07:31:08 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:35:04.566 07:31:08 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:35:04.566 07:31:08 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:35:04.566 07:31:08 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:35:04.566 07:31:08 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:35:04.566 07:31:08 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:35:04.566 07:31:08 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:04.566 07:31:08 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:35:04.566 07:31:08 keyring_file -- scripts/common.sh@345 -- # : 1 00:35:04.566 07:31:08 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:04.566 07:31:08 keyring_file -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:35:04.566 07:31:08 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:35:04.566 07:31:08 keyring_file -- scripts/common.sh@353 -- # local d=1 00:35:04.566 07:31:08 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:04.566 07:31:08 keyring_file -- scripts/common.sh@355 -- # echo 1 00:35:04.566 07:31:08 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:35:04.566 07:31:08 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:35:04.566 07:31:08 keyring_file -- scripts/common.sh@353 -- # local d=2 00:35:04.566 07:31:08 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:04.566 07:31:08 keyring_file -- scripts/common.sh@355 -- # echo 2 00:35:04.566 07:31:08 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:35:04.566 07:31:08 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:04.566 07:31:08 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:04.566 07:31:08 keyring_file -- scripts/common.sh@368 -- # return 0 00:35:04.566 07:31:08 keyring_file -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:04.566 07:31:08 keyring_file -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:35:04.566 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:04.566 --rc genhtml_branch_coverage=1 00:35:04.566 --rc genhtml_function_coverage=1 00:35:04.566 --rc genhtml_legend=1 00:35:04.566 --rc geninfo_all_blocks=1 00:35:04.566 --rc geninfo_unexecuted_blocks=1 00:35:04.566 00:35:04.566 ' 00:35:04.566 07:31:08 keyring_file -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:35:04.566 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:04.566 --rc genhtml_branch_coverage=1 00:35:04.566 --rc genhtml_function_coverage=1 00:35:04.566 --rc genhtml_legend=1 00:35:04.566 --rc geninfo_all_blocks=1 00:35:04.566 --rc 
geninfo_unexecuted_blocks=1 00:35:04.566 00:35:04.566 ' 00:35:04.566 07:31:08 keyring_file -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:35:04.566 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:04.566 --rc genhtml_branch_coverage=1 00:35:04.566 --rc genhtml_function_coverage=1 00:35:04.566 --rc genhtml_legend=1 00:35:04.566 --rc geninfo_all_blocks=1 00:35:04.566 --rc geninfo_unexecuted_blocks=1 00:35:04.566 00:35:04.566 ' 00:35:04.566 07:31:08 keyring_file -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:35:04.566 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:04.566 --rc genhtml_branch_coverage=1 00:35:04.566 --rc genhtml_function_coverage=1 00:35:04.566 --rc genhtml_legend=1 00:35:04.566 --rc geninfo_all_blocks=1 00:35:04.566 --rc geninfo_unexecuted_blocks=1 00:35:04.566 00:35:04.566 ' 00:35:04.566 07:31:08 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:35:04.566 07:31:08 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:04.566 07:31:08 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:35:04.566 07:31:08 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:04.566 07:31:08 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:04.566 07:31:08 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:04.566 07:31:08 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:04.566 07:31:08 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:04.566 07:31:08 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:04.566 07:31:08 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:04.566 07:31:08 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:04.566 07:31:08 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:04.566 07:31:08 keyring_file -- 
nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:04.566 07:31:08 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:35:04.566 07:31:08 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:35:04.566 07:31:08 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:04.566 07:31:08 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:04.566 07:31:08 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:04.566 07:31:08 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:04.566 07:31:08 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:04.566 07:31:08 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:35:04.566 07:31:08 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:04.566 07:31:08 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:04.566 07:31:08 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:04.566 07:31:08 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:04.566 07:31:08 keyring_file -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:04.566 07:31:08 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:04.566 07:31:08 keyring_file -- paths/export.sh@5 -- # export PATH 00:35:04.567 07:31:08 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:04.567 07:31:08 keyring_file -- nvmf/common.sh@51 -- # : 0 00:35:04.567 07:31:08 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:04.567 07:31:08 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:04.567 07:31:08 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:04.567 07:31:08 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:04.567 07:31:08 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:04.567 07:31:08 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 
00:35:04.567 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:04.567 07:31:08 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:04.567 07:31:08 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:04.567 07:31:08 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:04.567 07:31:08 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:35:04.567 07:31:08 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:35:04.567 07:31:08 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:35:04.567 07:31:08 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:35:04.567 07:31:08 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:35:04.567 07:31:08 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:35:04.567 07:31:08 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:35:04.567 07:31:08 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:35:04.567 07:31:08 keyring_file -- keyring/common.sh@17 -- # name=key0 00:35:04.567 07:31:08 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:35:04.567 07:31:08 keyring_file -- keyring/common.sh@17 -- # digest=0 00:35:04.567 07:31:08 keyring_file -- keyring/common.sh@18 -- # mktemp 00:35:04.567 07:31:08 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.WzfVWqf6OY 00:35:04.567 07:31:08 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:35:04.567 07:31:08 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:35:04.567 07:31:08 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:35:04.567 07:31:08 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:35:04.567 07:31:08 keyring_file -- nvmf/common.sh@732 
-- # key=00112233445566778899aabbccddeeff 00:35:04.567 07:31:08 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:35:04.567 07:31:08 keyring_file -- nvmf/common.sh@733 -- # python - 00:35:04.567 07:31:08 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.WzfVWqf6OY 00:35:04.567 07:31:08 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.WzfVWqf6OY 00:35:04.567 07:31:08 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.WzfVWqf6OY 00:35:04.567 07:31:08 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:35:04.567 07:31:08 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:35:04.567 07:31:08 keyring_file -- keyring/common.sh@17 -- # name=key1 00:35:04.567 07:31:08 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:35:04.567 07:31:08 keyring_file -- keyring/common.sh@17 -- # digest=0 00:35:04.567 07:31:08 keyring_file -- keyring/common.sh@18 -- # mktemp 00:35:04.567 07:31:08 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.uRIhRV2bzC 00:35:04.567 07:31:08 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:35:04.567 07:31:08 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:35:04.567 07:31:08 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:35:04.567 07:31:08 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:35:04.567 07:31:08 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:35:04.567 07:31:08 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:35:04.567 07:31:08 keyring_file -- nvmf/common.sh@733 -- # python - 00:35:04.567 07:31:08 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.uRIhRV2bzC 00:35:04.567 07:31:08 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.uRIhRV2bzC 00:35:04.567 07:31:08 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.uRIhRV2bzC 
00:35:04.567 07:31:08 keyring_file -- keyring/file.sh@30 -- # tgtpid=1471197 00:35:04.567 07:31:08 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:35:04.567 07:31:08 keyring_file -- keyring/file.sh@32 -- # waitforlisten 1471197 00:35:04.567 07:31:08 keyring_file -- common/autotest_common.sh@833 -- # '[' -z 1471197 ']' 00:35:04.567 07:31:08 keyring_file -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:04.567 07:31:08 keyring_file -- common/autotest_common.sh@838 -- # local max_retries=100 00:35:04.567 07:31:08 keyring_file -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:04.567 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:04.567 07:31:08 keyring_file -- common/autotest_common.sh@842 -- # xtrace_disable 00:35:04.567 07:31:08 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:04.567 [2024-11-20 07:31:09.028035] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 
00:35:04.567 [2024-11-20 07:31:09.028083] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1471197 ] 00:35:04.567 [2024-11-20 07:31:09.104850] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:04.826 [2024-11-20 07:31:09.148174] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:04.826 07:31:09 keyring_file -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:35:04.826 07:31:09 keyring_file -- common/autotest_common.sh@866 -- # return 0 00:35:04.826 07:31:09 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:35:04.826 07:31:09 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:04.826 07:31:09 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:04.826 [2024-11-20 07:31:09.364004] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:05.086 null0 00:35:05.086 [2024-11-20 07:31:09.396052] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:35:05.086 [2024-11-20 07:31:09.396423] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:35:05.086 07:31:09 keyring_file -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:05.086 07:31:09 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:35:05.086 07:31:09 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:35:05.086 07:31:09 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:35:05.086 07:31:09 keyring_file -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:35:05.086 07:31:09 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 
00:35:05.086 07:31:09 keyring_file -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:35:05.086 07:31:09 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:05.086 07:31:09 keyring_file -- common/autotest_common.sh@653 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:35:05.086 07:31:09 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:05.086 07:31:09 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:05.086 [2024-11-20 07:31:09.424119] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:35:05.086 request: 00:35:05.086 { 00:35:05.086 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:35:05.086 "secure_channel": false, 00:35:05.086 "listen_address": { 00:35:05.086 "trtype": "tcp", 00:35:05.086 "traddr": "127.0.0.1", 00:35:05.086 "trsvcid": "4420" 00:35:05.086 }, 00:35:05.086 "method": "nvmf_subsystem_add_listener", 00:35:05.086 "req_id": 1 00:35:05.086 } 00:35:05.086 Got JSON-RPC error response 00:35:05.086 response: 00:35:05.086 { 00:35:05.086 "code": -32602, 00:35:05.086 "message": "Invalid parameters" 00:35:05.086 } 00:35:05.086 07:31:09 keyring_file -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:35:05.086 07:31:09 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:35:05.086 07:31:09 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:35:05.086 07:31:09 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:35:05.086 07:31:09 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:35:05.086 07:31:09 keyring_file -- keyring/file.sh@47 -- # bperfpid=1471206 00:35:05.086 07:31:09 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:35:05.086 07:31:09 keyring_file -- keyring/file.sh@49 -- # waitforlisten 1471206 /var/tmp/bperf.sock 00:35:05.086 07:31:09 
keyring_file -- common/autotest_common.sh@833 -- # '[' -z 1471206 ']' 00:35:05.086 07:31:09 keyring_file -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:05.086 07:31:09 keyring_file -- common/autotest_common.sh@838 -- # local max_retries=100 00:35:05.086 07:31:09 keyring_file -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:05.086 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:05.086 07:31:09 keyring_file -- common/autotest_common.sh@842 -- # xtrace_disable 00:35:05.086 07:31:09 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:05.086 [2024-11-20 07:31:09.476605] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 00:35:05.086 [2024-11-20 07:31:09.476656] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1471206 ] 00:35:05.086 [2024-11-20 07:31:09.550914] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:05.086 [2024-11-20 07:31:09.593876] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:05.345 07:31:09 keyring_file -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:35:05.345 07:31:09 keyring_file -- common/autotest_common.sh@866 -- # return 0 00:35:05.345 07:31:09 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.WzfVWqf6OY 00:35:05.345 07:31:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.WzfVWqf6OY 00:35:05.345 07:31:09 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.uRIhRV2bzC 00:35:05.345 07:31:09 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.uRIhRV2bzC 00:35:05.605 07:31:10 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:35:05.605 07:31:10 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:35:05.605 07:31:10 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:05.605 07:31:10 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:05.605 07:31:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:05.864 07:31:10 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.WzfVWqf6OY == \/\t\m\p\/\t\m\p\.\W\z\f\V\W\q\f\6\O\Y ]] 00:35:05.864 07:31:10 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:35:05.864 07:31:10 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:35:05.864 07:31:10 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:05.864 07:31:10 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:05.864 07:31:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:06.123 07:31:10 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.uRIhRV2bzC == \/\t\m\p\/\t\m\p\.\u\R\I\h\R\V\2\b\z\C ]] 00:35:06.123 07:31:10 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:35:06.123 07:31:10 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:06.123 07:31:10 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:06.123 07:31:10 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:06.123 07:31:10 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:06.123 07:31:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 
00:35:06.123 07:31:10 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:35:06.123 07:31:10 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:35:06.123 07:31:10 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:35:06.123 07:31:10 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:06.123 07:31:10 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:06.383 07:31:10 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:06.383 07:31:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:06.383 07:31:10 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:35:06.383 07:31:10 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:06.383 07:31:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:06.641 [2024-11-20 07:31:11.041849] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:35:06.641 nvme0n1 00:35:06.641 07:31:11 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:35:06.641 07:31:11 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:06.641 07:31:11 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:06.641 07:31:11 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:06.641 07:31:11 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:06.641 07:31:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
keyring_get_keys 00:35:06.899 07:31:11 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:35:06.899 07:31:11 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:35:06.899 07:31:11 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:35:06.899 07:31:11 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:06.900 07:31:11 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:06.900 07:31:11 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:06.900 07:31:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:07.159 07:31:11 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:35:07.159 07:31:11 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:07.159 Running I/O for 1 seconds... 00:35:08.095 18770.00 IOPS, 73.32 MiB/s 00:35:08.095 Latency(us) 00:35:08.095 [2024-11-20T06:31:12.651Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:08.095 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:35:08.095 nvme0n1 : 1.00 18816.41 73.50 0.00 0.00 6789.98 2735.42 17210.32 00:35:08.095 [2024-11-20T06:31:12.651Z] =================================================================================================================== 00:35:08.095 [2024-11-20T06:31:12.651Z] Total : 18816.41 73.50 0.00 0.00 6789.98 2735.42 17210.32 00:35:08.095 { 00:35:08.095 "results": [ 00:35:08.095 { 00:35:08.095 "job": "nvme0n1", 00:35:08.095 "core_mask": "0x2", 00:35:08.095 "workload": "randrw", 00:35:08.095 "percentage": 50, 00:35:08.095 "status": "finished", 00:35:08.095 "queue_depth": 128, 00:35:08.095 "io_size": 4096, 00:35:08.095 "runtime": 1.004389, 00:35:08.095 "iops": 18816.414755637506, 00:35:08.095 "mibps": 73.50162013920901, 
00:35:08.095 "io_failed": 0, 00:35:08.095 "io_timeout": 0, 00:35:08.095 "avg_latency_us": 6789.977430413846, 00:35:08.095 "min_latency_us": 2735.4156521739133, 00:35:08.095 "max_latency_us": 17210.32347826087 00:35:08.095 } 00:35:08.095 ], 00:35:08.095 "core_count": 1 00:35:08.095 } 00:35:08.354 07:31:12 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:35:08.354 07:31:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:35:08.354 07:31:12 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:35:08.354 07:31:12 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:08.354 07:31:12 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:08.354 07:31:12 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:08.354 07:31:12 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:08.354 07:31:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:08.614 07:31:13 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:35:08.614 07:31:13 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:35:08.614 07:31:13 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:35:08.614 07:31:13 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:08.614 07:31:13 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:08.614 07:31:13 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:08.614 07:31:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:08.873 07:31:13 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:35:08.873 07:31:13 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:35:08.873 07:31:13 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:35:08.873 07:31:13 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:35:08.873 07:31:13 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:35:08.873 07:31:13 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:08.873 07:31:13 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:35:08.873 07:31:13 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:08.873 07:31:13 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:35:08.873 07:31:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:35:09.132 [2024-11-20 07:31:13.448464] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:35:09.132 [2024-11-20 07:31:13.448592] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b11f0 (107): Transport endpoint is not connected 00:35:09.132 [2024-11-20 07:31:13.449587] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b11f0 (9): Bad file descriptor 00:35:09.132 [2024-11-20 07:31:13.450588] 
nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:35:09.132 [2024-11-20 07:31:13.450599] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:35:09.132 [2024-11-20 07:31:13.450606] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:35:09.132 [2024-11-20 07:31:13.450615] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 00:35:09.132 request: 00:35:09.132 { 00:35:09.133 "name": "nvme0", 00:35:09.133 "trtype": "tcp", 00:35:09.133 "traddr": "127.0.0.1", 00:35:09.133 "adrfam": "ipv4", 00:35:09.133 "trsvcid": "4420", 00:35:09.133 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:09.133 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:09.133 "prchk_reftag": false, 00:35:09.133 "prchk_guard": false, 00:35:09.133 "hdgst": false, 00:35:09.133 "ddgst": false, 00:35:09.133 "psk": "key1", 00:35:09.133 "allow_unrecognized_csi": false, 00:35:09.133 "method": "bdev_nvme_attach_controller", 00:35:09.133 "req_id": 1 00:35:09.133 } 00:35:09.133 Got JSON-RPC error response 00:35:09.133 response: 00:35:09.133 { 00:35:09.133 "code": -5, 00:35:09.133 "message": "Input/output error" 00:35:09.133 } 00:35:09.133 07:31:13 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:35:09.133 07:31:13 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:35:09.133 07:31:13 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:35:09.133 07:31:13 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:35:09.133 07:31:13 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:35:09.133 07:31:13 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:09.133 07:31:13 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:09.133 07:31:13 keyring_file -- keyring/common.sh@10 -- # 
bperf_cmd keyring_get_keys 00:35:09.133 07:31:13 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:09.133 07:31:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:09.392 07:31:13 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:35:09.392 07:31:13 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:35:09.392 07:31:13 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:09.392 07:31:13 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:35:09.392 07:31:13 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:09.392 07:31:13 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:09.392 07:31:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:09.392 07:31:13 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:35:09.392 07:31:13 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:35:09.392 07:31:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:35:09.652 07:31:14 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:35:09.652 07:31:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:35:09.911 07:31:14 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:35:09.911 07:31:14 keyring_file -- keyring/file.sh@78 -- # jq length 00:35:09.911 07:31:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:10.170 07:31:14 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 
)) 00:35:10.170 07:31:14 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.WzfVWqf6OY 00:35:10.170 07:31:14 keyring_file -- keyring/file.sh@82 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.WzfVWqf6OY 00:35:10.170 07:31:14 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:35:10.170 07:31:14 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.WzfVWqf6OY 00:35:10.170 07:31:14 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:35:10.170 07:31:14 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:10.170 07:31:14 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:35:10.170 07:31:14 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:10.170 07:31:14 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.WzfVWqf6OY 00:35:10.170 07:31:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.WzfVWqf6OY 00:35:10.171 [2024-11-20 07:31:14.680826] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.WzfVWqf6OY': 0100660 00:35:10.171 [2024-11-20 07:31:14.680852] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:35:10.171 request: 00:35:10.171 { 00:35:10.171 "name": "key0", 00:35:10.171 "path": "/tmp/tmp.WzfVWqf6OY", 00:35:10.171 "method": "keyring_file_add_key", 00:35:10.171 "req_id": 1 00:35:10.171 } 00:35:10.171 Got JSON-RPC error response 00:35:10.171 response: 00:35:10.171 { 00:35:10.171 "code": -1, 00:35:10.171 "message": "Operation not permitted" 00:35:10.171 } 00:35:10.171 07:31:14 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:35:10.171 07:31:14 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:35:10.171 07:31:14 
keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:35:10.171 07:31:14 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:35:10.171 07:31:14 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.WzfVWqf6OY 00:35:10.171 07:31:14 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.WzfVWqf6OY 00:35:10.171 07:31:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.WzfVWqf6OY 00:35:10.429 07:31:14 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.WzfVWqf6OY 00:35:10.429 07:31:14 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:35:10.429 07:31:14 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:10.429 07:31:14 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:10.429 07:31:14 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:10.429 07:31:14 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:10.429 07:31:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:10.688 07:31:15 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:35:10.688 07:31:15 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:10.688 07:31:15 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:35:10.688 07:31:15 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:10.688 07:31:15 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:35:10.688 07:31:15 keyring_file -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:10.688 07:31:15 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:35:10.688 07:31:15 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:10.689 07:31:15 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:10.689 07:31:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:10.948 [2024-11-20 07:31:15.258374] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.WzfVWqf6OY': No such file or directory 00:35:10.948 [2024-11-20 07:31:15.258399] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:35:10.948 [2024-11-20 07:31:15.258415] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:35:10.948 [2024-11-20 07:31:15.258423] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:35:10.948 [2024-11-20 07:31:15.258431] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:35:10.948 [2024-11-20 07:31:15.258437] bdev_nvme.c:6669:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:35:10.948 request: 00:35:10.948 { 00:35:10.948 "name": "nvme0", 00:35:10.948 "trtype": "tcp", 00:35:10.948 "traddr": "127.0.0.1", 00:35:10.948 "adrfam": "ipv4", 00:35:10.948 "trsvcid": "4420", 00:35:10.948 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:10.948 "hostnqn": 
"nqn.2016-06.io.spdk:host0", 00:35:10.948 "prchk_reftag": false, 00:35:10.948 "prchk_guard": false, 00:35:10.948 "hdgst": false, 00:35:10.948 "ddgst": false, 00:35:10.948 "psk": "key0", 00:35:10.948 "allow_unrecognized_csi": false, 00:35:10.948 "method": "bdev_nvme_attach_controller", 00:35:10.948 "req_id": 1 00:35:10.948 } 00:35:10.948 Got JSON-RPC error response 00:35:10.948 response: 00:35:10.948 { 00:35:10.948 "code": -19, 00:35:10.948 "message": "No such device" 00:35:10.949 } 00:35:10.949 07:31:15 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:35:10.949 07:31:15 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:35:10.949 07:31:15 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:35:10.949 07:31:15 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:35:10.949 07:31:15 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:35:10.949 07:31:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:35:10.949 07:31:15 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:35:10.949 07:31:15 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:35:10.949 07:31:15 keyring_file -- keyring/common.sh@17 -- # name=key0 00:35:10.949 07:31:15 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:35:10.949 07:31:15 keyring_file -- keyring/common.sh@17 -- # digest=0 00:35:10.949 07:31:15 keyring_file -- keyring/common.sh@18 -- # mktemp 00:35:10.949 07:31:15 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.E2ulCDUJlr 00:35:10.949 07:31:15 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:35:10.949 07:31:15 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:35:10.949 07:31:15 keyring_file -- 
nvmf/common.sh@730 -- # local prefix key digest 00:35:10.949 07:31:15 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:35:10.949 07:31:15 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:35:10.949 07:31:15 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:35:10.949 07:31:15 keyring_file -- nvmf/common.sh@733 -- # python - 00:35:11.208 07:31:15 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.E2ulCDUJlr 00:35:11.208 07:31:15 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.E2ulCDUJlr 00:35:11.208 07:31:15 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.E2ulCDUJlr 00:35:11.208 07:31:15 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.E2ulCDUJlr 00:35:11.208 07:31:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.E2ulCDUJlr 00:35:11.208 07:31:15 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:11.208 07:31:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:11.467 nvme0n1 00:35:11.467 07:31:15 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:35:11.467 07:31:15 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:11.467 07:31:15 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:11.467 07:31:15 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:11.467 07:31:15 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:11.467 07:31:15 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:11.726 07:31:16 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:35:11.726 07:31:16 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:35:11.726 07:31:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:35:11.985 07:31:16 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:35:11.985 07:31:16 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:35:11.985 07:31:16 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:11.985 07:31:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:11.985 07:31:16 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:12.245 07:31:16 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:35:12.245 07:31:16 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:35:12.245 07:31:16 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:12.245 07:31:16 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:12.245 07:31:16 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:12.245 07:31:16 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:12.245 07:31:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:12.245 07:31:16 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:35:12.245 07:31:16 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:35:12.245 07:31:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
bdev_nvme_detach_controller nvme0 00:35:12.504 07:31:16 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:35:12.504 07:31:16 keyring_file -- keyring/file.sh@105 -- # jq length 00:35:12.504 07:31:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:12.762 07:31:17 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:35:12.762 07:31:17 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.E2ulCDUJlr 00:35:12.762 07:31:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.E2ulCDUJlr 00:35:13.021 07:31:17 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.uRIhRV2bzC 00:35:13.022 07:31:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.uRIhRV2bzC 00:35:13.022 07:31:17 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:13.022 07:31:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:13.280 nvme0n1 00:35:13.280 07:31:17 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:35:13.280 07:31:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:35:13.539 07:31:18 keyring_file -- keyring/file.sh@113 -- # config='{ 00:35:13.539 "subsystems": [ 00:35:13.539 { 00:35:13.539 "subsystem": 
"keyring", 00:35:13.539 "config": [ 00:35:13.539 { 00:35:13.539 "method": "keyring_file_add_key", 00:35:13.539 "params": { 00:35:13.539 "name": "key0", 00:35:13.539 "path": "/tmp/tmp.E2ulCDUJlr" 00:35:13.539 } 00:35:13.539 }, 00:35:13.539 { 00:35:13.539 "method": "keyring_file_add_key", 00:35:13.539 "params": { 00:35:13.539 "name": "key1", 00:35:13.539 "path": "/tmp/tmp.uRIhRV2bzC" 00:35:13.539 } 00:35:13.539 } 00:35:13.539 ] 00:35:13.539 }, 00:35:13.539 { 00:35:13.539 "subsystem": "iobuf", 00:35:13.539 "config": [ 00:35:13.539 { 00:35:13.539 "method": "iobuf_set_options", 00:35:13.539 "params": { 00:35:13.539 "small_pool_count": 8192, 00:35:13.539 "large_pool_count": 1024, 00:35:13.539 "small_bufsize": 8192, 00:35:13.539 "large_bufsize": 135168, 00:35:13.539 "enable_numa": false 00:35:13.539 } 00:35:13.539 } 00:35:13.539 ] 00:35:13.539 }, 00:35:13.539 { 00:35:13.539 "subsystem": "sock", 00:35:13.539 "config": [ 00:35:13.539 { 00:35:13.539 "method": "sock_set_default_impl", 00:35:13.539 "params": { 00:35:13.539 "impl_name": "posix" 00:35:13.539 } 00:35:13.539 }, 00:35:13.539 { 00:35:13.539 "method": "sock_impl_set_options", 00:35:13.539 "params": { 00:35:13.539 "impl_name": "ssl", 00:35:13.539 "recv_buf_size": 4096, 00:35:13.539 "send_buf_size": 4096, 00:35:13.539 "enable_recv_pipe": true, 00:35:13.539 "enable_quickack": false, 00:35:13.539 "enable_placement_id": 0, 00:35:13.539 "enable_zerocopy_send_server": true, 00:35:13.539 "enable_zerocopy_send_client": false, 00:35:13.539 "zerocopy_threshold": 0, 00:35:13.539 "tls_version": 0, 00:35:13.539 "enable_ktls": false 00:35:13.539 } 00:35:13.539 }, 00:35:13.539 { 00:35:13.539 "method": "sock_impl_set_options", 00:35:13.539 "params": { 00:35:13.539 "impl_name": "posix", 00:35:13.539 "recv_buf_size": 2097152, 00:35:13.539 "send_buf_size": 2097152, 00:35:13.539 "enable_recv_pipe": true, 00:35:13.539 "enable_quickack": false, 00:35:13.539 "enable_placement_id": 0, 00:35:13.539 "enable_zerocopy_send_server": true, 
00:35:13.539 "enable_zerocopy_send_client": false, 00:35:13.539 "zerocopy_threshold": 0, 00:35:13.539 "tls_version": 0, 00:35:13.539 "enable_ktls": false 00:35:13.539 } 00:35:13.539 } 00:35:13.539 ] 00:35:13.539 }, 00:35:13.539 { 00:35:13.539 "subsystem": "vmd", 00:35:13.539 "config": [] 00:35:13.539 }, 00:35:13.539 { 00:35:13.539 "subsystem": "accel", 00:35:13.539 "config": [ 00:35:13.539 { 00:35:13.539 "method": "accel_set_options", 00:35:13.539 "params": { 00:35:13.539 "small_cache_size": 128, 00:35:13.539 "large_cache_size": 16, 00:35:13.539 "task_count": 2048, 00:35:13.539 "sequence_count": 2048, 00:35:13.539 "buf_count": 2048 00:35:13.539 } 00:35:13.539 } 00:35:13.539 ] 00:35:13.539 }, 00:35:13.539 { 00:35:13.539 "subsystem": "bdev", 00:35:13.539 "config": [ 00:35:13.539 { 00:35:13.539 "method": "bdev_set_options", 00:35:13.539 "params": { 00:35:13.539 "bdev_io_pool_size": 65535, 00:35:13.539 "bdev_io_cache_size": 256, 00:35:13.539 "bdev_auto_examine": true, 00:35:13.539 "iobuf_small_cache_size": 128, 00:35:13.539 "iobuf_large_cache_size": 16 00:35:13.539 } 00:35:13.539 }, 00:35:13.539 { 00:35:13.539 "method": "bdev_raid_set_options", 00:35:13.539 "params": { 00:35:13.539 "process_window_size_kb": 1024, 00:35:13.539 "process_max_bandwidth_mb_sec": 0 00:35:13.539 } 00:35:13.539 }, 00:35:13.539 { 00:35:13.539 "method": "bdev_iscsi_set_options", 00:35:13.539 "params": { 00:35:13.539 "timeout_sec": 30 00:35:13.539 } 00:35:13.539 }, 00:35:13.539 { 00:35:13.539 "method": "bdev_nvme_set_options", 00:35:13.539 "params": { 00:35:13.539 "action_on_timeout": "none", 00:35:13.539 "timeout_us": 0, 00:35:13.539 "timeout_admin_us": 0, 00:35:13.539 "keep_alive_timeout_ms": 10000, 00:35:13.539 "arbitration_burst": 0, 00:35:13.540 "low_priority_weight": 0, 00:35:13.540 "medium_priority_weight": 0, 00:35:13.540 "high_priority_weight": 0, 00:35:13.540 "nvme_adminq_poll_period_us": 10000, 00:35:13.540 "nvme_ioq_poll_period_us": 0, 00:35:13.540 "io_queue_requests": 512, 
00:35:13.540 "delay_cmd_submit": true, 00:35:13.540 "transport_retry_count": 4, 00:35:13.540 "bdev_retry_count": 3, 00:35:13.540 "transport_ack_timeout": 0, 00:35:13.540 "ctrlr_loss_timeout_sec": 0, 00:35:13.540 "reconnect_delay_sec": 0, 00:35:13.540 "fast_io_fail_timeout_sec": 0, 00:35:13.540 "disable_auto_failback": false, 00:35:13.540 "generate_uuids": false, 00:35:13.540 "transport_tos": 0, 00:35:13.540 "nvme_error_stat": false, 00:35:13.540 "rdma_srq_size": 0, 00:35:13.540 "io_path_stat": false, 00:35:13.540 "allow_accel_sequence": false, 00:35:13.540 "rdma_max_cq_size": 0, 00:35:13.540 "rdma_cm_event_timeout_ms": 0, 00:35:13.540 "dhchap_digests": [ 00:35:13.540 "sha256", 00:35:13.540 "sha384", 00:35:13.540 "sha512" 00:35:13.540 ], 00:35:13.540 "dhchap_dhgroups": [ 00:35:13.540 "null", 00:35:13.540 "ffdhe2048", 00:35:13.540 "ffdhe3072", 00:35:13.540 "ffdhe4096", 00:35:13.540 "ffdhe6144", 00:35:13.540 "ffdhe8192" 00:35:13.540 ] 00:35:13.540 } 00:35:13.540 }, 00:35:13.540 { 00:35:13.540 "method": "bdev_nvme_attach_controller", 00:35:13.540 "params": { 00:35:13.540 "name": "nvme0", 00:35:13.540 "trtype": "TCP", 00:35:13.540 "adrfam": "IPv4", 00:35:13.540 "traddr": "127.0.0.1", 00:35:13.540 "trsvcid": "4420", 00:35:13.540 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:13.540 "prchk_reftag": false, 00:35:13.540 "prchk_guard": false, 00:35:13.540 "ctrlr_loss_timeout_sec": 0, 00:35:13.540 "reconnect_delay_sec": 0, 00:35:13.540 "fast_io_fail_timeout_sec": 0, 00:35:13.540 "psk": "key0", 00:35:13.540 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:13.540 "hdgst": false, 00:35:13.540 "ddgst": false, 00:35:13.540 "multipath": "multipath" 00:35:13.540 } 00:35:13.540 }, 00:35:13.540 { 00:35:13.540 "method": "bdev_nvme_set_hotplug", 00:35:13.540 "params": { 00:35:13.540 "period_us": 100000, 00:35:13.540 "enable": false 00:35:13.540 } 00:35:13.540 }, 00:35:13.540 { 00:35:13.540 "method": "bdev_wait_for_examine" 00:35:13.540 } 00:35:13.540 ] 00:35:13.540 }, 00:35:13.540 { 
00:35:13.540 "subsystem": "nbd", 00:35:13.540 "config": [] 00:35:13.540 } 00:35:13.540 ] 00:35:13.540 }' 00:35:13.540 07:31:18 keyring_file -- keyring/file.sh@115 -- # killprocess 1471206 00:35:13.540 07:31:18 keyring_file -- common/autotest_common.sh@952 -- # '[' -z 1471206 ']' 00:35:13.540 07:31:18 keyring_file -- common/autotest_common.sh@956 -- # kill -0 1471206 00:35:13.540 07:31:18 keyring_file -- common/autotest_common.sh@957 -- # uname 00:35:13.540 07:31:18 keyring_file -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:35:13.540 07:31:18 keyring_file -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1471206 00:35:13.800 07:31:18 keyring_file -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:35:13.800 07:31:18 keyring_file -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:35:13.800 07:31:18 keyring_file -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1471206' 00:35:13.800 killing process with pid 1471206 00:35:13.800 07:31:18 keyring_file -- common/autotest_common.sh@971 -- # kill 1471206 00:35:13.800 Received shutdown signal, test time was about 1.000000 seconds 00:35:13.800 00:35:13.800 Latency(us) 00:35:13.800 [2024-11-20T06:31:18.356Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:13.800 [2024-11-20T06:31:18.356Z] =================================================================================================================== 00:35:13.800 [2024-11-20T06:31:18.356Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:13.800 07:31:18 keyring_file -- common/autotest_common.sh@976 -- # wait 1471206 00:35:13.800 07:31:18 keyring_file -- keyring/file.sh@118 -- # bperfpid=1472723 00:35:13.800 07:31:18 keyring_file -- keyring/file.sh@120 -- # waitforlisten 1472723 /var/tmp/bperf.sock 00:35:13.800 07:31:18 keyring_file -- common/autotest_common.sh@833 -- # '[' -z 1472723 ']' 00:35:13.800 07:31:18 keyring_file -- common/autotest_common.sh@837 -- # local 
rpc_addr=/var/tmp/bperf.sock 00:35:13.800 07:31:18 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:35:13.800 07:31:18 keyring_file -- common/autotest_common.sh@838 -- # local max_retries=100 00:35:13.800 07:31:18 keyring_file -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:13.800 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:13.800 07:31:18 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:35:13.800 "subsystems": [ 00:35:13.800 { 00:35:13.800 "subsystem": "keyring", 00:35:13.800 "config": [ 00:35:13.800 { 00:35:13.800 "method": "keyring_file_add_key", 00:35:13.800 "params": { 00:35:13.800 "name": "key0", 00:35:13.800 "path": "/tmp/tmp.E2ulCDUJlr" 00:35:13.800 } 00:35:13.800 }, 00:35:13.800 { 00:35:13.800 "method": "keyring_file_add_key", 00:35:13.800 "params": { 00:35:13.800 "name": "key1", 00:35:13.800 "path": "/tmp/tmp.uRIhRV2bzC" 00:35:13.800 } 00:35:13.800 } 00:35:13.800 ] 00:35:13.800 }, 00:35:13.800 { 00:35:13.800 "subsystem": "iobuf", 00:35:13.800 "config": [ 00:35:13.800 { 00:35:13.800 "method": "iobuf_set_options", 00:35:13.800 "params": { 00:35:13.800 "small_pool_count": 8192, 00:35:13.800 "large_pool_count": 1024, 00:35:13.800 "small_bufsize": 8192, 00:35:13.800 "large_bufsize": 135168, 00:35:13.800 "enable_numa": false 00:35:13.800 } 00:35:13.800 } 00:35:13.800 ] 00:35:13.800 }, 00:35:13.800 { 00:35:13.800 "subsystem": "sock", 00:35:13.800 "config": [ 00:35:13.800 { 00:35:13.800 "method": "sock_set_default_impl", 00:35:13.800 "params": { 00:35:13.800 "impl_name": "posix" 00:35:13.800 } 00:35:13.800 }, 00:35:13.800 { 00:35:13.800 "method": "sock_impl_set_options", 00:35:13.800 "params": { 00:35:13.800 "impl_name": "ssl", 00:35:13.800 "recv_buf_size": 4096, 00:35:13.800 
"send_buf_size": 4096, 00:35:13.800 "enable_recv_pipe": true, 00:35:13.800 "enable_quickack": false, 00:35:13.800 "enable_placement_id": 0, 00:35:13.800 "enable_zerocopy_send_server": true, 00:35:13.800 "enable_zerocopy_send_client": false, 00:35:13.800 "zerocopy_threshold": 0, 00:35:13.800 "tls_version": 0, 00:35:13.800 "enable_ktls": false 00:35:13.800 } 00:35:13.800 }, 00:35:13.800 { 00:35:13.800 "method": "sock_impl_set_options", 00:35:13.800 "params": { 00:35:13.800 "impl_name": "posix", 00:35:13.800 "recv_buf_size": 2097152, 00:35:13.800 "send_buf_size": 2097152, 00:35:13.800 "enable_recv_pipe": true, 00:35:13.800 "enable_quickack": false, 00:35:13.800 "enable_placement_id": 0, 00:35:13.800 "enable_zerocopy_send_server": true, 00:35:13.800 "enable_zerocopy_send_client": false, 00:35:13.800 "zerocopy_threshold": 0, 00:35:13.800 "tls_version": 0, 00:35:13.800 "enable_ktls": false 00:35:13.800 } 00:35:13.800 } 00:35:13.800 ] 00:35:13.800 }, 00:35:13.800 { 00:35:13.800 "subsystem": "vmd", 00:35:13.800 "config": [] 00:35:13.800 }, 00:35:13.800 { 00:35:13.800 "subsystem": "accel", 00:35:13.800 "config": [ 00:35:13.800 { 00:35:13.800 "method": "accel_set_options", 00:35:13.800 "params": { 00:35:13.800 "small_cache_size": 128, 00:35:13.800 "large_cache_size": 16, 00:35:13.800 "task_count": 2048, 00:35:13.800 "sequence_count": 2048, 00:35:13.800 "buf_count": 2048 00:35:13.800 } 00:35:13.800 } 00:35:13.800 ] 00:35:13.800 }, 00:35:13.800 { 00:35:13.800 "subsystem": "bdev", 00:35:13.800 "config": [ 00:35:13.800 { 00:35:13.800 "method": "bdev_set_options", 00:35:13.800 "params": { 00:35:13.800 "bdev_io_pool_size": 65535, 00:35:13.800 "bdev_io_cache_size": 256, 00:35:13.800 "bdev_auto_examine": true, 00:35:13.800 "iobuf_small_cache_size": 128, 00:35:13.800 "iobuf_large_cache_size": 16 00:35:13.800 } 00:35:13.800 }, 00:35:13.800 { 00:35:13.800 "method": "bdev_raid_set_options", 00:35:13.800 "params": { 00:35:13.800 "process_window_size_kb": 1024, 00:35:13.800 
"process_max_bandwidth_mb_sec": 0 00:35:13.800 } 00:35:13.800 }, 00:35:13.800 { 00:35:13.800 "method": "bdev_iscsi_set_options", 00:35:13.800 "params": { 00:35:13.800 "timeout_sec": 30 00:35:13.800 } 00:35:13.800 }, 00:35:13.800 { 00:35:13.800 "method": "bdev_nvme_set_options", 00:35:13.800 "params": { 00:35:13.800 "action_on_timeout": "none", 00:35:13.800 "timeout_us": 0, 00:35:13.800 "timeout_admin_us": 0, 00:35:13.800 "keep_alive_timeout_ms": 10000, 00:35:13.800 "arbitration_burst": 0, 00:35:13.800 "low_priority_weight": 0, 00:35:13.800 "medium_priority_weight": 0, 00:35:13.800 "high_priority_weight": 0, 00:35:13.800 "nvme_adminq_poll_period_us": 10000, 00:35:13.800 "nvme_ioq_poll_period_us": 0, 00:35:13.800 "io_queue_requests": 512, 00:35:13.800 "delay_cmd_submit": true, 00:35:13.800 "transport_retry_count": 4, 00:35:13.800 "bdev_retry_count": 3, 00:35:13.800 "transport_ack_timeout": 0, 00:35:13.800 "ctrlr_loss_timeout_sec": 0, 00:35:13.800 "reconnect_delay_sec": 0, 00:35:13.800 "fast_io_fail_timeout_sec": 0, 00:35:13.800 "disable_auto_failback": false, 00:35:13.800 "generate_uuids": false, 00:35:13.800 "transport_tos": 0, 00:35:13.800 "nvme_error_stat": false, 00:35:13.800 "rdma_srq_size": 0, 00:35:13.800 "io_path_stat": false, 00:35:13.800 "allow_accel_sequence": false, 00:35:13.800 "rdma_max_cq_size": 0, 00:35:13.800 "rdma_cm_event_timeout_ms": 0, 00:35:13.800 "dhchap_digests": [ 00:35:13.800 "sha256", 00:35:13.800 "sha384", 00:35:13.800 "sha512" 00:35:13.800 ], 00:35:13.800 "dhchap_dhgroups": [ 00:35:13.800 "null", 00:35:13.800 "ffdhe2048", 00:35:13.800 "ffdhe3072", 00:35:13.800 "ffdhe4096", 00:35:13.801 "ffdhe6144", 00:35:13.801 "ffdhe8192" 00:35:13.801 ] 00:35:13.801 } 00:35:13.801 }, 00:35:13.801 { 00:35:13.801 "method": "bdev_nvme_attach_controller", 00:35:13.801 "params": { 00:35:13.801 "name": "nvme0", 00:35:13.801 "trtype": "TCP", 00:35:13.801 "adrfam": "IPv4", 00:35:13.801 "traddr": "127.0.0.1", 00:35:13.801 "trsvcid": "4420", 00:35:13.801 "subnqn": 
"nqn.2016-06.io.spdk:cnode0", 00:35:13.801 "prchk_reftag": false, 00:35:13.801 "prchk_guard": false, 00:35:13.801 "ctrlr_loss_timeout_sec": 0, 00:35:13.801 "reconnect_delay_sec": 0, 00:35:13.801 "fast_io_fail_timeout_sec": 0, 00:35:13.801 "psk": "key0", 00:35:13.801 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:13.801 "hdgst": false, 00:35:13.801 "ddgst": false, 00:35:13.801 "multipath": "multipath" 00:35:13.801 } 00:35:13.801 }, 00:35:13.801 { 00:35:13.801 "method": "bdev_nvme_set_hotplug", 00:35:13.801 "params": { 00:35:13.801 "period_us": 100000, 00:35:13.801 "enable": false 00:35:13.801 } 00:35:13.801 }, 00:35:13.801 { 00:35:13.801 "method": "bdev_wait_for_examine" 00:35:13.801 } 00:35:13.801 ] 00:35:13.801 }, 00:35:13.801 { 00:35:13.801 "subsystem": "nbd", 00:35:13.801 "config": [] 00:35:13.801 } 00:35:13.801 ] 00:35:13.801 }' 00:35:13.801 07:31:18 keyring_file -- common/autotest_common.sh@842 -- # xtrace_disable 00:35:13.801 07:31:18 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:13.801 [2024-11-20 07:31:18.302825] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 
00:35:13.801 [2024-11-20 07:31:18.302876] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1472723 ] 00:35:14.060 [2024-11-20 07:31:18.376493] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:14.060 [2024-11-20 07:31:18.414859] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:14.060 [2024-11-20 07:31:18.578189] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:35:14.628 07:31:19 keyring_file -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:35:14.628 07:31:19 keyring_file -- common/autotest_common.sh@866 -- # return 0 00:35:14.628 07:31:19 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:35:14.628 07:31:19 keyring_file -- keyring/file.sh@121 -- # jq length 00:35:14.628 07:31:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:14.886 07:31:19 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:35:14.886 07:31:19 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:35:14.886 07:31:19 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:14.886 07:31:19 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:14.886 07:31:19 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:14.886 07:31:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:14.886 07:31:19 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:15.145 07:31:19 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:35:15.145 07:31:19 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:35:15.145 07:31:19 
keyring_file -- keyring/common.sh@12 -- # get_key key1 00:35:15.145 07:31:19 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:15.145 07:31:19 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:15.145 07:31:19 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:15.145 07:31:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:15.404 07:31:19 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:35:15.404 07:31:19 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:35:15.404 07:31:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:35:15.404 07:31:19 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:35:15.404 07:31:19 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:35:15.404 07:31:19 keyring_file -- keyring/file.sh@1 -- # cleanup 00:35:15.404 07:31:19 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.E2ulCDUJlr /tmp/tmp.uRIhRV2bzC 00:35:15.404 07:31:19 keyring_file -- keyring/file.sh@20 -- # killprocess 1472723 00:35:15.404 07:31:19 keyring_file -- common/autotest_common.sh@952 -- # '[' -z 1472723 ']' 00:35:15.404 07:31:19 keyring_file -- common/autotest_common.sh@956 -- # kill -0 1472723 00:35:15.404 07:31:19 keyring_file -- common/autotest_common.sh@957 -- # uname 00:35:15.404 07:31:19 keyring_file -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:35:15.404 07:31:19 keyring_file -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1472723 00:35:15.663 07:31:19 keyring_file -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:35:15.663 07:31:19 keyring_file -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:35:15.663 07:31:19 keyring_file -- common/autotest_common.sh@970 -- # echo 
'killing process with pid 1472723' 00:35:15.663 killing process with pid 1472723 00:35:15.663 07:31:19 keyring_file -- common/autotest_common.sh@971 -- # kill 1472723 00:35:15.663 Received shutdown signal, test time was about 1.000000 seconds 00:35:15.663 00:35:15.663 Latency(us) 00:35:15.663 [2024-11-20T06:31:20.219Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:15.663 [2024-11-20T06:31:20.219Z] =================================================================================================================== 00:35:15.663 [2024-11-20T06:31:20.219Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:35:15.663 07:31:19 keyring_file -- common/autotest_common.sh@976 -- # wait 1472723 00:35:15.663 07:31:20 keyring_file -- keyring/file.sh@21 -- # killprocess 1471197 00:35:15.663 07:31:20 keyring_file -- common/autotest_common.sh@952 -- # '[' -z 1471197 ']' 00:35:15.663 07:31:20 keyring_file -- common/autotest_common.sh@956 -- # kill -0 1471197 00:35:15.663 07:31:20 keyring_file -- common/autotest_common.sh@957 -- # uname 00:35:15.663 07:31:20 keyring_file -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:35:15.663 07:31:20 keyring_file -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1471197 00:35:15.663 07:31:20 keyring_file -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:35:15.663 07:31:20 keyring_file -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:35:15.663 07:31:20 keyring_file -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1471197' 00:35:15.663 killing process with pid 1471197 00:35:15.663 07:31:20 keyring_file -- common/autotest_common.sh@971 -- # kill 1471197 00:35:15.663 07:31:20 keyring_file -- common/autotest_common.sh@976 -- # wait 1471197 00:35:16.231 00:35:16.231 real 0m11.853s 00:35:16.231 user 0m29.493s 00:35:16.231 sys 0m2.668s 00:35:16.231 07:31:20 keyring_file -- common/autotest_common.sh@1128 -- # xtrace_disable 
00:35:16.231 07:31:20 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:16.231 ************************************ 00:35:16.231 END TEST keyring_file 00:35:16.231 ************************************ 00:35:16.231 07:31:20 -- spdk/autotest.sh@289 -- # [[ y == y ]] 00:35:16.231 07:31:20 -- spdk/autotest.sh@290 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:35:16.231 07:31:20 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:35:16.231 07:31:20 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:35:16.231 07:31:20 -- common/autotest_common.sh@10 -- # set +x 00:35:16.231 ************************************ 00:35:16.231 START TEST keyring_linux 00:35:16.231 ************************************ 00:35:16.231 07:31:20 keyring_linux -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:35:16.231 Joined session keyring: 829246180 00:35:16.231 * Looking for test storage... 
00:35:16.231 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:35:16.231 07:31:20 keyring_linux -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:35:16.231 07:31:20 keyring_linux -- common/autotest_common.sh@1691 -- # lcov --version 00:35:16.231 07:31:20 keyring_linux -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:35:16.231 07:31:20 keyring_linux -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:35:16.231 07:31:20 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:16.231 07:31:20 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:16.231 07:31:20 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:16.231 07:31:20 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:35:16.231 07:31:20 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:35:16.231 07:31:20 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:35:16.231 07:31:20 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:35:16.231 07:31:20 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:35:16.231 07:31:20 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:35:16.231 07:31:20 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:35:16.231 07:31:20 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:16.231 07:31:20 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:35:16.231 07:31:20 keyring_linux -- scripts/common.sh@345 -- # : 1 00:35:16.231 07:31:20 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:16.231 07:31:20 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:16.231 07:31:20 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:35:16.231 07:31:20 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:35:16.231 07:31:20 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:16.231 07:31:20 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:35:16.231 07:31:20 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:35:16.231 07:31:20 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:35:16.231 07:31:20 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:35:16.231 07:31:20 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:16.231 07:31:20 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:35:16.231 07:31:20 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:35:16.231 07:31:20 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:16.232 07:31:20 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:16.232 07:31:20 keyring_linux -- scripts/common.sh@368 -- # return 0 00:35:16.232 07:31:20 keyring_linux -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:16.232 07:31:20 keyring_linux -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:35:16.232 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:16.232 --rc genhtml_branch_coverage=1 00:35:16.232 --rc genhtml_function_coverage=1 00:35:16.232 --rc genhtml_legend=1 00:35:16.232 --rc geninfo_all_blocks=1 00:35:16.232 --rc geninfo_unexecuted_blocks=1 00:35:16.232 00:35:16.232 ' 00:35:16.232 07:31:20 keyring_linux -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:35:16.232 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:16.232 --rc genhtml_branch_coverage=1 00:35:16.232 --rc genhtml_function_coverage=1 00:35:16.232 --rc genhtml_legend=1 00:35:16.232 --rc geninfo_all_blocks=1 00:35:16.232 --rc geninfo_unexecuted_blocks=1 00:35:16.232 00:35:16.232 ' 
00:35:16.232 07:31:20 keyring_linux -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:35:16.232 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:16.232 --rc genhtml_branch_coverage=1 00:35:16.232 --rc genhtml_function_coverage=1 00:35:16.232 --rc genhtml_legend=1 00:35:16.232 --rc geninfo_all_blocks=1 00:35:16.232 --rc geninfo_unexecuted_blocks=1 00:35:16.232 00:35:16.232 ' 00:35:16.232 07:31:20 keyring_linux -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:35:16.232 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:16.232 --rc genhtml_branch_coverage=1 00:35:16.232 --rc genhtml_function_coverage=1 00:35:16.232 --rc genhtml_legend=1 00:35:16.232 --rc geninfo_all_blocks=1 00:35:16.232 --rc geninfo_unexecuted_blocks=1 00:35:16.232 00:35:16.232 ' 00:35:16.232 07:31:20 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:35:16.232 07:31:20 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:16.232 07:31:20 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:35:16.232 07:31:20 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:16.232 07:31:20 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:16.232 07:31:20 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:16.232 07:31:20 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:16.232 07:31:20 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:16.232 07:31:20 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:16.232 07:31:20 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:16.232 07:31:20 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:16.232 07:31:20 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:16.232 07:31:20 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 
00:35:16.491 07:31:20 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:35:16.491 07:31:20 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:35:16.491 07:31:20 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:16.491 07:31:20 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:16.491 07:31:20 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:16.491 07:31:20 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:16.491 07:31:20 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:16.491 07:31:20 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:35:16.491 07:31:20 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:16.491 07:31:20 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:16.491 07:31:20 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:16.491 07:31:20 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:16.491 07:31:20 keyring_linux -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:16.491 07:31:20 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:16.491 07:31:20 keyring_linux -- paths/export.sh@5 -- # export PATH 00:35:16.491 07:31:20 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:16.491 07:31:20 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:35:16.491 07:31:20 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:16.491 07:31:20 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:16.491 07:31:20 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:16.491 07:31:20 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:16.491 07:31:20 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:16.491 07:31:20 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 
00:35:16.491 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:16.491 07:31:20 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:16.491 07:31:20 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:16.491 07:31:20 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:16.491 07:31:20 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:35:16.491 07:31:20 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:35:16.491 07:31:20 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:35:16.491 07:31:20 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:35:16.491 07:31:20 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:35:16.491 07:31:20 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:35:16.492 07:31:20 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:35:16.492 07:31:20 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:35:16.492 07:31:20 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:35:16.492 07:31:20 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:35:16.492 07:31:20 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:35:16.492 07:31:20 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:35:16.492 07:31:20 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:35:16.492 07:31:20 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:35:16.492 07:31:20 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:35:16.492 07:31:20 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:35:16.492 07:31:20 keyring_linux -- nvmf/common.sh@732 -- # 
key=00112233445566778899aabbccddeeff 00:35:16.492 07:31:20 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:35:16.492 07:31:20 keyring_linux -- nvmf/common.sh@733 -- # python - 00:35:16.492 07:31:20 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:35:16.492 07:31:20 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:35:16.492 /tmp/:spdk-test:key0 00:35:16.492 07:31:20 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:35:16.492 07:31:20 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:35:16.492 07:31:20 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:35:16.492 07:31:20 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:35:16.492 07:31:20 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:35:16.492 07:31:20 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:35:16.492 07:31:20 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:35:16.492 07:31:20 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:35:16.492 07:31:20 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:35:16.492 07:31:20 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:35:16.492 07:31:20 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:35:16.492 07:31:20 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:35:16.492 07:31:20 keyring_linux -- nvmf/common.sh@733 -- # python - 00:35:16.492 07:31:20 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:35:16.492 07:31:20 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:35:16.492 /tmp/:spdk-test:key1 00:35:16.492 07:31:20 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:35:16.492 
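The `format_interchange_psk`/`format_key` steps traced above (ending in the inline `python -` call) wrap each raw secret in the NVMe TLS PSK interchange format, `NVMeTLSkey-1:<hash>:<base64 payload>:`, which is the string later stored with `keyctl add user`. A minimal sketch of that encoding is below; the base64 payload visibly decodes to the ASCII secret plus four extra bytes, and appending the secret's CRC32 little-endian is my reading of those bytes, so treat that detail as an assumption rather than a statement of SPDK's exact implementation:

```shell
# Build an NVMe TLS PSK in interchange format from a raw secret.
# Assumed layout: "NVMeTLSkey-1:" + 2-hex-digit hash id (00 = none) +
# base64( secret bytes || CRC32(secret) little-endian ) + ":".
format_interchange_psk() {
    local key=$1 digest=$2
    python3 - "$key" "$digest" <<'EOF'
import base64, struct, sys, zlib

key = sys.argv[1].encode()                  # test keys are ASCII hex strings
crc = struct.pack("<I", zlib.crc32(key))    # assumption: CRC32 appended LE
print(f"NVMeTLSkey-1:{int(sys.argv[2]):02x}:"
      f"{base64.b64encode(key + crc).decode()}:")
EOF
}

format_interchange_psk 00112233445566778899aabbccddeeff 0
```

With `digest=0` the second field is `00`, matching the `NVMeTLSkey-1:00:MDAx...` keys visible in the trace; a nonzero digest id would select a hashed-PSK variant instead.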
07:31:20 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=1473280 00:35:16.492 07:31:20 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 1473280 00:35:16.492 07:31:20 keyring_linux -- common/autotest_common.sh@833 -- # '[' -z 1473280 ']' 00:35:16.492 07:31:20 keyring_linux -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:16.492 07:31:20 keyring_linux -- common/autotest_common.sh@838 -- # local max_retries=100 00:35:16.492 07:31:20 keyring_linux -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:16.492 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:16.492 07:31:20 keyring_linux -- common/autotest_common.sh@842 -- # xtrace_disable 00:35:16.492 07:31:20 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:35:16.492 [2024-11-20 07:31:20.925149] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 
00:35:16.492 [2024-11-20 07:31:20.925197] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1473280 ] 00:35:16.492 [2024-11-20 07:31:20.997401] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:16.751 [2024-11-20 07:31:21.040581] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:16.751 07:31:21 keyring_linux -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:35:16.751 07:31:21 keyring_linux -- common/autotest_common.sh@866 -- # return 0 00:35:16.751 07:31:21 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:35:16.751 07:31:21 keyring_linux -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:16.751 07:31:21 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:35:16.751 [2024-11-20 07:31:21.259810] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:16.751 null0 00:35:16.751 [2024-11-20 07:31:21.291867] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:35:16.751 [2024-11-20 07:31:21.292237] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:35:17.009 07:31:21 keyring_linux -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:17.009 07:31:21 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:35:17.009 636151138 00:35:17.009 07:31:21 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:35:17.009 356479062 00:35:17.009 07:31:21 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=1473291 00:35:17.009 07:31:21 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 1473291 /var/tmp/bperf.sock 00:35:17.009 07:31:21 keyring_linux -- 
keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:35:17.009 07:31:21 keyring_linux -- common/autotest_common.sh@833 -- # '[' -z 1473291 ']' 00:35:17.009 07:31:21 keyring_linux -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:17.009 07:31:21 keyring_linux -- common/autotest_common.sh@838 -- # local max_retries=100 00:35:17.009 07:31:21 keyring_linux -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:17.009 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:17.009 07:31:21 keyring_linux -- common/autotest_common.sh@842 -- # xtrace_disable 00:35:17.009 07:31:21 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:35:17.009 [2024-11-20 07:31:21.365170] Starting SPDK v25.01-pre git sha1 6745f139b / DPDK 24.03.0 initialization... 
00:35:17.009 [2024-11-20 07:31:21.365217] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1473291 ] 00:35:17.009 [2024-11-20 07:31:21.440489] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:17.009 [2024-11-20 07:31:21.484114] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:17.009 07:31:21 keyring_linux -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:35:17.009 07:31:21 keyring_linux -- common/autotest_common.sh@866 -- # return 0 00:35:17.009 07:31:21 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:35:17.009 07:31:21 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:35:17.268 07:31:21 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:35:17.268 07:31:21 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:35:17.527 07:31:21 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:35:17.527 07:31:21 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:35:17.786 [2024-11-20 07:31:22.121500] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:35:17.786 nvme0n1 00:35:17.786 07:31:22 keyring_linux -- keyring/linux.sh@77 
-- # check_keys 1 :spdk-test:key0 00:35:17.786 07:31:22 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:35:17.786 07:31:22 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:35:17.786 07:31:22 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:35:17.786 07:31:22 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:35:17.786 07:31:22 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:18.045 07:31:22 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:35:18.045 07:31:22 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:35:18.045 07:31:22 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:35:18.045 07:31:22 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:35:18.045 07:31:22 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:18.045 07:31:22 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:35:18.045 07:31:22 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:18.305 07:31:22 keyring_linux -- keyring/linux.sh@25 -- # sn=636151138 00:35:18.305 07:31:22 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:35:18.305 07:31:22 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:35:18.305 07:31:22 keyring_linux -- keyring/linux.sh@26 -- # [[ 636151138 == \6\3\6\1\5\1\1\3\8 ]] 00:35:18.305 07:31:22 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 636151138 00:35:18.305 07:31:22 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:35:18.305 07:31:22 keyring_linux 
-- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:18.305 Running I/O for 1 seconds... 00:35:19.242 21227.00 IOPS, 82.92 MiB/s 00:35:19.242 Latency(us) 00:35:19.242 [2024-11-20T06:31:23.798Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:19.242 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:35:19.242 nvme0n1 : 1.01 21226.90 82.92 0.00 0.00 6009.96 1951.83 7351.43 00:35:19.242 [2024-11-20T06:31:23.798Z] =================================================================================================================== 00:35:19.242 [2024-11-20T06:31:23.798Z] Total : 21226.90 82.92 0.00 0.00 6009.96 1951.83 7351.43 00:35:19.242 { 00:35:19.242 "results": [ 00:35:19.242 { 00:35:19.242 "job": "nvme0n1", 00:35:19.242 "core_mask": "0x2", 00:35:19.242 "workload": "randread", 00:35:19.242 "status": "finished", 00:35:19.242 "queue_depth": 128, 00:35:19.242 "io_size": 4096, 00:35:19.242 "runtime": 1.006035, 00:35:19.242 "iops": 21226.89568454378, 00:35:19.242 "mibps": 82.91756126774914, 00:35:19.242 "io_failed": 0, 00:35:19.242 "io_timeout": 0, 00:35:19.242 "avg_latency_us": 6009.960023047244, 00:35:19.242 "min_latency_us": 1951.8330434782608, 00:35:19.242 "max_latency_us": 7351.429565217391 00:35:19.242 } 00:35:19.242 ], 00:35:19.242 "core_count": 1 00:35:19.242 } 00:35:19.242 07:31:23 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:35:19.242 07:31:23 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:35:19.501 07:31:23 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:35:19.501 07:31:23 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:35:19.501 07:31:23 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:35:19.501 07:31:23 
keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:35:19.501 07:31:23 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:35:19.501 07:31:23 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:19.760 07:31:24 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:35:19.760 07:31:24 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:35:19.760 07:31:24 keyring_linux -- keyring/linux.sh@23 -- # return 00:35:19.760 07:31:24 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:35:19.760 07:31:24 keyring_linux -- common/autotest_common.sh@650 -- # local es=0 00:35:19.760 07:31:24 keyring_linux -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:35:19.760 07:31:24 keyring_linux -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:35:19.760 07:31:24 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:19.760 07:31:24 keyring_linux -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:35:19.760 07:31:24 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:19.760 07:31:24 keyring_linux -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:35:19.760 07:31:24 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 
-q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:35:20.020 [2024-11-20 07:31:24.332831] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:35:20.020 [2024-11-20 07:31:24.333555] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15cbf60 (107): Transport endpoint is not connected 00:35:20.020 [2024-11-20 07:31:24.334550] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15cbf60 (9): Bad file descriptor 00:35:20.020 [2024-11-20 07:31:24.335551] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:35:20.020 [2024-11-20 07:31:24.335562] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:35:20.020 [2024-11-20 07:31:24.335570] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:35:20.020 [2024-11-20 07:31:24.335580] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
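The JSON-RPC error response that follows pairs `"code": -5` with `"message": "Input/output error"`. A small sketch of the apparent convention (an assumption from this one response, not a documented guarantee): the error code is a negated POSIX errno, here EIO.

```python
import errno
import os

# Error object as it appears in the JSON-RPC response below.
rpc_error = {"code": -5, "message": "Input/output error"}

num = -rpc_error["code"]          # negate to recover the errno value
assert errno.errorcode[num] == "EIO"
print(os.strerror(num))           # "Input/output error" on Linux
```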
00:35:20.020 request: 00:35:20.020 { 00:35:20.020 "name": "nvme0", 00:35:20.020 "trtype": "tcp", 00:35:20.020 "traddr": "127.0.0.1", 00:35:20.020 "adrfam": "ipv4", 00:35:20.020 "trsvcid": "4420", 00:35:20.020 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:20.020 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:20.020 "prchk_reftag": false, 00:35:20.020 "prchk_guard": false, 00:35:20.020 "hdgst": false, 00:35:20.020 "ddgst": false, 00:35:20.020 "psk": ":spdk-test:key1", 00:35:20.020 "allow_unrecognized_csi": false, 00:35:20.020 "method": "bdev_nvme_attach_controller", 00:35:20.020 "req_id": 1 00:35:20.020 } 00:35:20.020 Got JSON-RPC error response 00:35:20.020 response: 00:35:20.020 { 00:35:20.020 "code": -5, 00:35:20.020 "message": "Input/output error" 00:35:20.020 } 00:35:20.020 07:31:24 keyring_linux -- common/autotest_common.sh@653 -- # es=1 00:35:20.020 07:31:24 keyring_linux -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:35:20.020 07:31:24 keyring_linux -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:35:20.020 07:31:24 keyring_linux -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:35:20.020 07:31:24 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:35:20.020 07:31:24 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:35:20.020 07:31:24 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:35:20.020 07:31:24 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:35:20.020 07:31:24 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:35:20.020 07:31:24 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:35:20.020 07:31:24 keyring_linux -- keyring/linux.sh@33 -- # sn=636151138 00:35:20.020 07:31:24 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 636151138 00:35:20.020 1 links removed 00:35:20.020 07:31:24 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:35:20.020 07:31:24 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:35:20.020 
07:31:24 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:35:20.020 07:31:24 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:35:20.020 07:31:24 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:35:20.020 07:31:24 keyring_linux -- keyring/linux.sh@33 -- # sn=356479062 00:35:20.020 07:31:24 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 356479062 00:35:20.020 1 links removed 00:35:20.020 07:31:24 keyring_linux -- keyring/linux.sh@41 -- # killprocess 1473291 00:35:20.020 07:31:24 keyring_linux -- common/autotest_common.sh@952 -- # '[' -z 1473291 ']' 00:35:20.020 07:31:24 keyring_linux -- common/autotest_common.sh@956 -- # kill -0 1473291 00:35:20.020 07:31:24 keyring_linux -- common/autotest_common.sh@957 -- # uname 00:35:20.020 07:31:24 keyring_linux -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:35:20.020 07:31:24 keyring_linux -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1473291 00:35:20.020 07:31:24 keyring_linux -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:35:20.020 07:31:24 keyring_linux -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:35:20.020 07:31:24 keyring_linux -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1473291' 00:35:20.020 killing process with pid 1473291 00:35:20.020 07:31:24 keyring_linux -- common/autotest_common.sh@971 -- # kill 1473291 00:35:20.020 Received shutdown signal, test time was about 1.000000 seconds 00:35:20.020 00:35:20.021 Latency(us) 00:35:20.021 [2024-11-20T06:31:24.577Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:20.021 [2024-11-20T06:31:24.577Z] =================================================================================================================== 00:35:20.021 [2024-11-20T06:31:24.577Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:20.021 07:31:24 keyring_linux -- common/autotest_common.sh@976 -- # wait 1473291 
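The bdevperf result tables above report both IOPS and MiB/s; the second column is derivable from the first given the test's 4 KiB I/O size (`-o 4k`). A quick check against the logged values:

```python
def mibps(iops: float, io_size_bytes: int) -> float:
    # Throughput in MiB/s = IOPS * bytes per I/O / bytes per MiB.
    return iops * io_size_bytes / (1024 * 1024)


# Values from the results JSON: 21226.90 IOPS at 4096-byte reads.
print(round(mibps(21226.90, 4096), 2))  # 82.92, matching the "mibps" field
```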
00:35:20.280 07:31:24 keyring_linux -- keyring/linux.sh@42 -- # killprocess 1473280 00:35:20.280 07:31:24 keyring_linux -- common/autotest_common.sh@952 -- # '[' -z 1473280 ']' 00:35:20.280 07:31:24 keyring_linux -- common/autotest_common.sh@956 -- # kill -0 1473280 00:35:20.280 07:31:24 keyring_linux -- common/autotest_common.sh@957 -- # uname 00:35:20.280 07:31:24 keyring_linux -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:35:20.280 07:31:24 keyring_linux -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1473280 00:35:20.280 07:31:24 keyring_linux -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:35:20.280 07:31:24 keyring_linux -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:35:20.280 07:31:24 keyring_linux -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1473280' 00:35:20.280 killing process with pid 1473280 00:35:20.280 07:31:24 keyring_linux -- common/autotest_common.sh@971 -- # kill 1473280 00:35:20.280 07:31:24 keyring_linux -- common/autotest_common.sh@976 -- # wait 1473280 00:35:20.539 00:35:20.539 real 0m4.346s 00:35:20.539 user 0m8.232s 00:35:20.539 sys 0m1.407s 00:35:20.539 07:31:24 keyring_linux -- common/autotest_common.sh@1128 -- # xtrace_disable 00:35:20.539 07:31:24 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:35:20.539 ************************************ 00:35:20.539 END TEST keyring_linux 00:35:20.539 ************************************ 00:35:20.539 07:31:24 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:35:20.539 07:31:24 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:35:20.539 07:31:24 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:35:20.539 07:31:24 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:35:20.539 07:31:24 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:35:20.539 07:31:24 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:35:20.539 07:31:24 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:35:20.540 07:31:24 -- spdk/autotest.sh@342 -- # 
'[' 0 -eq 1 ']' 00:35:20.540 07:31:24 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:35:20.540 07:31:24 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:35:20.540 07:31:24 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:35:20.540 07:31:24 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:35:20.540 07:31:24 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:35:20.540 07:31:24 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:35:20.540 07:31:24 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:35:20.540 07:31:24 -- spdk/autotest.sh@381 -- # trap - SIGINT SIGTERM EXIT 00:35:20.540 07:31:24 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:35:20.540 07:31:24 -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:20.540 07:31:24 -- common/autotest_common.sh@10 -- # set +x 00:35:20.540 07:31:24 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:35:20.540 07:31:24 -- common/autotest_common.sh@1394 -- # local autotest_es=0 00:35:20.540 07:31:24 -- common/autotest_common.sh@1395 -- # xtrace_disable 00:35:20.540 07:31:24 -- common/autotest_common.sh@10 -- # set +x 00:35:25.819 INFO: APP EXITING 00:35:25.819 INFO: killing all VMs 00:35:25.819 INFO: killing vhost app 00:35:25.819 INFO: EXIT DONE 00:35:28.356 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:35:28.356 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:35:28.356 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:35:28.356 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:35:28.356 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:35:28.356 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:35:28.356 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:35:28.356 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:35:28.357 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:35:28.357 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:35:28.357 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:35:28.357 
0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:35:28.357 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:35:28.357 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:35:28.357 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:35:28.357 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:35:28.357 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:35:31.649 Cleaning 00:35:31.649 Removing: /var/run/dpdk/spdk0/config 00:35:31.649 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:35:31.649 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:35:31.649 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:35:31.649 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:35:31.649 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:35:31.649 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:35:31.649 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:35:31.649 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:35:31.649 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:35:31.649 Removing: /var/run/dpdk/spdk0/hugepage_info 00:35:31.649 Removing: /var/run/dpdk/spdk1/config 00:35:31.649 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:35:31.649 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:35:31.649 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:35:31.649 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:35:31.649 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:35:31.649 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:35:31.649 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:35:31.649 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:35:31.649 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:35:31.649 Removing: /var/run/dpdk/spdk1/hugepage_info 00:35:31.649 Removing: /var/run/dpdk/spdk2/config 00:35:31.649 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:35:31.649 
Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:35:31.649 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:35:31.649 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:35:31.649 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:35:31.649 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:35:31.649 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:35:31.649 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:35:31.649 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:35:31.649 Removing: /var/run/dpdk/spdk2/hugepage_info 00:35:31.649 Removing: /var/run/dpdk/spdk3/config 00:35:31.649 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:35:31.649 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:35:31.649 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:35:31.649 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:35:31.649 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:35:31.649 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:35:31.649 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:35:31.649 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:35:31.649 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:35:31.649 Removing: /var/run/dpdk/spdk3/hugepage_info 00:35:31.649 Removing: /var/run/dpdk/spdk4/config 00:35:31.649 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:35:31.649 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:35:31.649 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:35:31.649 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:35:31.649 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:35:31.649 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:35:31.649 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:35:31.649 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:35:31.649 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:35:31.649 Removing: /var/run/dpdk/spdk4/hugepage_info 
00:35:31.649 Removing: /dev/shm/bdev_svc_trace.1 00:35:31.649 Removing: /dev/shm/nvmf_trace.0 00:35:31.649 Removing: /dev/shm/spdk_tgt_trace.pid993236 00:35:31.649 Removing: /var/run/dpdk/spdk0 00:35:31.649 Removing: /var/run/dpdk/spdk1 00:35:31.649 Removing: /var/run/dpdk/spdk2 00:35:31.649 Removing: /var/run/dpdk/spdk3 00:35:31.649 Removing: /var/run/dpdk/spdk4 00:35:31.649 Removing: /var/run/dpdk/spdk_pid1000224 00:35:31.649 Removing: /var/run/dpdk/spdk_pid1000630 00:35:31.649 Removing: /var/run/dpdk/spdk_pid1000893 00:35:31.649 Removing: /var/run/dpdk/spdk_pid1001145 00:35:31.649 Removing: /var/run/dpdk/spdk_pid1001397 00:35:31.649 Removing: /var/run/dpdk/spdk_pid1001682 00:35:31.649 Removing: /var/run/dpdk/spdk_pid1002411 00:35:31.649 Removing: /var/run/dpdk/spdk_pid1005413 00:35:31.649 Removing: /var/run/dpdk/spdk_pid1005673 00:35:31.649 Removing: /var/run/dpdk/spdk_pid1005880 00:35:31.649 Removing: /var/run/dpdk/spdk_pid1005931 00:35:31.649 Removing: /var/run/dpdk/spdk_pid1006425 00:35:31.649 Removing: /var/run/dpdk/spdk_pid1006444 00:35:31.649 Removing: /var/run/dpdk/spdk_pid1006932 00:35:31.649 Removing: /var/run/dpdk/spdk_pid1007042 00:35:31.649 Removing: /var/run/dpdk/spdk_pid1007418 00:35:31.649 Removing: /var/run/dpdk/spdk_pid1007426 00:35:31.649 Removing: /var/run/dpdk/spdk_pid1007688 00:35:31.649 Removing: /var/run/dpdk/spdk_pid1007698 00:35:31.649 Removing: /var/run/dpdk/spdk_pid1008263 00:35:31.649 Removing: /var/run/dpdk/spdk_pid1008519 00:35:31.649 Removing: /var/run/dpdk/spdk_pid1008815 00:35:31.649 Removing: /var/run/dpdk/spdk_pid1012561 00:35:31.649 Removing: /var/run/dpdk/spdk_pid1017017 00:35:31.649 Removing: /var/run/dpdk/spdk_pid1027282 00:35:31.649 Removing: /var/run/dpdk/spdk_pid1027778 00:35:31.650 Removing: /var/run/dpdk/spdk_pid1032046 00:35:31.650 Removing: /var/run/dpdk/spdk_pid1032505 00:35:31.650 Removing: /var/run/dpdk/spdk_pid1036780 00:35:31.650 Removing: /var/run/dpdk/spdk_pid1042668 00:35:31.650 Removing: 
/var/run/dpdk/spdk_pid1045784 00:35:31.650 Removing: /var/run/dpdk/spdk_pid1055989 00:35:31.650 Removing: /var/run/dpdk/spdk_pid1064944 00:35:31.650 Removing: /var/run/dpdk/spdk_pid1066770 00:35:31.650 Removing: /var/run/dpdk/spdk_pid1067693 00:35:31.650 Removing: /var/run/dpdk/spdk_pid1084784 00:35:31.650 Removing: /var/run/dpdk/spdk_pid1088683 00:35:31.650 Removing: /var/run/dpdk/spdk_pid1135073 00:35:31.650 Removing: /var/run/dpdk/spdk_pid1140251 00:35:31.650 Removing: /var/run/dpdk/spdk_pid1146435 00:35:31.650 Removing: /var/run/dpdk/spdk_pid1153018 00:35:31.650 Removing: /var/run/dpdk/spdk_pid1153026 00:35:31.650 Removing: /var/run/dpdk/spdk_pid1153939 00:35:31.650 Removing: /var/run/dpdk/spdk_pid1154851 00:35:31.650 Removing: /var/run/dpdk/spdk_pid1155606 00:35:31.650 Removing: /var/run/dpdk/spdk_pid1156231 00:35:31.650 Removing: /var/run/dpdk/spdk_pid1156239 00:35:31.650 Removing: /var/run/dpdk/spdk_pid1156472 00:35:31.650 Removing: /var/run/dpdk/spdk_pid1156682 00:35:31.650 Removing: /var/run/dpdk/spdk_pid1156705 00:35:31.650 Removing: /var/run/dpdk/spdk_pid1157598 00:35:31.650 Removing: /var/run/dpdk/spdk_pid1158320 00:35:31.650 Removing: /var/run/dpdk/spdk_pid1159233 00:35:31.650 Removing: /var/run/dpdk/spdk_pid1159881 00:35:31.650 Removing: /var/run/dpdk/spdk_pid1159917 00:35:31.650 Removing: /var/run/dpdk/spdk_pid1160153 00:35:31.650 Removing: /var/run/dpdk/spdk_pid1161178 00:35:31.650 Removing: /var/run/dpdk/spdk_pid1162162 00:35:31.650 Removing: /var/run/dpdk/spdk_pid1170458 00:35:31.650 Removing: /var/run/dpdk/spdk_pid1199825 00:35:31.650 Removing: /var/run/dpdk/spdk_pid1204334 00:35:31.650 Removing: /var/run/dpdk/spdk_pid1205933 00:35:31.650 Removing: /var/run/dpdk/spdk_pid1207767 00:35:31.650 Removing: /var/run/dpdk/spdk_pid1207937 00:35:31.650 Removing: /var/run/dpdk/spdk_pid1208016 00:35:31.650 Removing: /var/run/dpdk/spdk_pid1208251 00:35:31.650 Removing: /var/run/dpdk/spdk_pid1208756 00:35:31.650 Removing: /var/run/dpdk/spdk_pid1210582 
00:35:31.650 Removing: /var/run/dpdk/spdk_pid1211355
00:35:31.650 Removing: /var/run/dpdk/spdk_pid1211851
00:35:31.650 Removing: /var/run/dpdk/spdk_pid1213962
00:35:31.650 Removing: /var/run/dpdk/spdk_pid1214448
00:35:31.650 Removing: /var/run/dpdk/spdk_pid1215107
00:35:31.650 Removing: /var/run/dpdk/spdk_pid1219745
00:35:31.909 Removing: /var/run/dpdk/spdk_pid1225352
00:35:31.909 Removing: /var/run/dpdk/spdk_pid1225353
00:35:31.909 Removing: /var/run/dpdk/spdk_pid1225354
00:35:31.909 Removing: /var/run/dpdk/spdk_pid1229257
00:35:31.909 Removing: /var/run/dpdk/spdk_pid1237707
00:35:31.909 Removing: /var/run/dpdk/spdk_pid1241640
00:35:31.909 Removing: /var/run/dpdk/spdk_pid1247744
00:35:31.909 Removing: /var/run/dpdk/spdk_pid1249068
00:35:31.909 Removing: /var/run/dpdk/spdk_pid1250607
00:35:31.909 Removing: /var/run/dpdk/spdk_pid1251932
00:35:31.909 Removing: /var/run/dpdk/spdk_pid1256629
00:35:31.909 Removing: /var/run/dpdk/spdk_pid1260966
00:35:31.909 Removing: /var/run/dpdk/spdk_pid1264986
00:35:31.909 Removing: /var/run/dpdk/spdk_pid1272877
00:35:31.909 Removing: /var/run/dpdk/spdk_pid1272879
00:35:31.909 Removing: /var/run/dpdk/spdk_pid1277595
00:35:31.909 Removing: /var/run/dpdk/spdk_pid1277825
00:35:31.909 Removing: /var/run/dpdk/spdk_pid1278053
00:35:31.909 Removing: /var/run/dpdk/spdk_pid1278321
00:35:31.909 Removing: /var/run/dpdk/spdk_pid1278517
00:35:31.909 Removing: /var/run/dpdk/spdk_pid1283010
00:35:31.909 Removing: /var/run/dpdk/spdk_pid1283583
00:35:31.909 Removing: /var/run/dpdk/spdk_pid1287917
00:35:31.909 Removing: /var/run/dpdk/spdk_pid1290603
00:35:31.909 Removing: /var/run/dpdk/spdk_pid1295935
00:35:31.909 Removing: /var/run/dpdk/spdk_pid1301202
00:35:31.909 Removing: /var/run/dpdk/spdk_pid1309961
00:35:31.909 Removing: /var/run/dpdk/spdk_pid1317299
00:35:31.909 Removing: /var/run/dpdk/spdk_pid1317301
00:35:31.909 Removing: /var/run/dpdk/spdk_pid1336551
00:35:31.909 Removing: /var/run/dpdk/spdk_pid1337201
00:35:31.909 Removing: /var/run/dpdk/spdk_pid1337672
00:35:31.909 Removing: /var/run/dpdk/spdk_pid1338146
00:35:31.909 Removing: /var/run/dpdk/spdk_pid1338885
00:35:31.909 Removing: /var/run/dpdk/spdk_pid1339363
00:35:31.909 Removing: /var/run/dpdk/spdk_pid1340051
00:35:31.909 Removing: /var/run/dpdk/spdk_pid1340522
00:35:31.909 Removing: /var/run/dpdk/spdk_pid1344724
00:35:31.910 Removing: /var/run/dpdk/spdk_pid1345000
00:35:31.910 Removing: /var/run/dpdk/spdk_pid1350876
00:35:31.910 Removing: /var/run/dpdk/spdk_pid1351122
00:35:31.910 Removing: /var/run/dpdk/spdk_pid1356379
00:35:31.910 Removing: /var/run/dpdk/spdk_pid1360610
00:35:31.910 Removing: /var/run/dpdk/spdk_pid1370848
00:35:31.910 Removing: /var/run/dpdk/spdk_pid1371389
00:35:31.910 Removing: /var/run/dpdk/spdk_pid1375567
00:35:31.910 Removing: /var/run/dpdk/spdk_pid1375822
00:35:31.910 Removing: /var/run/dpdk/spdk_pid1380061
00:35:31.910 Removing: /var/run/dpdk/spdk_pid1385907
00:35:31.910 Removing: /var/run/dpdk/spdk_pid1388475
00:35:31.910 Removing: /var/run/dpdk/spdk_pid1398444
00:35:31.910 Removing: /var/run/dpdk/spdk_pid1407104
00:35:31.910 Removing: /var/run/dpdk/spdk_pid1408718
00:35:31.910 Removing: /var/run/dpdk/spdk_pid1409764
00:35:31.910 Removing: /var/run/dpdk/spdk_pid1426283
00:35:31.910 Removing: /var/run/dpdk/spdk_pid1430139
00:35:31.910 Removing: /var/run/dpdk/spdk_pid1432986
00:35:31.910 Removing: /var/run/dpdk/spdk_pid1440744
00:35:31.910 Removing: /var/run/dpdk/spdk_pid1440749
00:35:31.910 Removing: /var/run/dpdk/spdk_pid1445780
00:35:31.910 Removing: /var/run/dpdk/spdk_pid1447725
00:35:31.910 Removing: /var/run/dpdk/spdk_pid1449680
00:35:31.910 Removing: /var/run/dpdk/spdk_pid1450759
00:35:31.910 Removing: /var/run/dpdk/spdk_pid1452727
00:35:31.910 Removing: /var/run/dpdk/spdk_pid1454088
00:35:31.910 Removing: /var/run/dpdk/spdk_pid1463096
00:35:32.170 Removing: /var/run/dpdk/spdk_pid1463706
00:35:32.170 Removing: /var/run/dpdk/spdk_pid1464181
00:35:32.170 Removing: /var/run/dpdk/spdk_pid1466450
00:35:32.170 Removing: /var/run/dpdk/spdk_pid1466915
00:35:32.170 Removing: /var/run/dpdk/spdk_pid1467380
00:35:32.170 Removing: /var/run/dpdk/spdk_pid1471197
00:35:32.170 Removing: /var/run/dpdk/spdk_pid1471206
00:35:32.170 Removing: /var/run/dpdk/spdk_pid1472723
00:35:32.170 Removing: /var/run/dpdk/spdk_pid1473280
00:35:32.170 Removing: /var/run/dpdk/spdk_pid1473291
00:35:32.170 Removing: /var/run/dpdk/spdk_pid991081
00:35:32.170 Removing: /var/run/dpdk/spdk_pid992151
00:35:32.170 Removing: /var/run/dpdk/spdk_pid993236
00:35:32.170 Removing: /var/run/dpdk/spdk_pid993873
00:35:32.170 Removing: /var/run/dpdk/spdk_pid994825
00:35:32.170 Removing: /var/run/dpdk/spdk_pid995054
00:35:32.170 Removing: /var/run/dpdk/spdk_pid996031
00:35:32.170 Removing: /var/run/dpdk/spdk_pid996040
00:35:32.170 Removing: /var/run/dpdk/spdk_pid996392
00:35:32.170 Removing: /var/run/dpdk/spdk_pid997914
00:35:32.170 Removing: /var/run/dpdk/spdk_pid999452
00:35:32.170 Removing: /var/run/dpdk/spdk_pid999830
00:35:32.170 Clean
00:35:32.170 07:31:36 -- common/autotest_common.sh@1451 -- # return 0
00:35:32.170 07:31:36 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup
00:35:32.170 07:31:36 -- common/autotest_common.sh@730 -- # xtrace_disable
00:35:32.170 07:31:36 -- common/autotest_common.sh@10 -- # set +x
00:35:32.170 07:31:36 -- spdk/autotest.sh@387 -- # timing_exit autotest
00:35:32.170 07:31:36 -- common/autotest_common.sh@730 -- # xtrace_disable
00:35:32.170 07:31:36 -- common/autotest_common.sh@10 -- # set +x
00:35:32.170 07:31:36 -- spdk/autotest.sh@388 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:35:32.170 07:31:36 -- spdk/autotest.sh@390 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]]
00:35:32.170 07:31:36 -- spdk/autotest.sh@390 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log
00:35:32.170 07:31:36 -- spdk/autotest.sh@392 -- # [[ y == y ]]
00:35:32.170 07:31:36 -- spdk/autotest.sh@394 -- # hostname
00:35:32.170 07:31:36 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-wfp-08 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info
00:35:32.429 geninfo: WARNING: invalid characters removed from testname!
00:35:54.373 07:31:57 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:35:55.751 07:32:00 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:35:57.658 07:32:02 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:35:59.563 07:32:03 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:36:01.469 07:32:05 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:36:03.375 07:32:07 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:36:05.283 07:32:09 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:36:05.283 07:32:09 -- spdk/autorun.sh@1 -- $ timing_finish
00:36:05.283 07:32:09 -- common/autotest_common.sh@736 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt ]]
00:36:05.283 07:32:09 -- common/autotest_common.sh@738 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:36:05.283 07:32:09 -- common/autotest_common.sh@739 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:36:05.283 07:32:09 -- common/autotest_common.sh@742 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:36:05.283 + [[ -n 914043 ]]
00:36:05.283 + sudo kill 914043
00:36:05.294 [Pipeline] }
00:36:05.311 [Pipeline] // stage
00:36:05.316 [Pipeline] }
00:36:05.331 [Pipeline] // timeout
00:36:05.336 [Pipeline] }
00:36:05.350 [Pipeline] // catchError
00:36:05.355 [Pipeline] }
00:36:05.371 [Pipeline] // wrap
00:36:05.379 [Pipeline] }
00:36:05.393 [Pipeline] // catchError
00:36:05.402 [Pipeline] stage
00:36:05.404 [Pipeline] { (Epilogue)
00:36:05.417 [Pipeline] catchError
00:36:05.418 [Pipeline] {
00:36:05.432 [Pipeline] echo
00:36:05.434 Cleanup processes
00:36:05.439 [Pipeline] sh
00:36:05.724 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:36:05.724 1483954 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:36:05.739 [Pipeline] sh
00:36:06.025 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:36:06.025 ++ grep -v 'sudo pgrep'
00:36:06.025 ++ awk '{print $1}'
00:36:06.025 + sudo kill -9
00:36:06.025 + true
00:36:06.095 [Pipeline] sh
00:36:06.511 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:36:18.740 [Pipeline] sh
00:36:19.026 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:36:19.026 Artifacts sizes are good
00:36:19.042 [Pipeline] archiveArtifacts
00:36:19.051 Archiving artifacts
00:36:19.183 [Pipeline] sh
00:36:19.469 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:36:19.484 [Pipeline] cleanWs
00:36:19.494 [WS-CLEANUP] Deleting project workspace...
00:36:19.494 [WS-CLEANUP] Deferred wipeout is used...
00:36:19.501 [WS-CLEANUP] done
00:36:19.504 [Pipeline] }
00:36:19.520 [Pipeline] // catchError
00:36:19.531 [Pipeline] sh
00:36:19.845 + logger -p user.info -t JENKINS-CI
00:36:19.855 [Pipeline] }
00:36:19.869 [Pipeline] // stage
00:36:19.876 [Pipeline] }
00:36:19.891 [Pipeline] // node
00:36:19.897 [Pipeline] End of Pipeline
00:36:19.939 Finished: SUCCESS